2309.09243
Statistics of grain microstructure evolution under anisotropic grain boundary energies and mobilities using threshold-dynamics
This paper investigates the statistical behavior of two-dimensional grain microstructures during grain growth under anisotropic grain boundary characters. We employ the threshold-dynamics method, which allows for unparalleled computational speed, to simulate the full-field curvature motion of grain boundaries in a large polycrystal ensemble. Two sets of numerical experiments are performed to explore the effect of grain boundary anisotropy on the evolution of microstructure features. In the first experiment, we focus on abnormal grain growth and find that grain boundary anisotropy introduces a statistical preference for certain grain orientations. This leads to changes in the overall grain size distribution from the isotropic case. In the second experiment, we examine the texture development and growth of twin grain boundaries for different initial microstructures. We find that both phenomena are more pronounced when the initial microstructure has a dominant fraction of high-angle grain boundaries. Our results suggest effective grain boundary engineering strategies for improving material properties.
Jaekwang Kim, Nikhil Chandra Admal
2023-09-17T11:45:50Z
http://arxiv.org/abs/2309.09243v1
Statistics of grain microstructure evolution under anisotropic grain boundary energies and mobilities using threshold-dynamics

###### Abstract

This paper investigates the statistical behavior of two-dimensional grain microstructures during grain growth under anisotropic grain boundary characters. We employ the threshold-dynamics method, which allows for unparalleled computational speed, to simulate the full-field curvature motion of grain boundaries in a large polycrystal ensemble. Two sets of numerical experiments are performed to explore the effect of grain boundary anisotropy on the evolution of microstructure features. In the first experiment, we focus on abnormal grain growth and find that grain boundary anisotropy introduces a statistical preference for certain grain orientations. This leads to changes in the overall grain size distribution from the isotropic case. In the second experiment, we examine the texture development and growth of twin grain boundaries for different initial microstructures. We find that both phenomena are more pronounced when the initial microstructure has a dominant fraction of high-angle grain boundaries. Our results suggest effective grain boundary engineering strategies for improving material properties.

keywords: grain growth, motion by curvature, grain statistics, microstructure, threshold-dynamics, polycrystalline materials, grain texture

Footnote †: journal: Modelling and Simulation in Materials Science and Engineering

## 1 Introduction

The macroscopic properties of polycrystalline materials are strongly influenced by their grain microstructure, which is determined by the thermomechanical loads during materials processing. At the macroscale, a grain microstructure is typically described by the grain boundary (GB) character distribution. GB engineering refers to the strategy of enhancing the properties of a polycrystal by transforming its GB character distribution to a target distribution using thermomechanical processes [1]. Recently, the GB engineering paradigm has been extended to tailor the properties of nanocrystalline materials, which are promising next-generation structural materials with high strength, fatigue life, and wear resistance [2; 3; 4]. For example, subjecting nanocrystalline materials to thermomechanical cycling leads to a considerable increase in the fraction of \(\Sigma 3\) grain boundaries [5], which demonstrate high resistance to sliding, cavitation, and fracture [6]. Establishing the relationship between microstructure and material properties, as well as predicting the evolution of microstructure during various manufacturing processes, are ongoing research challenges in the field of GB engineering. In this regard, atomistic [7; 8; 9], mesoscopic [10; 11], and macroscopic continuum [12; 13; 14] models have been developed over the last few decades to uncover the connection between microstructures, properties, and process parameters. A defining characteristic of these models is the motion of GBs driven by surface tension to decrease the interfacial energy. When a polycrystalline material is annealed, grains grow to decrease the total energy of the system by reducing the GB area, leading to the motion of GBs toward their centers of curvature. The Mullins model [15] describes such a motion as \[v=-m\gamma\kappa, \tag{1.1}\] where \(v\), \(\kappa\), \(\gamma\), and \(m\) denote the signed velocity magnitude, curvature, misorientation-dependent energy density, and mobility of the GB, respectively. 
If \(m\) and \(\gamma\) are constants, the system is called _isotropic_. Grain growth in an isotropic system is characterized by a relatively narrow grain size distribution that obeys a simple scaling relation, regardless of the initial configuration of the grain microstructure [16; 17]. On the other hand, anisotropic grain growth occurs when \(m\) and \(\gamma\) depend on the GB character, defined by the five macroscopic degrees of freedom (dofs).1

Footnote 1: Here, three degrees represent a rotation associated with the misorientation between the two grains, and the remaining two degrees correspond to the inclination of the grain boundary.

Recent studies have reported that microstructure evolution is heavily influenced by the anisotropy of GBs [11; 18; 19; 20]. With the aid of accurate interatomic potentials, significant progress has been made in mapping the GB anisotropy by constructing grain boundary energies and mobilities as functions of the GB character [21; 22; 23; 24; 25]. However, precisely identifying the GB character distribution responsible for spontaneous microstructure transformation phenomena such as _abnormal grain growth_ (AGG) and _recrystallization_ remains a fundamental challenge in materials science. One of the main difficulties arises from the enormity of the microstructure space (relative to the space of processes and properties), which encompasses all possible configurations of grain assemblies. Due to the relatively small size of the property space, the structure-property relationship is necessarily a many-to-one mapping [26; 27]. Therefore, grain microstructures are commonly described using statistical distributions for grain sizes, topologies, and orientations, which makes it possible to express the structure-property relationship using reduced order models [28; 29; 30]. For example, the reduced order model developed by Kim and Admal [28] describes the evolution of distributions for grain sizes and topologies under idealized isotropic grain growth. Extending the model to anisotropic grain growth requires a careful investigation of the role of GB anisotropy on the evolution of the statistics of a microstructure, which is the main goal of this paper.

_Threshold-dynamics_ (TD) [31], a class of numerical methods to simulate GB motion, has proven to be highly efficient in simulating large ensembles of grains. In the TD framework, GBs are sharp interfaces that evolve according to motion by curvature, given in (1.1). Each grain is described by a level set function that is assigned a positive value within the grain and a negative value outside of it. This implies that the zero-valued isosurface of the function corresponds to the interface surrounding the grain. One advantage of the sharp interface framework is that it allows a relatively coarse grid representation of a GB compared to diffuse-interface models (including phase-field models), which require the grid to be refined enough to resolve the width of a GB. In addition, the high computational efficiency of the TD methods stems from the fast Fourier transform (FFT) algorithms used to implement the diffusion operator, which drives GB motion. Furthermore, the TD scheme was recently equipped with a variational structure for multi-phase settings [31]. This ensured the correct prediction of the dihedral angles between three anisotropic interfaces at triple junctions, which would necessitate considerably more computational effort to achieve using diffuse-interface models [11; 32; 33]. 
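To make the diffuse-then-threshold idea concrete, here is a minimal Python/NumPy sketch of the basic two-phase scheme of Merriman, Bence, and Osher revisited in Section 2: the characteristic function of a grain is diffused by an FFT-implemented Gaussian and then thresholded. The grid size, time step, and initial square shape are illustrative choices, not values from the paper:

```python
import numpy as np

def mbo_step(chi, dt):
    """One two-phase MBO step: diffuse the characteristic function for time dt,
    then threshold at 1/2 to recover a sharp interface."""
    n = chi.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n)           # angular wavenumbers on the unit square
    k2 = k[:, None] ** 2 + k[None, :] ** 2
    heat = np.exp(-dt * k2)                     # Fourier symbol of the Gaussian G_dt
    diffused = np.fft.ifft2(heat * np.fft.fft2(chi)).real
    return (diffused > 0.5).astype(float)       # the interface moves by ~ curvature * dt

# Illustrative use: a square relaxes and shrinks under motion by curvature.
n, dt = 256, 1e-4
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
chi = ((np.abs(X - 0.5) < 0.2) & (np.abs(Y - 0.5) < 0.2)).astype(float)
for _ in range(100):
    chi = mbo_step(chi, dt)
```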
The superior computational performance of the TD methods has enabled simulations of large collections of grains that adequately sample microstructure distributions. Martinez et al. [34] used the TD method to investigate the evolution of grain size and topology distributions in two dimensions and compared them to the predictions of experiments and the phase-field-crystal method.2 Peng et al. [36] examined the evolution of grain morphologies of individual grains of Ni in three dimensions and the overall grain size distribution, and compared them to experiments. However, their implementation of TD was limited to an isotropic system with constant GB energy and mobility. GB anisotropy was noted as a contributing factor in instances when disagreement with experiments was identified. On the other hand, Nino and Johnson [37] implemented the TD scheme for anisotropic systems using several GB energy functions, including the energy function of Bulatov et al. [38] that depends on all five dofs of a GB. While they observed distinct signatures in the morphological evolution of individual grains for different GB energy functions, GB statistics were reported to evolve identically in all the cases. However, it must be noted that Nino and Johnson [37] assumed a unit reduced mobility, meaning that the product of the mobility and surface tension is set to one. In this case, although the dihedral angles at triple junctions are dictated by GB energy anisotropy, the anisotropy of surface tension is canceled by that of mobility. Consequently, the resulting motion by curvature remains similar to the isotropic case. In our view, it is important to isolate GB mobility to appropriately estimate the role of GB energy anisotropy. Fortunately, this is made possible by a recent update to the algorithm by Salvador and Esedoglu [39], which allows the prescription of GB energies and mobilities independently. Using the latest TD algorithm, Salvador and Esedoglu [40] investigated grain statistics under a constant mobility and a Read–Shockley GB energy (RSE).3 It was observed that in two dimensions, the RSE also did not lead to significant deviation in the grain size distribution compared to the isotropic case. However, since the RSE is valid only for small misorientation angles, the full extent of the impact of GB energy anisotropy on grain growth remains unexplored.

Footnote 2: The phase field crystal method resolves the atomic positions and describes their evolution at a diffusive time scale. This technique involves a free energy functional that is minimized when the density field is periodic, thereby facilitating the formation of density field patterns in solid phases [35].

Footnote 3: A grain boundary energy is of the Read–Shockley form if it is a monotonically increasing concave function of the misorientation angle.

In this paper, we investigate the influence of GB anisotropy on the statistical evolution of grain microstructure and its steady state in two dimensions. Our first objective is to investigate how GB energy anisotropy affects AGG, a key feature of microstructure evolution that significantly impacts material properties. Secondly, we will investigate texture development and the growth of special boundaries in initial microstructure configurations with different fractions of low-angle grain boundaries (LAGBs) and high-angle grain boundaries (HAGBs). This question is particularly significant in GB engineering of nanocrystalline materials. 
For example, _Equal Channel Angular Pressing_ (ECAP) [41; 42], which is a popular method for synthesizing nanostructured solids, involves severe plastic deformation of coarse-grained materials leading to substantial grain refinement and a nanostructure [43]. While it has been reported that the number of ECAP cycles largely determines the fractions of HAGBs and LAGBs [44], it is still not clear how these different initial microstructures evolve during subsequent annealing treatment. Our study aims to shed light on this important question.

The manuscript is organized as follows. Section 2 provides a brief introduction to the TD method with anisotropic GB energy and mobility, and its implementation that ensures numerical stability. In Section 3, we present results of carefully designed numerical experiments to achieve the goals of this paper. We summarize and conclude in Section 4. Table 1 collects the abbreviations used in this paper.

\begin{table} \begin{tabular}{l l} **Abbreviation** & **Definition** \\ GB & grain boundary \\ LAGB & low-angle grain boundary \\ HAGB & high-angle grain boundary \\ TB & twin boundary \\ HAGB* & high-angle grain boundaries excluding twin boundaries \\ STGB & symmetric tilt grain boundary \\ RSE & Read–Shockley (grain boundary) energy \\ AGG & abnormal grain growth \\ TD & threshold-dynamics \\ PSP & process-(micro)structure-property \\ ECAP & equal channel angular pressing \\ \end{tabular} \end{table} Table 1: List of abbreviations

## 2 Method

The TD algorithm employs two highly efficient operations in an alternating manner to simulate motion by curvature. The first operation is a convolution of a radially symmetric kernel with a characteristic function, which is equal to 1 inside the interface and 0 outside. In the second operation, the resulting output is subjected to point-wise _thresholding_ to obtain an updated characteristic function. The TD method was originally introduced by Merriman, Bence, and Osher (MBO) [45] for a two-phase system. The key idea is that the level-set of a distance function, under the action of a diffusion operator, moves in the normal direction with a velocity equal to the mean curvature of the level-set surface. While many extensions to multi-phase systems were subsequently developed, the generalization of Esedoglu and Otto [31] demonstrates superior characteristics due to its variational structure. Moreover, recent developments to the TD method include enhanced accuracy [46], grain grouping techniques to save computational memory [47; 48], and incorporating anisotropic mobility [49]. In the following section, we summarize the TD method used in this paper.

### Background

Consider a two-dimensional polycrystal \(D=\cup_{j=1}^{N}\Sigma_{j}\) partitioned into \(N\) grains occupying regions \(\Sigma_{j}\). If \(\gamma_{ij}(=\gamma_{ji})\) denotes the interfacial energy density between grains \(\Sigma_{i}\) and \(\Sigma_{j}\), the total grain boundary energy of the polycrystal is given by \[E=\frac{1}{2}\sum_{i,j=1}^{N}\gamma_{ij}\text{Area}(\Gamma_{ij}), \tag{2.1}\] where \(\Gamma_{ij}\) denotes the boundary between two adjacent grains \(\Sigma_{i}\) and \(\Sigma_{j}\). If \(\gamma\) represents the matrix with \(\gamma_{ij}\) as its entries, then \(\gamma\) belongs to the following class of surface tension matrices \[\mathcal{S}_{N}=\{\gamma\in\mathbb{R}^{N\times N}:\gamma_{ii}=0\quad\text{and}\quad\gamma_{ij}=\gamma_{ji}>0\quad\text{for all distinct }i,j\}. \tag{2.2}\]
The steepest descent of the energy in (2.1) results in an anisotropic motion by curvature, \[v_{ij}=-\mu_{ij}\gamma_{ij}\kappa_{ij},\] where \(v_{ij}\), \(\kappa_{ij}\), and \(\mu_{ij}\) denote the velocity, mean curvature, and mobility of the interface \(\Gamma_{ij}\), respectively. The variational structure of the TD originates from the following non-local approximation of \(E\) \[E_{\delta t}(\Sigma_{1},...,\Sigma_{N})=\frac{1}{2\delta t}\sum_{i,j=1}^{N}\gamma_{ij}\int_{D}\mathbb{I}_{\Sigma_{i}}K_{\delta t}*\mathbb{I}_{\Sigma_{j}}\;dx, \tag{2.3}\] where \(K_{\delta t}:\mathbb{R}^{d}\to\mathbb{R}\) is a positive-valued convolution kernel with a characteristic width \(\delta t\), and \(\mathbb{I}_{\Sigma_{i}}\) is the characteristic function \[\mathbb{I}_{\Sigma_{i}}(\mathbf{x})=\begin{cases}1&\text{ if }\mathbf{x}\in\Sigma_{i},\\ 0&\text{ otherwise.}\end{cases}\] Esedoglu and Otto [31] showed that \(E_{\delta t}\to E\), in the sense of \(\Gamma\)-convergence, as \(\delta t\to 0\). \(E_{\delta t}\) may be viewed as a functional of \(N\) characteristic functions. More formally, \(E_{\delta t}\) is defined on the following set \(\mathcal{B}\) consisting of \(N\)-tuples of binary functions \[\mathcal{B}=\{(u_{1},\ldots,u_{N}): \text{for each }\mathbf{x}\in D\text{ there is an }i\text{ such that }u_{i}(\mathbf{x})=1\text{ and }u_{j}(\mathbf{x})=0\text{ for all }j\neq i\}. \tag{2.4}\] The construction of \(E_{\delta t}\) (2.3) relies on the interpretation [31] that the surface area of \(\Gamma_{ij}\) can be estimated by the amount of heat that escapes from \(\Sigma_{i}\) into \(\Sigma_{j}\) in \(\delta t\) time, i.e., \[\text{Area}(\Gamma_{ij})\approx\frac{1}{\delta t}\int_{D}\mathbb{I}_{\Sigma_{i}}K_{\delta t}*\mathbb{I}_{\Sigma_{j}}\;dx. \tag{2.5}\] While a common choice of \(K_{\delta t}\) is the Gaussian kernel \[G_{\delta t}(\mathbf{x})=\frac{1}{(4\pi\delta t)^{\frac{d}{2}}}\exp\left(-\frac{|\mathbf{x}|^{2}}{4\delta t}\right), \tag{2.6}\] alternate kernels have been proposed more recently. A point-wise thresholding rule is designed to move grain boundaries in their normal direction by distances of \(\gamma_{ij}\mu_{ij}\kappa_{ij}\delta t\) at every time step, such that the energy \(E_{\delta t}\) monotonically decreases. The characteristic width, \(\delta t\), of the kernel corresponds to the time step size \(t_{S}\). The parameter \(\delta t\) also sets an upper bound on the grid size, \(\delta x<\gamma_{ij}\mu_{ij}\kappa_{\min}\delta t\), required by the TD algorithm, where \(\kappa_{\min}\) is the minimum curvature (such as the curvature of the largest grain in \(D\)). If the grid size condition is not satisfied, grain boundaries stagnate. The variational nature of the TD algorithms ensures that the correct equilibrium dihedral angle condition at triple junctions (known as the Herring angle condition [50]) is satisfied [31].

### Algorithm

The TD scheme employed in this paper is a recent version proposed by Salvador and Esedoglu [39]. Compared to the original version [31], the current TD scheme broadens the choice of anisotropic mobilities \(\mu_{ij}\) with minimal algorithmic complication. 
This is achieved by constructing the kernel \(K_{\delta t}\) using two Gaussian kernels with distinct non-negative width parameters \(\alpha\) and \(\beta\) as \[K_{\delta t}=a_{ij}G_{\sqrt{\alpha\delta t}}+b_{ij}G_{\sqrt{\beta\delta t}}, \tag{2.7}\] where \[a_{ij}=\frac{\sqrt{\pi\alpha}}{\alpha-\beta}(\gamma_{ij}-\beta\mu_{ij}^{-1}), \tag{2.8}\] and \[b_{ij}=\frac{\sqrt{\pi\beta}}{\alpha-\beta}(-\gamma_{ij}+\alpha\mu_{ij}^{-1}). \tag{2.9}\] The evolution of grain domains \(\Sigma_{1}^{n},\ldots,\Sigma_{N}^{n}\) at the \(n\)-th step involves three steps. First, convolutions \(\phi_{1,i}^{n}:=G_{\sqrt{\alpha\delta t}}*\mathbb{I}_{\Sigma_{i}^{n}}\) and \(\phi_{2,i}^{n}:=G_{\sqrt{\beta\delta t}}*\mathbb{I}_{\Sigma_{i}^{n}}\) are computed. Second, comparison functions \(\psi_{i}^{n}:=\sum_{j\neq i}\left(a_{ij}\phi_{1,j}^{n}+b_{ij}\phi_{2,j}^{n}\right)\) are assembled. In the final step, thresholding is carried out using the criterion \[\Sigma_{i}^{n+1}=\{\mathbf{x}:\psi_{i}^{n}(\mathbf{x})<\min_{j\neq i}\,\psi_{j}^{n}(\mathbf{x})\}. \tag{2.10}\] The aforementioned steps are summarized in Algorithm 1. In real materials, the dependence of GB energies and mobilities on the misorientation is highly complex with multiple local maxima/minima. However, Algorithm 1 and other TD versions are restricted to the following class of GB energies \[\mathcal{T}_{N}=\{\gamma\in\mathcal{S}_{N}:\gamma_{ik}\leq\gamma_{ij}+\gamma_{jk}\quad\text{for any }i,j,k\}, \tag{2.11}\] which includes energies that satisfy the triangle inequality. If \(\gamma\notin\mathcal{T}_{N}\), a TD scheme may lead to grain boundary wetting by nucleating a new grain along one of the existing boundaries [31]. Such a nucleation is entirely a numerical artifact and thus unphysical. Wetting can be circumvented by restricting the thresholding condition in (2.10) to \(j\) in the neighborhood of \(i\). In addition, for the TD algorithm to be numerically stable -- i.e., to dissipate \(E\) at every iteration regardless of the choice of \(\delta t\) -- the surface tension matrix has to be _conditionally negative semi-definite_, which implies \(\gamma\in\mathcal{S}_{N}\) has to satisfy4 \[\sum_{i,j}^{N}\gamma_{ij}\xi_{i}\xi_{j}\leq 0\;\text{ whenever }\sum_{i}^{N}\xi_{i}=0. \tag{2.12}\]

Footnote 4: Condition (2.12) implies \(\gamma\) is a matrix that is negative semi-definite as a quadratic form on \((1,\ldots,1)^{\perp}\).

If condition (2.12) fails, the algorithm could lead to erroneous motion of the grain boundary network; in particular, an iteration of the TD algorithm may locally increase the energy of the system even though the exact dynamics decreases the overall free energy. While Esedoglu and Otto [31] proved that the stability condition (2.12) is satisfied for energies sampled from the RSE form regardless of the size \(N\) of the system, the stability condition may not hold for general GB energy functions. In addition, Algorithm 1 also requires the matrix of reciprocal mobilities \(1/\mu\), with entries \(\mu_{ij}^{-1}\), to be conditionally negative semi-definite. If \(\gamma\) and \(1/\mu\) are conditionally negative semi-definite, a judicious choice of parameters \(\alpha\) and \(\beta\) will ensure the unconditional stability of the algorithm. In Ref. [39], it was shown that \[\alpha\geq\frac{\max_{i=1,\ldots,N-1}s_{i}}{\min_{i=1,\ldots,N-1}m_{i}}\quad\text{and}\quad\beta\leq\frac{\min_{i=1,\ldots,N-1}s_{i}}{\max_{i=1,\ldots,N-1}m_{i}} \tag{2.13}\] guarantees unconditional stability. In (2.13), \(s_{i}\) and \(m_{i}\) are the nonzero eigenvalues of the matrices \(J\gamma J\) and \(J\frac{1}{\mu}J\), respectively, where \(J=I-\frac{1}{N}\mathbf{e}\otimes\mathbf{e}\) and \(\mathbf{e}=(1,\ldots,1)\). Note that \(\alpha\) and \(\beta\) satisfying (2.13) automatically guarantee the positivity of the kernel (\(K>0\)), which is an essential condition for TD algorithms to attain the viscosity solution of the corresponding interfacial motion [51].
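For concreteness, here is a minimal Python/NumPy sketch of a single step of this scheme on a periodic grid. It assumes that \(G_{\sqrt{\alpha\delta t}}\) denotes the heat kernel at time \(\alpha\delta t\) (Fourier symbol \(e^{-\alpha\delta t|k|^{2}}\)), that the diagonals of \(\gamma\) and of the reciprocal mobility matrix are zero (so the \(j\neq i\) sums come for free), and that \(\alpha\) and \(\beta\) have already been chosen according to (2.13); the function and variable names are our own:

```python
import numpy as np

def td_step(labels, gamma, inv_mu, alpha, beta, dt):
    """One threshold-dynamics step, Eqs. (2.7)-(2.10).

    labels : (H, W) int array; labels[x] is the index of the grain containing x
    gamma  : (Ng, Ng) surface tension matrix with zero diagonal
    inv_mu : (Ng, Ng) reciprocal mobility matrix with zero diagonal
    """
    H, W = labels.shape
    Ng = gamma.shape[0]
    kx = 2 * np.pi * np.fft.fftfreq(H)[:, None]
    ky = 2 * np.pi * np.fft.fftfreq(W)[None, :]
    k2 = kx**2 + ky**2
    g1_hat = np.exp(-alpha * dt * k2)   # symbol of G_sqrt(alpha*dt)
    g2_hat = np.exp(-beta * dt * k2)    # symbol of G_sqrt(beta*dt)

    # kernel weights, Eqs. (2.8)-(2.9); zero diagonals make the j != i sum automatic
    a = np.sqrt(np.pi * alpha) / (alpha - beta) * (gamma - beta * inv_mu)
    b = np.sqrt(np.pi * beta) / (alpha - beta) * (-gamma + alpha * inv_mu)

    phi1 = np.empty((Ng, H, W))
    phi2 = np.empty((Ng, H, W))
    for j in range(Ng):
        chi_hat = np.fft.fft2(labels == j)
        phi1[j] = np.fft.ifft2(g1_hat * chi_hat).real
        phi2[j] = np.fft.ifft2(g2_hat * chi_hat).real

    # comparison functions psi_i, then the thresholding rule (2.10)
    psi = np.einsum("ij,jhw->ihw", a, phi1) + np.einsum("ij,jhw->ihw", b, phi2)
    return np.argmin(psi, axis=0)
```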
## 3 Simulations

We will now present a statistical study of grain microstructure evolution under anisotropic energies and mobilities in large two-dimensional polycrystals using Algorithm 1, listed below. In particular, we focus on the following two phenomena:

1. Onset of AGG under anisotropic grain boundary energy and mobility
2. Development of texture during grain growth

**Algorithm 1** Threshold-dynamics of Salvador and Esedoglu [39]
```
Require: The surface tension matrix \(\gamma\), the mobility matrix \(\mu\), the initial grain structure \(\Sigma_{1}^{0},...,\Sigma_{N}^{0}\) described on a discrete grid of spacing \(\delta x\), and the final time \(T\)
Ensure: The grain shapes \(\Sigma_{1}^{n+1},...,\Sigma_{N}^{n+1}\) at the \((n+1)\)-th step, computed from \(\Sigma_{1}^{n},...,\Sigma_{N}^{n}\)
1: Choose the kernel width \(\delta t\) (equivalent to the time step size \(t_{s}\) of the algorithm) ensuring \(\delta x<\gamma_{ij}\mu_{ij}\kappa_{\min}\delta t\)
2: Choose \(\alpha\) and \(\beta\) according to (2.13)
3: Construct the two Gaussians \(G_{\sqrt{\alpha\delta t}}\) and \(G_{\sqrt{\beta\delta t}}\)
4: \(t=0\)
5: while \(t<T\) do
6:   Compute the convolutions \(\phi_{1,i}^{n}=G_{\sqrt{\alpha\delta t}}*\mathbb{I}_{\Sigma_{i}^{n}}\) and \(\phi_{2,i}^{n}=G_{\sqrt{\beta\delta t}}*\mathbb{I}_{\Sigma_{i}^{n}}\)
7:   Form the comparison functions \(\psi_{i}^{n}=\sum_{j\neq i}(a_{ij}\phi_{1,j}^{n}+b_{ij}\phi_{2,j}^{n})\), where \(a_{ij}\) and \(b_{ij}\) are given by (2.8) and (2.9)
8:   Update the grain shapes \(\Sigma_{i}^{n+1}=\{\mathbf{x}:\psi_{i}^{n}(\mathbf{x})<\min_{j\neq i}\,\psi_{j}^{n}(\mathbf{x})\}\)
9:   \(t=t+t_{s}\)
10: end while
```

### Abnormal grain growth

AGG refers to the enlargement of a minority of grains in a polycrystal at the expense of the surrounding grains [52], and is widely observed in many systems including thin films. AGG contrasts with normal grain growth, characterized by a relatively narrow grain size range and a self-similar grain size distribution in time [53]. While anisotropy in GB properties has been proposed as the underlying cause of AGG [54], quantitatively validating the hypothesis is a challenging task due to the complexity of GB anisotropy, expressed as a function of the five dofs of the grain boundary character. Moreover, there is no agreement on which particular microstructural characteristics signify the existence of AGG [52]. This motivates a statistical study of AGG in a simplified system amenable to further analysis.

#### 3.1.1 AGG in an anisotropic tricrystal

To investigate and analyze AGG, we considered a polycrystal with grains belonging to three distinct groups (A, B, and C), described by a few parameters. Due to its simplicity, this system serves as a minimal example that leads to AGG. We assume that the misorientations between grains within each group are small and that the GBs formed by them have identical energies. 
On the other hand, it is assumed that grains belonging to different groups are highly misoriented and that the GB energies between them are larger than the energies of the GBs formed between grains from the same group. If \(\gamma_{ij}\) (\(i,j=A\), \(B\), or \(C\)) denotes the GB energy between the \(i\)-th and the \(j\)-th groups of grains, then we have \(\gamma_{AA}=\gamma_{BB}=\gamma_{CC}:=\gamma_{1}\), and \(\gamma_{AB}\), \(\gamma_{BC}\), and \(\gamma_{CA}\) are greater than \(\gamma_{1}\). In addition, we also assume that \(\gamma_{CA}=\gamma_{BC}=:\gamma_{2}\) is smaller than \(\gamma_{3}:=\gamma_{AB}\). This implies the orientation angles of grains in group \(C\) are closer to those in \(A\) and \(B\) than \(A\)-grains are to \(B\)-grains. We also assume the system has only two mobilities -- one for small misorientation angle GBs, \(m_{L}\) (for the same group), and the other for larger misorientation angles, \(m_{H}\) (for different groups). The aforementioned GB anisotropy is characterized by three non-dimensional parameters -- \(\lambda_{1}=\gamma_{2}/\gamma_{1}\) and \(\lambda_{2}=\gamma_{3}/\gamma_{1}\) describe energy anisotropy, and \(\omega=m_{H}/m_{L}\) describes the anisotropy in mobility. Note that if all the parameters are equal to one, the system is isotropic. The mnemonic in Fig. 1 depicts the GB anisotropy of our system.

Figure 1: A mnemonic depicting the grain boundary anisotropy of a polycrystal used to study AGG in Section 3.1.

We investigated the following three cases:

1. Case 1a: isotropic grain growth, i.e., \(\lambda_{1}=\lambda_{2}=1.0\) and \(\omega=1.0\)
2. Case 1b: energy ratios \(\lambda_{1}=1.5\) and \(\lambda_{2}=2.0\), and mobility ratio \(\omega=1.0\)
3. Case 1c: energy ratios \(\lambda_{1}=1.5\) and \(\lambda_{2}=2.0\), and mobility ratio \(\omega=1.2\)

In Cases 1b and 1c, the strengths of the GB anisotropy parameters remain _weak_ so that \(\gamma\) satisfies the stability condition (2.12) of Algorithm 1.5 However, in these cases, C-type grains are still energetically favored and we anticipate unusual growth of C-type grains.

Footnote 5: In Ref. [31], it is shown that a necessary condition for satisfying (2.12) is that the matrix \(\sqrt{\gamma}\) belongs to \(\mathcal{T}_{N}\).

In all of the cases, we begin with the initial grain microstructure configuration shown in Fig. 2a. The initial microstructure consists of a total of \(8,000\) grains generated from a Voronoi tessellation of uniformly distributed random points in the region \(\Omega=[0,1]^{2}\), discretized by a \(3000\times 3000\) regular grid. Each grain is randomly assigned to one of the three groups A to C, such that the number fractions and the area fractions of the three groups are equal to \(1/3\). In Fig. 2a, the red (group A), green (group B), and blue (group C) colors distinguish the three groups.
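The pairwise matrices for this experiment are simple to assemble. The sketch below builds grain-level \(\gamma\) and \(1/\mu\) matrices from the group labels and numerically spot-checks the stability condition (2.12) on a random subsample; the grain count and subsample size here are illustrative, not the values of the full simulation, and we set \(\gamma_{1}=m_{L}=1\) as a convenient normalization:

```python
import numpy as np

rng = np.random.default_rng(0)
Ng = 1000                             # illustrative; the paper uses 8,000 grains
lam1, lam2, omega = 1.5, 2.0, 1.2     # Case 1c anisotropy parameters
gamma1, mL = 1.0, 1.0                 # reference energy and low-angle mobility

group = rng.integers(0, 3, size=Ng)   # grain i belongs to group A=0, B=1, or C=2

# group-level tables: gamma1 within a group, gamma2 for A-C/B-C, gamma3 for A-B
g_table = gamma1 * np.array([[1.0,  lam2, lam1],
                             [lam2, 1.0,  lam1],
                             [lam1, lam1, 1.0]])
m_table = mL * np.array([[1.0,   omega, omega],
                         [omega, 1.0,   omega],
                         [omega, omega, 1.0]])

gamma = g_table[group[:, None], group[None, :]]
inv_mu = 1.0 / m_table[group[:, None], group[None, :]]
np.fill_diagonal(gamma, 0.0)          # gamma_ii = 0, consistent with S_N
np.fill_diagonal(inv_mu, 0.0)

# spot-check conditional negative semi-definiteness (2.12) on a subsample
idx = rng.choice(Ng, size=200, replace=False)
G = gamma[np.ix_(idx, idx)]
n = len(idx)
J = np.eye(n) - np.ones((n, n)) / n   # projector onto (1,...,1)-perp
eigs = np.linalg.eigvalsh(J @ G @ J)
print("largest eigenvalue of J*gamma*J:", eigs.max())  # <= 0 up to round-off
```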
#### 3.1.2 Results

Simulations are carried out with a time step size \(t_{S}=5.57\times 10^{-7}\mathsf{t}\), where \(\mathsf{t}\) is a unit conversion factor with dimension \([\text{time}][\text{length}]^{-2}\). Grain microstructures at time \(t_{1}=2.78\times 10^{-4}\mathsf{t}=500t_{S}\) for the three cases are shown in Fig. 2b-Fig. 2d. At the end of the simulations, the microstructures of Cases 1a, 1b, and 1c contain 2563, 1984, and 1797 grains, respectively. This implies that GB anisotropy increases the rate of grain coarsening. Comparing Fig. 2b to Fig. 2c and Fig. 2d, we observe that anisotropy results in clustering of grains belonging to the same groups and an increase in the area fraction of C-type grains.

Next, we examine and compare the statistical features of the microstructure evolution in the three cases. Fig. 3a shows plots of grain size distributions at \(t=t_{1}\). The \(x\)-axes represent grain areas normalized by the average grain area at \(t=t_{1}\), and the \(y\)-axes represent the probability density, which implies the areas under the graphs are equal to one. To compute the probability density, a bin size of \(0.1\) was used to partition the \(x\)-axes. From Fig. 3a, we conclude that the three grain size distributions are similar for most of the normalized grain areas. However, comparing the distributions is a delicate exercise as abnormally grown grains are spatially rare [52], and therefore, AGG manifests in the tail of a distribution. A closeup of the tails (yellow shaded region in Fig. 3a) of the distributions is shown in Fig. 3b, which clearly shows that abnormally large grains (\(A/A_{\text{avg}}\geq 3.5\)) are observed only in Cases 1b and 1c.

To further investigate the role of each grain type, the grain size distribution of each group is separately plotted in Fig. 4. Similar to Fig. 3, the \(x\)-axes represent normalized grain sizes. However, the \(y\)-axes represent densities (number fractions) calculated for the individual groups. While the distributions of grain types are identical in the isotropic case (Fig. 4a), the distribution of C-type grains has an extended tail and a smaller peak value in the presence of anisotropy in Cases 1b and 1c (Fig. 4b and Fig. 4c). In other words, anisotropy results in the abnormal growth of C-type grains and a lower number (relative to the isotropic case) of smaller grains. The latter is not only a consequence of AGG but also of grain shrinkage, which can be reasoned as follows. If a C-type grain is surrounded by grains of the same group, it is energetically favorable for it to shrink as \(\gamma_{1}<\gamma_{2}\). On the other hand, if a C-type grain is surrounded by grains of different types (A or B), growth is preferred since \(\gamma_{2}<\gamma_{3}\).

Table 2 lists the number fractions and areas of grains of different types measured in the three case studies. The number fraction and the area of C-type grains are the largest in Case 1c. This quantitative comparison shows that mobility anisotropy (Case 1c) enhances the effects of energy anisotropy (Case 1b).

Figure 2: Grain microstructure at time \(t_{1}=500t_{s}\) for the cases summarized in Section 3.1. The initial microstructure (a) contains 8,000 grains. The numbers of remaining grains in (b), (c), and (d) are 2563, 1984, and 1797, respectively, implying GB anisotropy increases the rate of grain coarsening.

Figure 3: Grain size distributions of the cases considered in Section 3.1. A closeup of the tail region, depicted in yellow shade in (a), shows that abnormally large grains (\(A/A_{\text{avg}}\geq 3.5\)) are only observed in Case 1b and Case 1c.

Figure 4: Grain size distributions of each group of grains for the three cases introduced in Section 3.1. The results of the anisotropic cases (b, c) show that C-type grains are statistically preferred in Case 1b and Case 1c, resulting in a grain size distribution distinct from the overall size distribution.

Fig. 5 shows plots of _area fractions_ versus the normalized grain sizes for the three cases. Compared to the probability density plots (Fig. 3 and Fig. 4), AGG is more conspicuous in the area fraction plots as the small number of large-sized grains has a significant effect in the latter plots. 
In Fig. 5b, we observe that GB energy anisotropy triggers a bimodal distribution, as predicted by the mean-field theory of Abbruzzese and Lucke [55]. In the presence of anisotropic grain boundary mobilities (Case 1c), Fig. 5c shows that the second mode moves further right. In summary, GB anisotropy introduces a _statistical preference_ for grains with particular orientations, resulting in inhomogeneity in grain size distributions and spatial arrangements. As the strength of anisotropy increases, it will eventually lead to abnormal grain growth, as seen in experiments [52].

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & & A & B & C \\ \hline \multirow{2}{*}{Case 1a} & Number Fraction & 0.3246 & 0.3433 & 0.3320 \\ \cline{2-5} & Area Sum & 0.3236 & 0.3383 & 0.3381 \\ \hline \multirow{2}{*}{Case 1b} & Number Fraction & 0.2070 & 0.2504 & 0.5426 \\ \cline{2-5} & Area Sum & 0.1730 & 0.2143 & 0.6127 \\ \hline \multirow{2}{*}{Case 1c} & Number Fraction & 0.2132 & 0.2324 & 0.5544 \\ \cline{2-5} & Area Sum & 0.1739 & 0.1957 & 0.6304 \\ \hline \end{tabular} \end{table} Table 2: The number and area fractions of each group at time \(t_{1}=500t_{s}\) for Cases 1a to 1c.

Figure 5: Grain size distribution in terms of area fractions. The same bins are used as in Fig. 3. Grain boundary energy anisotropy (b) triggers a bimodal distribution, while mobility anisotropy (c) moves the location of the second mode further right.

Figure 6: Molecular dynamics predictions of grain boundary energy density as a function of misorientation angle for a [110] symmetric-tilt grain boundary in face-centered-cubic materials [56; 57]. Misorientations corresponding to low energy \(\Sigma\) boundaries are marked on the upper axis.

Figure 7: The [110]-STGB grain boundary energy up to misorientation angle \(70.6^{\circ}\) from the reference state shown in (a). The sub-domains for groups A to E, in which grain orientations are sampled, are colorized in (b).

### Texture formation

In this section, we study the role of grain boundary anisotropy and bicrystallography in the process of texture formation during grain growth. For this study, we consider atomistically informed grain boundary energies that respect the boundary's bicrystallography. For example, Fig. 6 shows a plot of energy versus misorientation angle for a [110] symmetric tilt grain boundary (STGB).6 The plot shows characteristic local minima, marked as \(\Sigma 3\) and \(\Sigma 11\), due to enhanced lattice matching of the grains [22; 58] at certain misorientation angles. In the following section, we describe the polycrystal and the bicrystallography-respecting grain boundary energy function used to explore texture formation.

Footnote 6: The domain of the plot in Fig. 6 is restricted to misorientation angles up to \(180^{\circ}\) since \(\gamma\) is symmetric about the \(180^{\circ}\) misorientation angle due to the lattice symmetry.

#### 3.2.1 The system

We consider a two-dimensional fcc polycrystal with the \(\langle 110\rangle\) direction of all grains aligned along the z-axis. All grain boundaries are assumed to be \([110]\) symmetric-tilt type. The orientations of the grains are measured with respect to a reference grain whose \([100]\) direction is aligned along the \(x\)-axis, as depicted in Fig. 7a. Under these constraints, we only need a single scalar \(\theta_{i}\) to describe the orientations of grain samples. 
To strategically investigate the roles of low and high-angle grain boundaries, we restricted the grain orientations, \(\theta_{i}\), to groups of \(3^{\circ}\)-length intervals of the orientation angle as follows: \[\theta_{i}\in\begin{cases}\text{Group A},\text{ if }\theta_{i}\in[0,3]^{\circ},\\ \text{Group B},\text{ if }\theta_{i}\in[13,16]^{\circ},\\ \text{Group C},\text{ if }\theta_{i}\in[26,29]^{\circ},\\ \text{Group D},\text{ if }\theta_{i}\in[39,42]^{\circ},\\ \text{Group E},\text{ if }\theta_{i}\in[63,66]^{\circ}.\end{cases} \tag{3.1}\] The above intervals are shown in color in Fig. 7b. Boundaries between grains from the same group are identified as LAGBs with misorientation \(<3^{\circ}\). HAGBs of the system are formed by grains from different groups, and have a misorientation angle \(>10^{\circ}\). Groups A and E in (3.1) are constructed such that the misorientation of a grain boundary between an A-type grain and an E-type grain is close to that of a twin boundary (TB), which has a misorientation angle of \(70.6^{\circ}\). TBs are often desired in grain boundary engineering as they enhance the strength and ductility of a polycrystal [59; 60]. The energies of LAGBs are typically lower than those of HAGBs and increase steeply with the misorientation angle (Fig. 7b). The energy of a typical HAGB is less sensitive to changes in the misorientation angle. However, TBs are HAGBs that are exceptions to the above two properties, as can be inferred from Fig. 7b.

In this study, we explored microstructures _with_ and _without_ subgrains. In the former, grains are partitioned into subgrains, which are connected along LAGBs. Subgrains are commonly observed in polycrystalline materials subjected to plastic deformation followed by _recovery_ at a temperature below the recrystallization temperature. The mechanism is as follows. Plastic deformation leads to an increase in dislocations, which subsequently rearrange during the recovery process to form LAGBs and subgrains. Further deformation promotes the rotation of subgrains, resulting in the transformation of LAGBs into HAGBs [61; 62]. Recent progress in _severe plastic deformation_, employed during manufacturing processes, can be used to facilitate this mechanism [43; 44].

The following initial microstructures are considered in our simulations:

1. Case 2a: a tricrystal with subgrains, wherein the three primary grains belong to the A, B, and D groups. TBs are excluded.
2. Case 2b: a polycrystal with subgrains. TBs are included, and LAGBs are dominant.
3. Case 2c: a polycrystal without subgrains. TBs are included, and HAGBs are dominant.

In all cases, we examine 5,000 distinct subgrains (or grains, for Case 2c) arranged in a periodic domain discretized by a regular grid of size \(3000\times 3000\). The mobility ratio of high- to low-angle grain boundaries is again set to \(\omega=1.2\). The initial tricrystal microstructure of Case 2a is shown in Fig. 8a, wherein grains and subgrains are colored based on the color keys (Fig. 7b) of the groups they belong to. The tricrystal consists of A (red), B (green), and D (yellow) type grains. The subgrains have the same color as the grain they belong to and are shaded depending on their orientation relative to their parent grain. By construction, boundaries between different colored grains are HAGBs. The microstructure was generated using a Voronoi tessellation of random seeds. Orientation groups are determined by the locations of the Voronoi seeds, while specific orientation values are randomly chosen within the interval of each group.
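A minimal Python sketch of this construction is given below: orientations are sampled from the intervals in (3.1), and a boundary is classified as LAGB, TB, or HAGB* (high-angle boundaries other than TBs, see Table 1) from the group labels of the two adjoining grains. The function names are our own, and the group-based classification follows the description above:

```python
from collections import Counter

import numpy as np

rng = np.random.default_rng(1)

# orientation intervals (degrees) for groups A-E, Eq. (3.1)
intervals = {"A": (0, 3), "B": (13, 16), "C": (26, 29),
             "D": (39, 42), "E": (63, 66)}

def sample_orientation(group):
    """Draw a grain orientation uniformly from the interval of its group."""
    lo, hi = intervals[group]
    return rng.uniform(lo, hi)

def classify_boundary(group_i, group_j):
    """Classify a boundary from the group labels of the adjoining grains:
    same group -> LAGB (misorientation < 3 deg); an A-E pair -> TB
    (misorientation close to the 70.6-deg twin boundary); otherwise HAGB*."""
    if group_i == group_j:
        return "LAGB"
    if {group_i, group_j} == {"A", "E"}:
        return "TB"
    return "HAGB*"

# illustrative use: boundary-type counts for random grain pairs
pairs = rng.choice(list(intervals), size=(10000, 2))
print(Counter(classify_boundary(gi, gj) for gi, gj in pairs))
```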
Cases 2b and 2c are designed to evaluate the role of TBs. The initial microstructure for Case 2b is shown in Fig. 9a, wherein the TBs are the boundaries between type A (red) and type E (blue) grains. It was generated using a two-level Voronoi tessellation [63]. The coarse-level Voronoi seeds are used to determine group types, while the \(30\times\)-refined seeds define specific orientation values. This procedure yields a LAGB-dominated polycrystal wherein each grain is composed of approximately 160 subgrains. On the other hand, the microstructure of Case 2c, generated from a single-level Voronoi tessellation, is dominated by HAGBs. The initial microstructures of Cases 2b and 2c, however different, have similar uniform grain orientation distributions (Fig. 10a and Fig. 12a), and therefore, are non-textured. Table 3 documents the microstructure and simulation parameters of the three case studies of this section.

\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline & Number of subgrains & Number of grains & Grid size & Time step \(t_{S}\) \\ \hline Case 2a & 5000 & 3 & \(3000\times 3000\) & \(4.44\times 10^{-6}\) \\ \hline Case 2b & 5000 & 30 & \(3000\times 3000\) & \(2.22\times 10^{-6}\) \\ \hline Case 2c & - & 5000 & \(3000\times 3000\) & \(2.22\times 10^{-6}\) \\ \hline \end{tabular} \end{table} Table 3: Microstructure and simulation parameters of the numerical experiments considered in Section 3.2

An implementation of Algorithm 1 for a large-sized system is a delicate exercise. Typically, a moderate quotient \(\alpha/\beta\) of the widths of the two Gaussians (2.7) is chosen so that the two Gaussians, \(G_{\sqrt{\alpha\delta t}}\) and \(G_{\sqrt{\beta\delta t}}\), are well-resolved on the computational grid [39]. In a large-scale system with a wide range of grain boundary energies, however, a naive choice of the width parameters via (2.13) easily becomes ill-conditioned, because the minimum grain boundary energy in the system can be arbitrarily small. To address this issue, we follow the suggestion in Ref. [39]: when constructing the two Gaussian kernels, the minimum misorientation angle between any two grains is bounded from below by \(0.5^{\circ}\). This is equivalent to setting the minimum energies that can be resolved by the algorithm.

#### 3.2.2 Results

_Case 2a_: Fig. 8a through Fig. 8c show the time evolution of the tricrystal with subgrains. All subgrains in the initial microstructure have comparable sizes. From Fig. 8b, we can infer that the grain coarsening rate is higher near a HAGB because of the higher energy and mobility of a HAGB. Fig. 8b and Fig. 8c show that when a grain (circled) near a HAGB reaches a critical size, it undergoes AGG. It is important to note that the critical size plays an important role in selecting grains, as not all grains near HAGBs undergo AGG. A plausible explanation for this observation is that beyond a critical size, it is energetically favorable for the subgrain to grow in size and absorb the subgrain boundaries in the adjoining grain. The critical size may depend on the relative sizes of the grains compared to their neighbors. As a result, certain subgrains located close to grain boundaries are more likely to outgrow those located in the interior, as shown in Fig. 8c. 
It is interesting to note that this mechanism resembles _discontinuous recrystallization_, wherein new defect-free grains nucleate near HAGBs and grow to replace the microstructure entirely.

Figure 8: Microstructure evolution of a tricrystal consisting of subgrains (same colored grains). The white circled subgrain is an example of a grain growing abnormally near grain boundaries.

Figure 9: Microstructure evolution of a polycrystal consisting of subgrains (same colored grains). The boundaries between groups A (red) and E (blue) are twin grain boundaries.

Fig. 9a through Fig. 9c show the time evolution of a grain microstructure with subgrains and a high fraction of LAGBs. Similar to Case 2a, certain subgrains located close to a GB begin to grow at the expense of the subgrains of the adjoining grain. To examine how the statistics of the microstructure evolve, we classified the grain boundaries into three types: LAGB, HAGB*, and TB, where HAGB* represents high-angle grain boundaries that are not TBs. Fig. 10a to Fig. 10c show plots of the area fraction of each group (A to E), and Fig. 10d to Fig. 10f show the fractions of GB types, at three different times. The plot in Fig. 10d shows that the majority of GBs in the initial microstructure are LAGBs, and only a negligible fraction of them are TBs. Since TBs have low energy, we expect the area fractions of grain types A and E -- which together form a TB -- to grow. However, we observe that there is only a marginal increase in their area fractions in the final microstructure (Fig. 10c). The increase in the fraction of TBs is also negligible. Based on the above observations, we conclude that in a system with a high fraction of LAGBs, there is no strong preference for TBs and texture does not form.

Next, we examine the microstructure evolution in Case 2c, wherein HAGBs are dominant and the system has no subgrains. Expectedly, the initial microstructure shown in Fig. 11a has no subgrains. The initial fraction of TBs is also higher than in Case 2b (Fig. 12d). However, the initial texture, seen in Fig. 12a, is no stronger than in the previous case. In this case, as grain growth continues, grains from the same groups coalesce (Fig. 11b), because the system penalizes HAGB*, which have higher energies. During this process, the fractions of both LAGBs and TBs steeply increase at the expense of HAGB*. Consequently, the final microstructure (Fig. 11c) shows a considerable growth of type A and E grains, indicating texture development in the microstructure. In Fig. 12f, we also observe that the growth of TBs is substantially facilitated by using an initial microstructure with a high fraction of HAGBs.

Figure 10: Texture (b-c) and grain boundary type (e-f) changes during grain growth of a LAGB-dominant polycrystal. HAGB* refers to high-angle grain boundaries excluding TBs.

Figure 11: Microstructure evolution of a HAGB-dominant polycrystal (a). The boundaries between groups A (red) and E (blue) are twin grain boundaries. Grains from the same group coalesce first (b) and type A and E grains become dominant (c).

Figure 12: Texture (b-c) and grain boundary type (e-f) changes during grain growth of a HAGB-dominant polycrystal. HAGB* refers to high-angle grain boundaries excluding TBs.

Lastly, we discuss the results of the present simulations in comparison to recent experimental observations of TB development in nanocrystalline materials [64]. Julie et al. 
[64] investigated the effect of grain size on texture formation during annealing using electrodeposited nickel samples with different average grain sizes, ranging from 20 nm to 200 nm. It is observed that the fraction of TBs in the final microstructure increases as the average grain size in the initial microstructure decreases. This trend is attributed to the relationship between the probability of accidental twin formation and the velocity of the migrating grain boundary, which in turn is inversely proportional to grain size. In our simulations, the initial microstructure of Case 2c can also be viewed as having a substantially smaller average grain size than that of Case 2b (if one ignores the sub-grain structure of Case 2b). Since smaller grains coarsen faster, the grains in Case 2c have a higher probability of forming new grain boundaries during the same period of time, which also increases the probability of forming twin grain boundaries. In this regard, our simulation results for Case 2c are consistent with the experimental observations. Yet, it is also important to note that the size difference is not the only factor in our simulations. The main difference between Cases 2b and 2c is the fraction of grain boundary types in the initial condition. However, since the initial fractions of GB types are not reported in Ref. [64], a conclusive comparison cannot be made at this time.

## 4 Conclusion

A fundamental open problem in materials science is to establish the relationship between process parameters and the evolution of grain microstructure. Given that the process-structure relationship is inherently statistical, it is necessary to use lightweight models that can efficiently capture the microstructure evolution of a polycrystal ensemble. Recent years have seen considerable advances in threshold-dynamics techniques, which have revolutionized the way full-field grain microstructure evolution during growth is simulated. The method serves as a highly efficient and robust algorithm for statistical studies on grain microstructure.

In this paper, we utilized the TD method to investigate the statistical behavior of grain microstructure under anisotropic GB characters, with a focus on abnormal grain growth and texture development. To ensure the numerical stability of the algorithm and a reliable dynamic evolution of the grain network, we imposed a restriction on the degree of GB anisotropy. We considered GBs with energies and mobilities that are compatible with the fundamental restrictions of the threshold-dynamics method. Our first numerical experiment involves a system with the simplest GB anisotropy, which facilitates the analysis of simulations of abnormal grain growth. We found that GB anisotropy introduces a statistical preference for certain grain orientations, leading to changes in the grain size distribution compared to an isotropic system. In our second numerical experiment, we incorporated a crystallographic grain boundary energy and examined the evolution of microstructure features for different initial configurations. We observed that the development of texture and the growth of twin grain boundaries were more pronounced when the initial microstructure had a higher fraction of high-angle grain boundaries. These findings suggest effective grain boundary engineering strategies for improving material properties. 
In our future work, we aim to enhance the TD method by integrating grain rotation [65] and grain boundary plasticity [66], allowing for the simultaneous evolution of microstructure and deformation. This is especially crucial for investigating phenomena such as dynamic recrystallization, superplasticity, and severe plastic deformation [10; 67; 68], which require a more comprehensive understanding of the underlying mechanisms. We anticipate that incorporating these features will enhance the accuracy and applicability of the model, thereby advancing our ability to predict and optimize material properties.

## Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2309.05479
Topological transitions in dissipatively coupled Su-Schrieffer-Heeger models
Non-Hermitian topological phenomena have gained much interest among physicists in recent years. In this paper, we expound on the physics of dissipatively coupled Su-Schrieffer-Heeger (SSH) lattices, specifically in systems with bosonic and electrical constituents. In the context of electrical circuits, we demonstrate that a series of resistively coupled LCR circuits mimics the topology of a dissipatively coupled SSH model. In addition, we foreground a scheme to construct dissipatively coupled SSH lattices involving a set of non-interacting bosonic oscillators weakly coupled to engineered reservoirs of modes possessing substantially small lifetimes when compared to other system timescales. Further, by activating the coherent coupling between bosonic oscillators, we elucidate the emergence of non-reciprocal dissipative coupling which can be controlled by the phase of the coherent interaction strength precipitating in phase-dependent topological transitions and skin effect. Our analyses are generic, apropos of a large class of systems involving, for instance, optical and microwave settings, while the circuit implementation represents the most straightforward of them.
Jayakrishnan M. P. Nair, Marlan O. Scully, Girish S. Agarwal
2023-09-11T14:17:50Z
http://arxiv.org/abs/2309.05479v1
# Topological transitions in dissipatively coupled Su-Schrieffer-Heeger models

###### Abstract

Non-Hermitian topological phenomena have gained much interest among physicists in recent years. In this paper, we expound on the physics of dissipatively coupled Su-Schrieffer-Heeger (SSH) lattices, specifically in systems with bosonic and electrical constituents. In the context of electrical circuits, we demonstrate that a series of resistively coupled LCR circuits mimics the topology of a dissipatively coupled SSH model. In addition, we foreground a scheme to construct dissipatively coupled SSH lattices involving a set of non-interacting bosonic oscillators weakly coupled to engineered reservoirs of modes possessing substantially small lifetimes when compared to other system timescales. Further, by activating the coherent coupling between bosonic oscillators, we elucidate the emergence of non-reciprocal dissipative coupling which can be controlled by the phase of the coherent interaction strength, precipitating in phase-dependent topological transitions and skin effect. Our analyses are generic, apropos of a large class of systems involving, for instance, optical and microwave settings, while the circuit implementation represents the most straightforward of them.

## I Introduction

One of the prime objectives of research in condensed matter physics is the characterisation of matter phases earmarked by the (spontaneous breaking of) symmetries of the system under consideration. In this context, the discovery of the quantum Hall effect marked a stark shift in the understanding of phases and introduced the concept of topological order, spawning the field of topological insulators [1; 2; 3]. This was subsequently realized on a variety of different platforms, including photonic [4] and cold atomic systems [5], and many more [6; 7; 8]. One of the key features of the topological classification of phases is the bulk boundary correspondence (BBC) and the emergence of edge and surface states that are impervious to environmental loss and disorder, with applications ranging from the realization of topological qubits [9; 10; 11; 12; 13] to lasing [14; 15; 16; 17], among others [18; 19; 20; 21; 22; 23; 24; 25]. Until recently, the lion's share of research on the physics of topological systems involved Hermitian models. However, real physical systems interact with their environment, resulting in open quantum dynamics [26] and effective non-Hermitian Hamiltonians. In the last few years, the topology of non-Hermitian lattice systems has been a subject of intense research activity [27; 28], unravelling some exciting new physics, for example, the breakdown of BBC and the skin effect in systems possessing non-reciprocal couplings [29; 30; 31; 32; 33; 34; 35; 36; 37]. A quintessential model in the study of topological physics is the Su-Schrieffer-Heeger (SSH) model [38; 39; 40; 41; 42], and several non-Hermitian extensions of the model have been considered in the literature [43; 44; 45; 46; 47]. For instance, [45; 46] considered \(PT\)-symmetric extensions of the SSH model, which can be engineered by the incorporation of gain into the system. However, the physics of lattice models with a purely dissipative form of coupling [48; 49] between the constituents is largely unplumbed. Dissipative coupling between two otherwise non-interacting systems emanates from their decay into common dissipative channels [50], and it is worth noting that dissipative couplings are more prevalent in nature compared to their coherent counterparts. 
Such couplings have been investigated both theoretically and experimentally in a multitude of settings [51; 52; 53; 54; 55], for example, involving magnonic and photonic sub-systems [56; 57; 58; 59; 60]. In this work, we focus on the physics of dissipatively coupled SSH models. In particular, we demonstrate two distinct experimentally realizable schemes involving bosonic and electrical subsystems. We show that a system of resistively coupled LCR resonators mimics the topology of dissipatively coupled SSH (DSSH) models. Subsequently, we illustrate that a lattice of otherwise non-interacting bosonic oscillators interacting with engineered reservoirs of modes having significantly large decay parameters compared to other system parameters can be described by an effective non-Hermitian Hamiltonian akin to the DSSH model. Furthermore, by triggering the coherent coupling between the oscillators, we outline the generation of non-reciprocal coupling in DSSH models featuring topological transitions that can be controlled by the phase of the coherent interaction strength. Note, _en passant_, that the generality of our results grants an immediate experimental realization of the protocols discussed in the subsequent sections, especially in the microwave and optical domains.

This paper is organized as follows. In section II, we revisit the SSH model with coherent couplings, followed by a discussion of its dissipative counterpart in section III. Subsequently, we delineate two independent schemes comprising electrical and bosonic constituents for the realization of the DSSH model in section IV. In section V, we foreground a protocol for the realization of the DSSH model with non-reciprocal couplings through the application of a coherent form of interaction between bosonic oscillators, translating into topological transitions and the skin effect. Finally, we conclude our results in section VI.

## II Key features of the SSH model

We begin by revisiting the SSH model with coherently coupled unit cells. To this end, consider a one-dimensional (1-D) lattice of two different types of sites, A and B, with staggered nearest-neighbor couplings as depicted in Fig. 1. The interaction Hamiltonian of the system subject to open boundary conditions (OBC) is given by \[H=\sum_{i=1}^{N}t_{1}\left|A_{i}\right\rangle\left\langle B_{i}\right|+\sum_{i=1}^{N-1}t_{2}\left|A_{i+1}\right\rangle\left\langle B_{i}\right|+h.c., \tag{1}\] where \(N\) denotes the number of unit cells, \(\left|A_{i}\right\rangle\), \(\left|B_{i}\right\rangle\) characterize the particle excitation at their respective locations, while \(t_{1}\) and \(t_{2}\in\mathbb{R}\) are the intra- and inter-cellular couplings, respectively. Equivalently, we can write the SSH Hamiltonian subject to periodic boundary conditions (PBC) as \[H=\sum_{i=1}^{N}t_{1}\left|A_{i}\right\rangle\left\langle B_{i}\right|+\sum_{i=1}^{N}t_{2}\left|A_{f(i)}\right\rangle\left\langle B_{i}\right|+h.c., \tag{2}\] where \(f(i)=i+1\) mod \(N\). Invoking the Bloch theorem, the Hamiltonian under PBC can be recast in the Fourier domain in terms of the Bloch Hamiltonian provided by \[H_{k}=\begin{pmatrix}0&R(k)e^{-i\phi(k)}\\ R(k)e^{i\phi(k)}&0\end{pmatrix}, \tag{3}\] where \(k=\frac{2\pi n}{N}\), \(n\in\{1,2,\ldots,N\}\), \(R(k)=\sqrt{t_{1}^{2}+t_{2}^{2}+2t_{1}t_{2}\cos{(k)}}\), the phase \(\phi(k)=\arctan\left(\frac{t_{2}\sin k}{t_{1}+t_{2}\cos k}\right)\), and we set the inter-cellular spacing \(a=1\). 
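These features are easy to verify numerically. The following Python/NumPy sketch diagonalizes the Bloch Hamiltonian of Eq. (3), extracts the winding of the off-diagonal element \(h(k)=t_{1}+t_{2}e^{-ik}\) around the origin, and checks for the two near-zero-energy edge modes of the open chain of Eq. (1); the parameter values are illustrative:

```python
import numpy as np

t1, t2, N = 0.5, 1.0, 100            # topological regime: |t1/t2| < 1

# Bloch bands E_pm(k) = +/- R(k) of Eq. (3)
ks = 2 * np.pi * np.arange(1, N + 1) / N
bands = np.array([np.linalg.eigvalsh(
    np.array([[0, t1 + t2 * np.exp(-1j * k)],
              [t1 + t2 * np.exp(1j * k), 0]])) for k in ks])
print("band gap:", bands[:, 1].min() - bands[:, 0].max())  # closes only at t1 = t2

# |winding| of h(k) = t1 + t2 exp(-ik) around the origin: 1 if |t1| < |t2|, else 0
kk = np.linspace(0.0, 2 * np.pi, 2001)
phase = np.unwrap(np.angle(t1 + t2 * np.exp(-1j * kk)))
nu = round(abs(phase[-1] - phase[0]) / (2 * np.pi))
print("winding number:", nu)

# open chain of Eq. (1): two exponentially small eigenvalues (edge modes)
H = np.zeros((2 * N, 2 * N))
for i in range(N):
    H[2 * i, 2 * i + 1] = H[2 * i + 1, 2 * i] = t1               # A_i <-> B_i
    if i < N - 1:
        H[2 * i + 1, 2 * i + 2] = H[2 * i + 2, 2 * i + 1] = t2   # B_i <-> A_{i+1}
print("two smallest |E|:", np.sort(np.abs(np.linalg.eigvalsh(H)))[:2])
```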
Note that the Hermitian matrix \(H_{k}\) is chiral symmetric, that is \(\sigma_{z}H_{k}\sigma_{z}=-H_{k}\) precipitating in symmetric eigenvalues \(E_{\pm}=\pm R(k)\) and corresponding eigenstates \[\left|E_{\pm},k\right\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}\pm 1\\ e^{i\phi(k)}\end{pmatrix}. \tag{4}\] Palpably, the gap between the energy eigenvalues vanishes at \(k=\pi\) and \(t_{1}=t_{2}\) as demonstrated in Fig. 2 (a). In contrast, the Hamiltonian under OBC described by Eq. (1) supports two zero-energy eigenvalues in the large \(N\) limit for \(\left|\frac{t_{1}}{t_{2}}\right|<1\), eliciting the well-known edge modes of the SSH model, a testament to the non-trivial topology of the system. In addition, one can define a topological invariant, _viz_, the winding number \(\nu_{\pm}\) defined in terms of the Berry connection \(A_{\pm}(k)=i\left\langle E_{\pm},k|\partial_{k}\left|E_{\pm},k\right\rangle\) as \[\nu_{\pm}=\frac{1}{\pi}\oint A_{\pm}(k)dk. \tag{5}\] For instance, \(\nu_{+}\) calculated from Eq. (4) and Eq. (5) satisfies \[\nu_{+}=\begin{pmatrix}1&\text{if}\left|\frac{t_{1}}{t_{2}}\right|<1\\ 0&\text{if}\left|\frac{t_{1}}{t_{2}}\right|>1.\end{pmatrix} \tag{6}\] A non-zero winding number \(\nu_{\pm}\) is a direct manifestation of the non-trivial topology of the system demonstrating \(2|\nu_{\pm}|\) number of edge modes and \(t_{1}=t_{2}\), the point of vanishing gap between bulk energy bands demarcates the boundary between the two phases. This is known as the bulk boundary correspondence (BBC) in Hermitian lattice systems. It is worth noting that coherent coupling between systems emanates from the spatial overlap between their respective modes. By contrast, dissipatively coupled systems with a non-Hermitian form of interaction are prevalent in nature. In essence, any two systems interacting with a common intermediary channel will spawn a dissipative form coupling. In the following, we will discuss the general properties of dissipatively coupled SSH models. ## III Dissipatively coupled SSH (DSM) model Consider a 1-D lattice of sites A and B coupled dissipatively as depicted in Fig. 1. This is analogous to the Hermitian SSH model, except for a notable difference in the off-diagonal elements, wherein, the real couplings \(t_{1}\) and \(t_{2}\) are now replaced by purely imaginary numbers leading to an effective Figure 1: Schematic of the system described by Eqs. (1-2) and Eq. (7) under open and periodic boundary conditions. Figure 2: (a) The eigenvalues of the Hermitian SSH model described by Eqs. (1-2) under periodic (green) and open (blue) boundary conditions; (b) The imaginary part of the eigenvalues of the dissipative SSH model described by Eq. (7). under periodic (green) and open (blue) boundary conditions and the number of unit cells \(N=25\) and the effective damping \(\Gamma_{r}=\gamma+\Gamma_{1}+\Gamma_{2}\) and \(\gamma=3\). non-Hermitian Hamiltonian given by \[H=-\sum_{i=1}^{N}\left(\left(\Delta_{1}-i\Gamma_{r}\right)\left|A_{ i}\right\rangle\left\langle A_{i}\right|+\left(\Delta_{2}-i\Gamma_{r}\right) \left|B_{i}\right\rangle\left\langle B_{i}\right|\right)\] \[+\sum_{i=1}^{N}i\Gamma_{1}\left|A_{i}\right\rangle\left\langle B_ {i}\right|+\sum_{i=1}^{N-1}i\Gamma_{2}\left|A_{i+1}\right\rangle\left\langle B _{i}\right|+h.c, \tag{7}\] where \(\Gamma_{r}=\gamma+\Gamma_{1}+\Gamma_{2}\) denotes the effective damping constant corresponding to \(A_{i}\) and \(B_{i}\) and we assume modes \(A_{i}\) and \(B_{i}\) decay at the same rate \(\gamma\). 
The emergence of an effective Hamiltonian describing the above equation from considerations of a real-space Hermitian system will be explicated in subsequent sections. For simplicity, we set \(\Delta_{1}=\Delta_{2}=0\). The effective Bloch Hamiltonian under PBC is provided by \[\mathcal{H}(k)=\begin{pmatrix}-i\Gamma_{r}&i\Gamma_{1}+i\Gamma_{2}e^{-ik}\\ i\Gamma_{1}+i\Gamma_{2}e^{ik}&-i\Gamma_{r}\end{pmatrix}, \tag{8}\] with eigenvalues \(E_{\pm}=-i\Gamma_{r}\pm i\sqrt{\Gamma_{1}^{2}+\Gamma_{2}^{2}+2\Gamma_{1} \Gamma_{2}\cos k}\) where \(k=\frac{2\pi n}{N}\), the parameters \(\Gamma_{j}\), \(j\in\{1,2\}\) are the absolute value of the strength of dissipative coupling and we have set the diagonal elements to be identical. Owing to the non-Hermitian nature of the system, the right eigenvectors and their dual left eigenvector basis of \(\mathcal{H}(k)\) are, in general, not identical. Let \(\left|R_{\pm},k\right\rangle\) and \(\left|L_{\pm},k\right\rangle\) be the bi-orthogonal right and left eigenvectors respectively of \(\mathcal{H}(k)\) defined by \[\mathcal{H}(k)\left|R_{\pm},k\right\rangle=E_{\pm}\left|R_{\pm},k\right\rangle\] \[\mathcal{H}^{\dagger}(k)\left|L_{\pm},k\right\rangle=E_{\pm}^{*} \left|L_{\pm},k\right\rangle, \tag{9}\] such that \(\left\langle L_{m},k|R_{n},k\right\rangle=\delta_{m,n}\) where \(m,n\in\{+,-\}\) and \(\delta_{m,n}\) is the Kronecker delta. Interestingly, \(\mathcal{H}(k)\) is anti-Hermitian, i.e., \(\mathcal{H}^{\dagger}(k)=-\mathcal{H}(k)\). As a result, the bi-orthogonal eigenvectors have the property \[\left|R_{\pm},k\right\rangle=\left|L_{\pm},k\right\rangle=\frac{1}{\sqrt{2}} \begin{pmatrix}1\\ \pm ie^{\phi(k)},\end{pmatrix} \tag{10}\] where \(\phi(k)=-\arctan\!\left(\frac{\Gamma_{1}+\Gamma_{2}\cos k}{\Gamma_{1}\sin k}\right)\). Subsequently, one can define the Berry connection involving the bi-orthogonal eigenvectors as \(A_{\pm}(k)=i\left\langle L_{\pm},k\right|\partial_{k}\left|R_{\pm},k\right\rangle\) and analogous to the Eq. (5) of coherently coupled SSH model, the system is topological with \(\nu_{+}=1\) for \(\left|\Gamma_{2}\right|>\left|\Gamma_{1}\right|\). Note, _en passant_, the constant diagonal decay is irrelevant for topological considerations. An interesting consequence of non-Hermiticity is the breakdown of BBC. In other words, the parameters corresponding to the energy gap closing in a non-Hermitian Bloch Hamiltonian do not, in general, signify the boundary between topological and trivial phases. On the contrary, owing to the anti-Hermitian nature of \(\mathcal{H}(k)\), the dissipatively coupled system described in Fig. 1 follows BBC. In Fig. 2 (b), we plot the eigenvalues of the system under PBC and OBC and when \(\left|\frac{\Gamma_{1}}{\Gamma_{2}}\right|<1\), it displays the conspicuous emergence of two distinct eigenvalues corresponding to the edge modes flanked on either side by the bulk modes. Not surprisingly, this is exactly the point of the vanishing energy gap between the Bloch modes corroborating BBC. _Role of real diagonal terms in DSSH:_ Consider now, the scenario \(\Delta_{1}=-\Delta_{2}=\Delta\). The Bloch Hamiltonian under PBC modifies to \[\mathcal{H}_{k}=\begin{pmatrix}\Delta-i\Gamma_{r}&h(k)\\ -h^{*}(k)&-\Delta-i\Gamma_{r}\end{pmatrix}, \tag{11}\] where \(h(k)=i\Gamma_{1}+i\Gamma_{2}e^{-ik}\). Observe that the effective momentum space Hamiltonian is anti-PT symmetric, in other words, \((\mathcal{P}T)\mathcal{H}_{k}(\mathcal{P}T)=-\mathcal{H}_{k}\). 
We may rewrite \(h(k)=B_{x}-iB_{y}\), where \(B_{x}=\Gamma_{2}\sin(k)\) and \(B_{y}=-(\Gamma_{1}+\Gamma_{2}\cos(k))\) are pseudo magnetic fields providing an analogy to spin half particles interacting with magnetic fields. When discussing the topological properties of the system, the constant diagonal term \(-i\Gamma_{r}\) is irrelevant. The eigenvalues of the system, ignoring \(-i\Gamma_{r}\) are given by \(E_{\pm}=\pm\sqrt{\tilde{\Delta}^{2}-(B_{x}^{2}+B_{y}^{2})}\) for \(|\tilde{\Delta}|>\left|h(k)\right|\) and \(E_{\pm}=\pm i\sqrt{B_{x}^{2}+B_{y}^{2}-\tilde{\Delta}^{2}}\) for \(|\tilde{\Delta}|<\left|h(k)\right|\). Here, we focus on the region where \(|\tilde{\Delta}|<\left|h(k)\right|\) and use the parametrization \(\frac{\tilde{\Delta}}{R}=\sinh(\theta)\), \(\frac{B_{x}}{R}=\cosh(\theta)\cos(\phi)\), \(\frac{B_{y}}{R}=\cosh(\theta)\sin(\phi)\), where \(R=\sqrt{B_{x}^{2}+B_{y}^{2}-\tilde{\Delta}^{2}}\) to obtain the right and left eigenvectors of the eigenvalue \(\lambda_{+}\) as \[\left|R_{+}\right\rangle=\frac{1}{\sqrt{2(1+i\sinh(\theta))}}\begin{pmatrix}-i \cosh(\theta)e^{-i\phi}\\ 1+i\sinh(\theta)\end{pmatrix} \tag{12}\] \[\left|L_{+}\right\rangle=\frac{1}{\sqrt{2(1+i\sinh(\theta))}}\begin{pmatrix}-i \cosh(\theta)e^{-i\phi}\\ 1-i\sinh(\theta)\end{pmatrix}. \tag{13}\] The Berry connection of the system is defined as \(A_{+}(k)=i\left\langle L_{+}|\partial_{k}|R_{+}\right\rangle\). Utilizing the parametrization discussed above, we can recast the Berry connection as \(A(k)=(\partial_{k}\phi)A(\phi)+(\partial_{t}\theta)A(\theta)\). If we consider, for example, a trajectory in the \(\theta\)-\(\phi\) space, wherein, \(\partial_{t}\theta=0\), the Berry connection is given by \(A(k)=\frac{1}{2}(1-i\sinh(\theta)\partial_{k}\phi\). Note, when \(\theta=0\), the equation for winding number \[\nu_{+}=\frac{1}{\pi}\oint A_{+}(k)dk, \tag{14}\] morphs into Eq. (5) and the system is topological with \(\nu_{+}=1\) for \(\left|\Gamma_{2}\right|>\left|\Gamma_{1}\right|\). More precisely, the anti-PT symmetric system demonstrates partial topological phases for trajectories where \(\partial_{k}\theta=0\), wherein, the real part of the winding numbers mimics the topology of a Hermitian SSH model. It is worth noting that the Hamiltonian \(\mathcal{H}_{k}\) is not chiral symmetric, in other words \(\sigma_{x}\mathcal{H}_{k}\sigma_{z}\neq-\mathcal{H}_{k}\). In the following section, we provide two experimentally realizable protocols to engineer DSSH model. ## IV Realization of the dissipative SSH model _Circuit model:_ In this section, we provide a circuit model to construct the DSSH model employing an electrical circuit involving coupled amplifying LRC resonators connected in parallel through a coupling resistor as depicted in Fig. 3. Upon solving for Kirchhoff's equations of motion for voltages, we obtain \[\tilde{V}_{n}+\omega_{1}^{2}V_{n}+(\gamma_{1}+\Gamma_{1}+\Gamma_{2}) V_{n}=\Gamma_{1}\tilde{V}_{n}+\Gamma_{2}\tilde{V}_{n-1}\] \[\tilde{V}_{n}+\omega_{2}^{2}\tilde{V}_{n}+(\gamma_{2}+\Gamma_{1}+ \Gamma_{2})\tilde{V}_{n}=\Gamma_{1}\dot{V}_{n}+\Gamma_{2}\dot{V}_{n+1}. \tag{15}\] Here, \(\omega_{1}=1/\sqrt{L_{1}C_{1}}\), \(\omega_{2}=1/\sqrt{L_{2}C_{2}}\), \(\Gamma_{1}=\frac{1}{R_{c}C_{1}}\), \(\Gamma_{2}=\frac{1}{R_{c}C_{2}}\), \(\gamma_{1}=\frac{1}{R_{c}C_{1}}\) and \(L_{1,2}\), \(R_{c}\), \(C_{1,2}\) are respectively inductance, the two coupling resistances, capacitance of the constituent elements in the unit cell and we assume \(C_{1}=C_{2}\). 
Note _en passant_, the constants \(\Gamma_{1}\) and \(\Gamma_{2}\) represent the intra and inter-cellular couplings in the lattice. In the weak coupling and small detunings regime, that is, when \([\Gamma_{1},\Gamma_{2}]<<\{\omega_{1},\omega_{2}\}\) and \(|\omega_{1}-\omega_{2}|<<\frac{\omega_{1}+\omega_{2}}{2}\), we can reduce the above equations by using the slowly varying envelope functions \(v_{n}(t),\tilde{v}_{n}(t)\), such that \[V_{n}(t)=\frac{v_{n}(t)e^{-i\omega_{0}(t)}+c.c}{2}\] \[\tilde{V}_{n}(t)=\frac{\tilde{v}_{n}(t)e^{-i\omega_{0}(t)}+c.c}{2}, \tag{16}\] where \(\omega_{0}=\frac{\omega_{1}+\omega_{2}}{2}\). Employing Eq. (16), the dynamics of the envelope functions are obtained as \[\dot{v}_{n}=-i\frac{\bar{\Delta}-i(\gamma_{1}+\Gamma_{1}+\Gamma_{2 })}{2}v_{n}+\frac{\Gamma_{1}}{2}\tilde{v}_{n}+\frac{\Gamma_{2}}{2}\tilde{v}_{n -1}\] \[\dot{v}_{n}=i\frac{\bar{\Delta}+i(\gamma_{2}+\Gamma_{1}+\Gamma_{2 })}{2}\tilde{v}_{n}+\frac{\Gamma_{1}}{2}v_{n}+\frac{\Gamma_{2}}{2}v_{n+1}, \tag{17}\] with \(\bar{\Delta}=\frac{\omega_{1}-\omega_{2}}{2}\). We assume propagating solutions for the sub-lattice elements, that is \[v_{n}=\sum_{k}v_{k,\omega}e^{i(kn-\omega t)}+c.c\] \[\bar{v}_{n}=\sum_{k}\bar{v}_{k,\omega}e^{i(kn-\omega t)}+c.c, \tag{18}\] where \(k\) is the wave vector and the lattice constant is taken to be unit. Substituting Eq. (18) into Eq. (17), we arrive at the eigenvalue equation \((H_{k}-\omega)X=0\), where \(X^{T}=[v_{k,\omega},\bar{v}_{k,\omega}]\) and \[H_{k}=\begin{pmatrix}\bar{\Delta}-i(\gamma_{1}+\Gamma_{1}+\Gamma_{2})&i(\frac {\Gamma_{1}}{2}+\frac{\Gamma_{2}}{2}e^{-ik})\\ i(\frac{\Gamma_{1}}{2}+\frac{\Gamma_{2}}{2}e^{-ik})&-\bar{\Delta}-i(\gamma_{2}+ \Gamma_{1}+\Gamma_{2})\end{pmatrix}. \tag{19}\] The Hamiltonian \(H_{k}\) is equivalent to \(\mathcal{H}_{k}\). In other words, the circuit lattice is topologically equivalent to a dissipatively coupled SSH model. _Photonic systems:_ We begin by considering the following generic Hamiltonian comprising of a chain of coherently coupled bosonic sub-lattice elements \(a_{i}\), \(b_{i}\), \(c_{i}\) and \(d_{i}\) under OBC as depicted in Fig. 4. \[\mathcal{H}/\hbar=\sum_{i=1}^{N}\omega_{b_{i}}b_{i}^{\dagger}b_{i }+\sum_{i=1}^{N}\omega_{c_{i}}c_{i}^{\dagger}c_{i}+\sum_{i=1}^{N}\omega_{a_{i} }a_{i}^{\dagger}a_{i}+\sum_{i=0}^{N}\omega_{d_{i}}d_{i}^{\dagger}d_{i}\] \[\qquad+\sum_{i=1}^{N}[g_{1}b_{i}^{\dagger}(a_{i}+d_{i-1})+h.c]+ \sum_{i=1}^{N}[g_{2}c_{i}^{\dagger}(a_{i}+d_{i})+h.c] \tag{20}\] Here, \(\omega_{x,i}\) characterizes the resonance frequencies of the modes \(x_{i}\), where \(x\in\{a,b,c,d\}\) and \(g_{1}\), \(g_{2}\in\mathbb{R}\) are the strength of dispersive coupling between the modes. We assume that all the modes \(b_{i}\) and \(c_{i}\) decay at approximately the same rate \(\gamma\) whereas the modes \(a_{i}\) and \(d_{i}\) decay at rates \(\kappa_{1}\) and \(\kappa_{2}\) respectively. Further, we set \(\Delta_{b,i}=\Delta_{1}=\omega_{b,1}-\omega_{a,1}\), \(\Delta_{c,i}=\Delta_{2}=\omega_{c,1}-\omega_{a,1}\), \(\omega_{a,i}=\omega_{b,i}=\omega_{a,1}\) and \(\kappa_{1}>\kappa_{2}\). 
In the weak coupling domain, that is, when the leakage rates \(\kappa_{i}\) strongly dominates the dynamics of the system, in other words, \(\{g_{1},g_{2},\gamma,\Delta_{b,1},\Delta_{c,1}\}<<\{\kappa_{1},\kappa_{2}\}\), we can adiabatically eliminate the \(a_{i}\) and \(d_{i}\) modes resulting in the effective momentum space Hamiltonian of the system under PBC (\(d_{0}=d_{N}\)) in the frame rotating at \((\Delta_{1}+\Delta_{2})/2\) as \[\mathcal{H}_{k}=\begin{pmatrix}\bar{\Delta}-i\Gamma_{r}&h(k)\\ -h^{*}(k)&-\bar{\Delta}-i\Gamma_{r}\end{pmatrix}, \tag{21}\] where, we set \(g_{1}=-g_{2}=g\), \(\bar{\Delta}=(\Delta_{1}-\Delta_{2})/2\), \(\Gamma_{r}=\gamma+\Gamma_{1}+\Gamma_{2}\), \(h(k)=i\Gamma_{1}+i\Gamma_{2}e^{-ik}\) and \(k\) is the lattice constant equivalent to Eq. (11) and \(\Gamma_{i}=\frac{x^{2}}{\kappa_{i}}\). Note that the pair of modes \(b_{i}\) and \(c_{i}\) form a unit cell with intra and inter cell couplings \(i\frac{x^{2}}{\kappa_{1}}\) and \(i\frac{x^{2}}{\kappa_{2}}\) respectively equivalent to the system in Fig. 1 under dissipative settings with \(\Gamma_{i}\) replaced by \(\frac{x^{2}}{\kappa_{i}}\). Figure 3: Circuit model consisting of resistively coupled LCR resonators for the realization of dissipatively coupled SSH model resulting in the dynamics described by Eq. (15). Figure 4: Schematic of the coupled oscillator system described by the Hamiltonian in Eq. (20) comprising of a bath of oscillators coupled with a system of otherwise non-interacting bosonic modes. Non-reciprocity, phase-dependent topological transitions and skin effect In the previous section, we briefly mentioned the breakdown of BBC in non-Hermitian systems. In particular, non-reciprocal (chiral) coupling between sub-lattice elements under OBC culminates in the skin effect which is the exponential localization of right and left eigenvectors at the lattice boundaries without any distinction between bulk and edge modes. In addition, the points in the parametric space corresponding to the closing of energy gap under PBC do not indicate the emergence of eigenmodes with eigenvalues \(-i\Gamma_{r}\) under OBC requiring a bi-orthogonal modification of the BBC. In the following, we discuss the construction of non-reciprocal couplings in DSSH lattice leading to phase-dependent topological transitions and skin effect. Consider now, a one-dimensional lattice of bosonic modes \(b\), \(c\) coupled with auxiliary modes \(a\) as depicted in Fig. 4 where we have now switched on the coherent coupling between \(b_{i}\) and \(c_{i}\) modes. The system is characterized by the Hamiltonian \[\tilde{\mathcal{H}}/\hbar=\mathcal{H}/\hbar+\sum_{i=1}^{N}[Gb_{i}^{\dagger}c_ {i}+h.c], \tag{22}\] where \(\mathcal{H}\) is given by Eq. (20) and \(G=|G|e^{i\alpha}\). Once again, in the weak coupling domain, that is, when \(\{g_{1},g_{2},|G|,\gamma\}\) are significantly less than the cavity leakage \(\kappa_{i}\) and setting \(\Delta_{b,i}=\Delta_{1}=\omega_{b,1}-\omega_{a,1}\), \(\Delta_{c,i}=\Delta_{2}=\omega_{c,1}-\omega_{a,1}\), \(\omega_{a,i}=\omega_{b,i}=\omega_{a,1}\), \(\kappa_{1}>\kappa_{2}\) and \(g_{1}=-g_{2}=g\), we can obtain an effective system between modes \(b_{i}\) and \(c_{i}\). 
The system under PBC translates into the following Bloch Hamiltonian in the frame rotating at \((\Delta_{1}+\Delta_{2})/2\) \[\mathcal{H}_{k}=\begin{pmatrix}\bar{\Delta}-i\Gamma_{r}&i\Gamma_{-}+i\Gamma_{ 2}e^{-i\tilde{k}}\\ i\Gamma_{+}+i\Gamma_{2}e^{i\tilde{k}}&-\bar{\Delta}-i\Gamma_{r}\end{pmatrix}, \tag{23}\] where \(\Gamma_{\pm}=\Gamma_{1}\mp|G|\sin\alpha-i|G|\cos\alpha\), \(\bar{\Delta}=(\Delta_{1}-\Delta_{2})/2\) and \(\Gamma_{r}=\gamma+\Gamma_{1}+\Gamma_{2}\). Notice, the conspicuous emergence of a purely dissipative form of non-reciprocal coupling between the subsystems when \(\alpha=\pi/2\). Note that the non-Hermitian system described by the aforementioned Hamiltonian does not follow BBC. To elucidate this in detail, let us begin by considering the system under OBC. Before expounding the analysis of the full system, it is worthwhile to explicate the properties of the system in the absence of \(c_{N}\) and to simplify the analysis, we set \(\bar{\Delta}=0\) for the remaining part of this section. When the lattice terminates in \(b_{N}\), the \(2N-1\) dimensional Hamiltonian of the system \(H_{broken}^{b}\) supports bi-orthogonal eigenstates with eigenvalue \(-i\Gamma_{r}\) of the form [61, 32] \[|R\rangle^{b}=N_{R}^{b}\sum_{n=0}^{N-1}\big{(}-\frac{\Gamma_{2}} {\Gamma_{+}}\big{)}^{N-n}b_{n+1}^{\dagger}|0\rangle\] \[|L\rangle^{b}=N_{L}^{b}\sum_{n=0}^{N-1}\big{(}-\frac{\Gamma_{2}} {\Gamma_{-}}\big{)}^{N-n}b_{n+1}^{\dagger}|0\rangle\,, \tag{24}\] where \(N_{R}\), \(N_{L}\) are normalization constants such that \(\langle L|R\rangle=1\) provided by \(N_{L}^{b_{n}}N_{R}^{b}=Z^{N+1}\frac{(Z^{-1}-1)}{1-Z^{N}}\), \(Z=\frac{\Gamma_{r}\Gamma_{r}}{\Gamma_{2}}\) and \[H_{broken}^{b}=\begin{bmatrix}-i\Gamma_{r}&i\Gamma_{-}&0&0&\dots&0\\ i\Gamma_{+}&-i\Gamma_{r}&i\Gamma_{2}&0&\dots&0\\ 0&i\Gamma_{2}&-i\Gamma_{r}&i\Gamma_{-}&\dots&0\\ 0&0&i\Gamma_{+}&-i\Gamma_{r}&\dots&0\\ 0&0&\dots&\dots&\dots&i\Gamma_{2}\\ 0&0&\dots&\dots&\dots&i\Gamma_{2}&-i\Gamma_{r}\end{bmatrix}_{2N-1}\,. \tag{25}\] This is due to destructive interference at \(c_{i}\) sites and observe that \(|R\rangle^{b}\) and \(|L\rangle^{b}\) can be written as a column matrix, for instance, \[|R\rangle^{b}=\begin{bmatrix}\big{(}-\frac{\Gamma_{2}}{\Gamma_{+}}\big{)}^{N} \\ 0\\ \big{(}-\frac{\Gamma_{2}}{\Gamma_{+}}\big{)}^{N-1}\\ \vdots\\ 0\\ -\frac{\Gamma_{2}}{\Gamma_{+}}\end{bmatrix}_{2N-1}\,, \tag{26}\] and clearly \(H_{broken}^{b}\,|R\rangle^{b}=-i\Gamma_{r}\,|R\rangle^{b}\). Notice that when \(\alpha=\pi/2\), Eq. (24) morphs into \[|R\rangle^{b}=N_{R}^{b}\sum_{n=0}^{N-1}\big{(}-\frac{\Gamma_{2}} {\Gamma_{1}+|G|}\big{)}^{N-n}b_{n+1}^{\dagger}|0\rangle\] \[|L\rangle^{b}=N_{L}^{b}\sum_{n=0}^{N-1}\big{(}-\frac{\Gamma_{2}} {\Gamma_{1}-|G|}\big{)}^{N-n}b_{n+1}^{\dagger}|0\rangle\,, \tag{27}\] which is, identical to the results for a coherently coupled SSH model with non-reciprocal intra-cell couplings. By the same token, one can construct eigenstates of complex energy \(-i\Gamma_{r}\) of the Hamiltonian \(H_{broken}^{c}\) of the lattice which is broken at the other end, i.e., when the lattice ends on either side with \(c_{i}\) sites as \[|R\rangle^{c}=N_{R}^{c}\sum_{n=1}^{N}\big{(}-\frac{\Gamma_{2}} {\Gamma_{-}}\big{)}^{N}c_{n}^{\dagger}|0\rangle\] \[|L\rangle^{c}=N_{L}^{c}\sum_{n=1}^{N}\big{(}-\frac{\Gamma_{2}} {\Gamma_{+}}\big{)}^{N}c_{n}^{\dagger}|0\rangle\,, \tag{28}\] where \(N_{L}^{c*}N_{R}^{c}=N_{L}^{b_{R}}N_{R}^{b}=Z^{N+1}\frac{(Z^{-1}-1)}{1-Z^{N}}\). 
In stark contrast to Hermitian systems where the absolute value square of the coeffcient of the column matrix in Eq. (26) would represent the probability of finding the excitation at the \(n^{th}\) unit cell, non-Hermitian systems necessitate a bi-orthogonally defined projection to the \(n^{th}\) unit cell. Subsequently, one can define a biorthogonal projection operator \(P_{n}\) to the \(n^{th}\) unit cell of the lattice as \(P_{n}=|b,n\rangle\,\langle b,n|+|c,n\rangle\,\langle c,n|\) where \(|b,n\rangle=b_{n}^{\dagger}|0\rangle\) and \(|c,n\rangle=c_{n}^{\dagger}|0\rangle\). For example, projecting the states in Eq. (24) on to the \(n^{th}\) unit cell provides \[\langle L|^{b}\,P_{n}\,|R\rangle^{b}=\frac{Z^{n+1}(Z^{-1}-1)}{1-Z^{N}}. \tag{29}\] It is, therefore, apparent that for \(|Z|<1\), the excitation is exponentially localized at the left edge (\(n=1\)) whereas \(|Z|>1\) localizes the state at the right edge \(n=N\). Similarly, when the lattice terminates with a \(c_{i}\) mode on either side (\(b_{1}\) is absent), one can obtain analogous results with the excitation localized at the right (left) edge for \(|Z|<1(|Z|>1)\) due to mirror symmetry. The results of the broken chain system can now be used to extract the physics of the full system in the thermodynamic limit (large \(N\)). Consider the Hamiltonian of the full system \[H_{full}=\begin{bmatrix}-i\Gamma_{r}&i\Gamma_{-}&0&0&\dots&0&0\\ i\Gamma_{+}&-i\Gamma_{r}&i\Gamma_{2}&0&\dots&0&0\\ 0&i\Gamma_{2}&-i\Gamma_{r}&i\Gamma_{-}&\dots&0&0\\ 0&0&i\Gamma_{+}&-i\Gamma_{r}&\dots&0&0\\ 0&0&\dots&\dots&\dots&i\Gamma_{2}&0\\ 0&0&\dots&\dots&i\Gamma_{2}&-i\Gamma_{r}&i\Gamma_{-}\\ 0&0&\dots&\dots&0&i\Gamma_{+}&-i\Gamma_{r}\end{bmatrix}_{2N}, \tag{30}\] and states \[|\psi\rangle_{R}^{\pm}=\frac{1}{\sqrt{2}}\begin{Bmatrix}\left[ \ket{R}^{b}\right]\pm\begin{bmatrix}0\\ 0\end{Bmatrix}\right\}\] \[|\psi\rangle_{L}^{\pm}=\frac{1}{\sqrt{2}}\begin{Bmatrix}\left[ \ket{L}^{b}\right]\pm\begin{bmatrix}0\\ 1\end{Bmatrix}\right\}. \tag{31}\] Clearly, \(\langle\psi_{L}^{\pm}|\psi\rangle_{R}^{\pm}=1\) and \(\langle\psi_{L}^{\pm}|\psi\rangle_{R}^{\pm}=0\) and it is straightforward to obtain \[H_{full}\,|\psi\rangle_{R}^{\pm}=\begin{bmatrix}\mp i\Gamma_{2} N_{R}^{c}\\ 0\\ \vdots\\ 0\\ -i\Gamma_{2}N_{R}^{b}\end{bmatrix}_{2N}-i\Gamma_{r}\,|\psi\rangle_{R}^{\pm}\] \[H_{full}^{\dagger}\,|\psi\rangle_{L}^{\pm}=\begin{bmatrix}\pm i \Gamma_{2}N_{L}^{c}\\ 0\\ \vdots\\ 0\\ \Gamma_{2}N_{L}^{b}\end{bmatrix}_{2N}+i\Gamma_{r}\,|\psi\rangle_{L}^{\pm}\,, \tag{32}\] where we have defined \(N_{R}^{b}=N_{R}^{c}=\begin{pmatrix}Z^{N_{R}^{c}}(Z^{\pm}-1)^{2}\\ 1-Z^{N_{R}^{b}}\end{pmatrix}^{1/2}\) and \(N_{L}^{b}=N_{L}^{c}=N_{R}^{b*}\). It is conspicuous from the above expression that \(|\psi\rangle_{R,L}^{\pm}\) represent bi-orthogonal eigenstates of \(H_{full}\) with complex energy \(-i\Gamma_{r}\) for large \(N\) for \(N_{R}^{b}\to 0\), _viz_, if \(|Z|<1\) (and not for \(|Z|\geq 1\)) as the normalization factors in the first part of the RHS of Eq. (32) approach zero. In other words, the states in Eq. (31) are the eigenstates of \(H_{full}\) with eigenvalue \(-i\Gamma_{r}\) (\(\Gamma_{r}\)-modes) for \[\sqrt{(\Gamma_{1}^{2}-|G|^{2})^{2}+4|G|^{2}\Gamma_{1}^{2}\cos^{2}\alpha}< \Gamma_{2}^{2}. \tag{33}\] It is worth noting that for \(\alpha=\pi/2\), the condition for bi-orthogonal edge modes modifies to \(\Gamma_{1}^{2}-|G|^{2}<\Gamma_{2}^{2}\). The Eq. 
(33) may be rewritten as \((\Gamma_{1}^{2}-A_{+})(\Gamma_{1}^{2}-A_{-})<0\), where \[A_{\pm}=-|G|^{2}\cos{(2\alpha)}\pm\sqrt{\Gamma_{2}^{4}-|G|^{4}\sin^{2}{(2 \alpha)}} \tag{34}\] for real values of the RHS of Eq. (34). Therefore, the \(\Gamma_{r}\)-modes of the system under OBC occur in the region where Figure 6: The phase diagram of the system as a function of \(\alpha\) and \(|G|\) for \(\Gamma_{2}\) = 2 characterized by the real positive values of \(A_{\pm}\) demarcating the topological boundaries. In particular, the region in red depicts the topologically trivial parametric domain. Figure 5: The absolute value of eigenvalues of \(H_{full}\) under OBC (blue) and PBC (green) for different values of \(\alpha\) and \(|G|\) with \(\Gamma_{2}=2\), \(\gamma=3\), the number of units cells \(N\) = 25. The vertical lines represent the points \(x_{\pm}\) obtained from Eq. (34) as \(x_{\pm}=\sqrt{A_{\pm}}\) for non-negative values of \(A_{\pm}\). The two states with complex energy \(-i\Gamma_{r}=-i(\gamma+\Gamma_{1}+\Gamma_{2})\) appear in the region \(\Gamma_{1}<x_{-}\) and \(\Gamma_{1}<x_{+}\) as isolated blue lines in the middle in (a), (b), whereas they appear in the region \(x_{-}<\Gamma_{1}<x_{+}\) in (d). \(\Gamma_{1}^{2}<A_{+}\) and \(\Gamma_{1}^{2}>A_{-}\). It is worth noting that the system does not incur \(\Gamma_{r}\)-modes for \(\Gamma_{2}^{\mathrm{d}}-|G|^{\mathrm{d}}\sin^{2}{(2\alpha)}<0\). In Fig. 5, we plot the absolute value of eigenvalues of the full system under OBC and PBC for different values of \(\alpha\) and \(|G|\) for \(\Gamma_{2}=2\) where we have defined \(x_{\mathrm{a}}=\sqrt{A_{\mathrm{a}}}\) for non-negative values of \(A_{\mathrm{a}}\). In Fig. 5 (a-b), for \(|G|=1\), we observe that \(A_{+}>0\) and \(A_{-}<0\) leading \(\Gamma_{r}\)-modes for \(|\Gamma_{1}|<x_{+}\) depicted by the two isolated blue lines. In stark contrast, for \(|G|=3\) and \(\alpha=\pi/4\), \(\Gamma_{2}^{\mathrm{d}}-|G|^{\mathrm{d}}\sin^{2}{(2\alpha)}<0\), leading to the conspicuous absence of \(\Gamma_{r}\)-modes as depicted in Fig. 5 (c). However, \(\alpha=\pi/2\), \(|G|=3\) (Fig. 5 (d)) renders \(A_{\mathrm{a}}>0\) which affords \(\Gamma_{r}\)-modes on either in the region \(x_{-}<\Gamma_{1}<x_{+}\), demonstrating phase dependent nature of topological transitions. Note that the green curves in Fig. 5 correspond to the absolute value of energy under OBC. Clearly, the points where the blue curves approach \(-i\Gamma_{r}\) do not match with that of the green curves as a consequence of the breakdown of BBC. In Fig. 6, we plot the phase (not to be confused with \(\alpha\)) diagram of the system as a function of \(\alpha\) and \(|G|\) depending on the real positive values of \(A_{\mathrm{a}}\), clearly demarcating the topological boundaries. For the region depicted in yellow, we have only the \(A_{+}\geq 0\), begetting \(\Gamma_{r}\)-modes modes for \(|\Gamma_{1}|<\sqrt{A_{+}}\) as displayed in Fig. 5 (a-b). The region in blue, however, provides \(A_{\mathrm{a}}\geq 0\), lending \(\Gamma_{r}\)-modes when \(\Gamma_{1}^{2}<A_{+}\) and \(\Gamma_{1}^{2}>A_{-}\) as demonstrated in Fig. 5 (d). In contrast, the region in red prohibits real positive values for \(A_{\mathrm{a}}\) and therefore does not lend itself to a topological description evident from Fig. 5 (c). To provide more substance to the above discussion, we plot, in Fig. 
7 (a-b) the absolute value of \(N\) components (equally spaced between 0 and 1) of the vector \(V^{i}=\)[ \(\pi_{1}^{i}\)\(\pi_{2}^{i}\)\(...\)\(\pi_{N}^{i}\) ] for two different regions of Fig. 5(d) with \(i\in\{1,2,..N\}\) and \[\pi_{n}^{i}=\frac{\left\langle L_{i}^{full}\right|P_{n}\left|R_{i}^{full} \right\rangle}{\left\langle L_{i}^{full}\right|\!\!R_{i}^{full}\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! lation of eigenstates at the boundaries of the system owing to the non-reciprocal nature of dissipative coupling. On the contrary, \(\alpha=0\) does not incur any non-reciprocity in coupling, culminating in the absence of skin effect as depicted in Fig. 8 (b). In stark contrast to Fig. 8 (a, b), the condition \(\alpha=\pi/2\) and \(|G|=\Gamma_{1}\) results in \(\Gamma_{+}\to 0\), i.e., extreme non-reciprocity and skin-effect. This is manifested in Fig. 8 (c), showcasing the remarkably high localization of the right eigenvectors at the left edge. ## VI Conclusions In conclusion, we considered SSH models with a dissipative form of coupling between the subsystems and discussed some of the interesting physics ensuing from such models. In particular, we provided two distinct schemes for the realization of DSSH models in the context of bosonic systems and electrical LCR resonators. We showed that a collection of resistively coupled LCR resonators mimic the topology of DSSH models by solving the Kirchhoff's equation for voltages. In the framework of bosonic systems, we observed that a system of non-interacting oscillators interacting with an engineered bath of modes possessing considerably small lifetimes compared to other system parameters is equivalent to a DSSH model. Further, by enabling the coherent interaction between the oscillators under consideration, we showed that the system affords non-reciprocal dissipative couplings eliciting topological transitions governed by the phase of the coherent interaction strength and skin effect. Note that our analyses are generic, relevant to a large class of systems, especially in microwave to optical settings and merits immediate realization in the experiments. ## VII Acknowledgements GSA and MOS acknowledge the support of the Air Force Office of Scientific Research [AFOSR award no FA9550-20-1-0366 DEF] and the Robert A. Welch Foundation (Grant No. A-1261, Grant no A-1943). JMPN acknowledges the support of the Herman F. Heep and Minnie Belle Heep Texas A&M University endowed fund held/administered by the Texas A&M Foundation. ## Appendix A Kirchhoff's equations for the circuit DSSH model We begin by considering the two blocks of LCR circuits, in other words, a dimer, coupled through a resistor as depicted in Fig. 9. 
After the choice of direction of currents as illustrated in the figure, we use the well-known Kirchoff's circuit laws to explicate the dimer dynamics. \[I_{R_{c1}}=I_{L_{1}}+I_{C_{1}}+I_{R_{1}}=-(I_{L_{2}}+I_{C_{2}}+I_{R_{2}}), \tag{10}\] \[V_{n}=-L_{n}\frac{dI_{L_{n}}}{dt}=-\frac{1}{C_{n}}\int_{0}^{t}I_{C_{n}}(t^{ \prime})dt^{\prime}=-I_{R_{1}}R_{1}, \tag{11}\] \[V_{1}-V_{2}=I_{R_{c1}}R_{c1}, \tag{12}\] where n=1,2. Employing Eq. (11) and Eq. (12) into Eq. (10), we obtain \[\ddot{V}_{i}+\frac{1}{C_{i}}\Big{(}\frac{1}{R_{c1}}+\frac{1}{R_{i}}\Big{)} \dot{V}_{i}+\frac{V_{i}}{L_{i}C_{i}}=\frac{\dot{V}_{j}}{R_{c1}C_{i}},\quad i \neq j. \tag{13}\] Upon redefining \(\omega_{i}=\frac{1}{\sqrt{L_{i}}C_{i}}\), \(\Gamma_{i}=\frac{1}{R_{c1}C_{i}}\) and \(\gamma_{i}=\frac{1}{R_{i}C_{i}}\) Eq. (13) reduces to \[\dot{V}_{i}+(\gamma_{i}+\Gamma_{i})V_{i}+\omega_{i}^{2}V_{i}=\Gamma_{i}\dot{ V}_{j},\quad i\neq j, \tag{14}\] The Eq. (14) can be further simplified if we assume \(V_{i}(t)=\frac{1}{2}u_{i}(t)e^{-i\omega_{i}t}+c.c\), where \(\omega_{0}=\frac{1}{2}(\omega_{1}+\omega_{2})\) and \(u_{i}(t)\) is a slowly varying envelope. In addition, we assume that \(k<<\omega_{i}\), \(\omega_{1}\) close to \(\omega_{2}\) and \(C_{1}=C_{2}\). Under these conditions, the Eq. (14) can be approximated to \[\begin{pmatrix}u_{1}\\ u_{2}\end{pmatrix}=-i\frac{1}{2}\begin{pmatrix}\omega_{1}-\omega_{2}-i(\gamma _{1}+\Gamma_{1})&i\Gamma_{1}\\ i\Gamma_{1}&\omega_{2}-\omega_{1}-i(\gamma_{2}+\Gamma_{1})\end{pmatrix} \begin{pmatrix}u_{1}\\ u_{2}\end{pmatrix}. \tag{15}\] Extending the analysis to the full system in Fig. 3, we obtain Eq. (15).
2309.04767
Mass effect on an elliptic PDE involving two Hardy-Sobolev critical exponents
We let $\Omega$ be a bounded domain of $\mathbb{R}^3$ and $\Gamma$ be a closed curve contained in $\Omega$. We study existence of positive solutions $u \in H^1_0\left(\Omega\right)$ to the equation $$ -\Delta u+hu=\lambda\rho^{-s_1}_\Gamma u^{5-2s_1}+\rho^{-s_2}_\Gamma u^{5-2s_2} \qquad \textrm{ in } \Omega $$ where $h$ is a continuous function and $\rho_\Gamma$ is the distance function to $\Gamma$. We prove existence of solutions depending on the regular part of the Green function of linear operator. We prove the existence of positive mountain pass solutions for this Euler-Lagrange equation depending on the mass which is the regular part of the Green function of the linear operator $-\Delta+h$.
El Hadji Abdoulaye Thiam
2023-09-09T11:43:31Z
http://arxiv.org/abs/2309.04767v1
# Mass effect on an elliptic PDE involving two Hardy-Sobolev critical exponents ###### Abstract. We let \(\Omega\) be a bounded domain of \(\mathbb{R}^{3}\) and \(\Gamma\) be a closed curve contained in \(\Omega\). We study existence of positive solutions \(u\in H^{1}_{0}\left(\Omega\right)\) to the equation \[-\Delta u+hu=\lambda\rho_{\Gamma}^{-s_{1}}u^{5-2s_{1}}+\rho_{\Gamma}^{-s_{2}}u ^{5-2s_{2}}\qquad\text{ in }\Omega\] where \(h\) is a continuous function and \(\rho_{\Gamma}\) is the distance function to \(\Gamma\). We prove existence of solutions depending on the regular part of the Green function of linear operator. We prove the existence of positive mountain pass solutions for this Euler-Lagrange equation depending on the mass which is the regular part of the Green function of the linear operator \(-\Delta+h\). **Key Words**: Two Hardy-Sobolev critical exponents; Green function; Positive mass; Mountain Pass solution; Curve singularity. ## 1. Introduction In this paper, we are concerned with the mass effect on the existence of mountain pass solutions of the following nonlinear partial differential equation involving two Hardy-Sobolev critical exponents in \(\mathbb{R}^{3}\). More precisely, letting \(h\) be a continuous function and \(\lambda\) be a real parameter, we consider \[\begin{cases}-\Delta u(x)+hu(x)=\lambda\frac{u^{5-2s_{1}}(x)}{\rho_{\Gamma}^{ s_{1}}(x)}+\frac{u^{5-2s_{2}}(x)}{\rho_{\Gamma}^{s_{2}}(x)}&\text{ in }\Omega\\ \\ u(x)>0\qquad\text{ and }\qquad u(x)=0&\text{ on }\partial\Omega,\end{cases} \tag{1.1}\] where \(\rho_{\Gamma}(x):=\inf_{y\in\Gamma}|y-x|\) is the distance function to the curve \(\Gamma\) and for \(0<s_{2}<s_{1}<2\), \(2^{*}_{s_{1}}:=6-2s_{1}\) and \(2^{*}_{s_{2}}:=6-2s_{2}\) are two critical Hardy-Sobolev exponents. To study the equation (1.1), we consider the following non-linear functional \(\Psi:H^{1}_{0}(\Omega)\to\mathbb{R}\) defined by: \[\Psi(u):=\frac{1}{2}\int_{\Omega}|\nabla u|^{2}dx+\frac{1}{2}\int_{\Omega}h(x) u^{2}dx-\frac{\lambda}{2^{*}_{s_{1}}}\int_{\Omega}\rho_{\Gamma}^{-s_{1}}(x)|u|^{2^{* }_{s_{1}}}dx-\frac{1}{2^{*}_{s_{2}}}\int_{\Omega}\rho_{\Gamma}^{-s_{2}}(x)|u|^ {2^{*}_{s_{2}}}dx. \tag{1.2}\] Then there exists a positive constant \(r>0\) and \(u_{0}\in H^{1}_{0}(\Omega)\) such that \(\|u_{0}\|_{H^{1}_{0}(\Omega)}>r\) and \[\inf_{\|u\|_{H^{1}_{0}(\Omega)}=r}\Psi(u)>\Psi(0)\geq\Phi(u_{0}),\] see for instance the paper of the author [[7], Lemma 4.5]. Then the point \((0,\Psi(0))\) is separated from the point \((u_{0},\Psi(u_{0}))\) by a ring of mountains. Set \[c^{*}:=\inf_{P\in\mathcal{P}}\,\max_{v\in P}\Psi(v), \tag{1.3}\] where \(\mathcal{P}\) is the class of continuous paths in \(H^{1}_{0}(\Omega)\) connecting \(0\) to \(u_{0}\). Since \(2^{*}_{s_{2}}>2^{*}_{s_{1}}\), the function \(t\longmapsto\Psi(tv)\) has the unique maximum for \(t\geq 0\). Furthermore, we have \[c^{*}:=\inf_{u\in H^{1}_{0}(\Omega),u\geq 0,u\neq 0}\,\max_{t\geq 0}\Psi(tu).\] Due to the fact that the embedding of \(H^{1}_{0}(\Omega)\) into the weighted Lebesgue spaces \(L^{2^{*}_{s_{1}}}(\rho_{\Gamma}^{-si}dx)\) is not compact, the functional \(\Psi\) does not satisfy the Palais-Smale condition. Therefore, in general \(c^{*}\) might not be a critical value for \(\Psi\). 
To recover compactness, we study the following non-linear problem: let \(x=(y,z)\in\mathbb{R}\times\mathbb{R}^{2}\) and consider \[\left\{\begin{aligned} -\Delta u&=\lambda\frac{u^{2^{*}_{z_{1}}- 1}(x)}{|z|^{s_{1}}}+\frac{u^{2^{*}_{z_{2}}-1}}{|z|^{s_{2}}}&\text{in } \mathbb{R}^{3}\\ u(x)&>0&\text{in }\mathbb{R}^{3}.\end{aligned}\right. \tag{1.4}\] To obtain solutions of (1.4), we consider the functional \(\Phi:\mathcal{D}^{1,2}(\mathbb{R}^{N})\) defined by \[\Phi(u):=\frac{1}{2}\int_{\mathbb{R}^{3}}|\nabla u|^{2}dx-\frac{\lambda}{2^{*} _{z_{1}}}\int_{\mathbb{R}^{3}}|z|^{-s_{1}}|u|^{2^{*}_{z_{1}}}dx-\frac{1}{2^{*} _{z_{2}}}\int_{\mathbb{R}^{3}}|z|^{-s_{2}}|u|^{2^{*}_{z_{2}}}dx. \tag{1.5}\] Next, we define \[\beta^{*}:=\inf_{u\in D^{1,2}(\mathbb{R}^{3}),u\geq 0,u\neq 0}\max_{t\geq 0} \Phi(tu).\] Then we get compactness provided \[c^{*}<\beta^{*},\] see Proposition 4.3 in [7]. Therefore the existence, symmetry and decay estimates of non-trivial solution \(w\in\mathcal{D}^{1,2}(\mathbb{R}^{3})\) of (1.4) play an important role in problem (1.1). Then we have the following results. **Proposition 1.1**.: _Let \(0\leq s_{2}<s_{1}<2\), \(\lambda\in\mathbb{R}\). Then equation_ \[\left\{\begin{aligned} -\Delta u&=\lambda\frac{u^{2^{*}_{z_{1} }-1}(x)}{|z|^{s_{1}}}+\frac{u^{2^{*}_{z_{2}}-1}}{|z|^{s_{2}}}& \text{in }\mathbb{R}^{3}\\ u(x)&>0&\text{in }\mathbb{R}^{3} \end{aligned}\right. \tag{1.6}\] _has a positive ground state solution \(w\in\mathcal{D}^{1,2}(\mathbb{R}^{3})\) depending only on \(|y|\) and \(|z|\). Moreover_ \[\frac{C_{1}}{1+|x|}\leq w(x)\leq\frac{C_{2}}{1+|x|}\qquad\text{ in }\mathbb{R}^{3}. \tag{1.7}\] _Moreover, for \(|x|=|(t,z)|\leq 1\), we have_ \[|\nabla w(x)|+|x||D^{2}w(x)|\leq C_{2}|z|^{1-s_{1}} \tag{1.8}\] _and if \(|x|=|(t,z)|\geq 1\), we have_ \[|\nabla w(x)|+|x||D^{2}w(x)|\leq C_{2}\max(1,|z|^{-s_{1}})|x|^{1-N}. \tag{1.9}\] Next, we let \(G(x,y)\) be the Dirichlet Green function of the operator \(-\Delta+h\), with zero Dirichlet data. It satisfies \[\left\{\begin{aligned} -\Delta_{x}G(x,y)+h(x)G(x,y)=0& \text{for every }x\in\Omega\setminus\{y\}\\ G(x,y)=0&\text{for every }x\in\partial\Omega.\end{aligned}\right. \tag{1.10}\] In addition there exists a continuous function \(\mathbf{m}:\Omega\to\mathbb{R}\) and a positive constant \(c>0\) such that \[G(x,y)=\frac{c}{|x-y|}+c\,\mathbf{m}(y)+o(1)\qquad\text{ as }x\to y. \tag{1.11}\] We call the function \(\mathbf{m}:\Omega\to\mathbb{R}\) the _mass_ of \(-\Delta+h\) in \(\Omega\). We note that \(-\mathbf{m}\) is occasionally called the _Robin function_ of \(-\Delta+h\) in the literature. Then our main result is the following. Then we have **Theorem 1.2**.: _Let \(0\leq s_{2}<s_{1}<2\) and \(\Omega\) be a bounded domain of \(\mathbb{R}^{3}\). Consider \(\Gamma\) a smooth closed curve contained in \(\Omega\). Let \(h\) be a continuous function such that the linear operator \(-\Delta+h\) is coercive. We assume that there exists \(y_{0}\in\Gamma\) such that_ \[m(y_{0})>0. \tag{1.12}\] _Moreover there exists \(u\in H^{1}_{0}(\Omega)\setminus\{0\}\) non-negative solution of_ \[-\Delta u(x)+hu(x)=\lambda\frac{u^{5-2s_{1}}(x)}{\rho^{5_{1}}_{\Gamma}(x)}+ \frac{u^{5-2s_{2}}(x)}{\rho^{5_{2}}_{\Gamma}(x)}\qquad\text{ in }\Omega.\] In contrast to the case \(N\geq 4\) (see [7] for more details), the existence of solution does not depend on the local geometry of the singularity but on the location of the curve \(\Gamma\). 
Besides in the study of Hardy-Sobolev equations in domains with interior singularity for the Three dimensional case, the effect of the mass plays an important role in the existence of positive solutions. For Hardy-Sobolev inequality on Riemannian manifolds with singularity a point, Jaber [3] proved the existence of positive solutions when the mass is positive. We refer also to [4] for existence of mountain pass solution to a Hardy-Sobolev equation with an additional perturbation term. For the Hardy-Sobolev equations on domains with singularity a curve, we refer to the papers of the author and Fall [1] and the author and Ijaodoro [2]. We also suggest to the interested readers the nice work of Schoen-Yau [5] and [6] for more details related to the positive mass theorem. We also mention that this paper is the \(3\)-dimensional version of the work of they author [7]. The proof of Theorem 1.2 relies on test function methods. Namely we build appropriate test functions allowing to compare \(c^{*}\) and \(\beta^{*}\). Near the concentration point \(y_{0}\in\Gamma\), the test function is similar to the test function in the case \(N\geq 4\) but away from it is replaced with the regular part of the Green function which makes apear the mass, see Section 3. ## 2. Tool Box We consider the function \[\mathcal{R}:\mathbb{R}^{3}\setminus\{0\}\to\mathbb{R},\qquad x\mapsto\mathcal{ R}(x)=\frac{1}{|x|}\] which satisfies \[-\Delta\mathcal{R}=0\qquad\text{ in }\mathbb{R}^{3}\setminus\{0\}. \tag{2.1}\] We denote by \(G\) the solution to the equation \[\begin{cases}-\Delta_{x}G(y,\cdot)+hG(y,\cdot)=0&\text{ in }\Omega\setminus\{y\}. \\ G(y,\cdot)=0&\text{ on }\partial\Omega,\end{cases} \tag{2.2}\] and satisfying \[G(x,y)=\mathcal{R}(x-y)+O(1)\qquad\text{ for }x,y\in\Omega\text{ and }x\neq y. \tag{2.3}\] We note that \(G\) is proportional to the Green function of \(-\Delta+h\) with zero Dirichlet data. We let \(\chi\in C_{\infty}^{\infty}(-2,2)\) with \(\chi\equiv 1\) on \((-1,1)\) and \(0\leq\chi<1\). For \(r>0\), we consider the cylindrical symmetric cut-off function \[\eta_{r}(t,z)=\chi\left(\frac{|t|+|z|}{r}\right)\qquad\qquad\text{ for every }(t,z)\in\mathbb{R}\times\mathbb{R}^{2}. \tag{2.4}\] It is clear that \[\eta_{r}\equiv 1\quad\text{ in }Q_{r},\qquad\eta_{r}\in H^{1}_{0}(Q_{2r}), \qquad|\nabla\eta_{r}|\leq\frac{C}{r}\quad\text{ in }\mathbb{R}^{3}.\] For \(y_{0}\in\Omega\), we let \(r_{0}\in(0,1)\) such that \[y_{0}+Q_{2r_{0}}\subset\Omega. \tag{2.5}\] We define the function \(M_{y_{0}}:Q_{2r_{0}}\to\mathbb{R}\) given by \[M_{y_{0}}(x):=G(y_{0},x+y_{0})-\eta_{r}(x)\frac{1}{|x|}\qquad\text{ for every }x\in Q_{2r_{0}}. \tag{2.6}\] It follows from (2.3) that \(M_{y_{0}}\in L^{\infty}(Q_{r_{0}})\). By (2.2) and (2.1), \[|-\Delta M_{y_{0}}(x)+h(x)M_{y_{0}}(x)|\leq\frac{C}{|x|}=C\mathcal{R}(x)\qquad \text{ for every }x\in Q_{r_{0}},\] whereas \(\mathcal{R}\in L^{p}(Q_{r_{0}})\) for every \(p\in(1,3)\). Hence by elliptic regularity theory, \(M_{y_{0}}\in W^{2,p}(Q_{r_{0}/2})\) for every \(p\in(1,3)\). Therefore by Morrey's embdding theorem, we deduce that \[\|M_{y_{0}}\|_{C^{1,\varrho}(Q_{r_{0}/2})}\leq C\qquad\text{ for every }\varrho\in(0,1). \tag{2.7}\] In view of (1.11), the mass of the operator \(-\Delta+h\) in \(\Omega\) at the point \(y_{0}\in\Omega\) is given by \[\mathbf{m}(y_{0})=M_{y_{0}}(0). \tag{2.8}\] Next, we have the following result which will be important in the sequel. 
**Lemma 2.1**.: _Consider the function \(v_{\varepsilon}:\mathbb{R}^{3}\setminus\{0\}\to\mathbb{R}\) given by_ \[v_{\varepsilon}(x)=\varepsilon^{-1}w\left(\frac{x}{\varepsilon}\right).\] _Then there exists a constant \(\textbf{c}>0\) and a sequence \((\varepsilon_{n})_{n\in\mathbb{N}}\) (still denoted by \(\varepsilon\)) such that_ \[v_{\varepsilon}(x)\to\frac{\textbf{c}}{|x|}\qquad\text{ and }\qquad\nabla v_{ \varepsilon}(x)\to-\textbf{c}\frac{x}{|x|^{3}}\qquad\text{ for all most every }x\in\mathbb{R}^{3}\] _and_ \[v_{\varepsilon}(x)\to\frac{\textbf{c}}{|x|}\qquad\text{ and }\qquad\nabla v_{ \varepsilon}(x)\to-\textbf{c}\frac{x}{|x|^{3}}\qquad\text{ for every }x\in\mathbb{R}^{3}\setminus\{z=0\}. \tag{2.9}\] Proof.: By Proposition 1.1, we have that \((v_{\varepsilon})\) is bounded in \(C^{2}_{loc}(\mathbb{R}^{3}\setminus\{z=0\})\). Therefore by Arzela-Ascolli's theorem \(v_{\varepsilon}\) converges to \(v\) in \(C^{1}_{loc}(\mathbb{R}^{3}\setminus\{z=0\})\). In particular, \[v_{\varepsilon}\to v\qquad\text{ and }\qquad\nabla v_{\varepsilon}\to\nabla v \qquad\text{ almost every where on }\mathbb{R}^{3}.\] It is plain, from (1.7), that \[0<\frac{C_{1}}{\varepsilon+|x|}\leq v_{\varepsilon}(x)\leq\frac{C_{2}}{ \varepsilon+|x|}\qquad\text{ for almost every }x\in\mathbb{R}^{3}. \tag{2.10}\] By (1.4), we have \[-\Delta v_{\varepsilon}(x)=\lambda\varepsilon^{2-s_{1}}\frac{v_{\varepsilon}^ {5-2s_{1}}(x)}{|z|^{s_{1}}}+\varepsilon^{2-s_{2}}\frac{v_{\varepsilon}^{5-2s_ {2}}(x)}{|z|^{s_{2}}}\qquad\text{ in }\mathbb{R}^{3}. \tag{2.11}\] Newt, we let \(\varphi\in C^{\infty}_{c}\left(\mathbb{R}^{3}\setminus\{0\}\right)\). We multiply (2.11) by \(\varphi\) and integrate by parts to get \[-\int_{\mathbb{R}^{3}}v_{\varepsilon}\Delta\varphi dx=\lambda \varepsilon^{2-s_{1}}\int_{\mathbb{R}^{3}}\frac{v_{\varepsilon}^{5-2s_{1}}(x)} {|z|^{s_{1}}}\varphi(x)dx+\varepsilon^{2-s_{2}}\int_{\mathbb{R}^{3}}\frac{v_{ \varepsilon}^{5-2s_{2}}(x)}{|z|^{s_{2}}}\varphi(x)dx.\] By (2.10) and the dominated convergence theorem, we can pass to the limit in the above identity and deduce that \[\Delta v=0\qquad\quad\text{ in }\mathcal{D}^{\prime}\left(\mathbb{R}^{3} \setminus\{0\}\right).\] In particular \(v\) is equivalent to a function of class \(C^{\infty}\left(\mathbb{R}^{3}\setminus\{0\}\right)\) which is still denoted by \(v\). Thanks to (2.10), by Bocher's theorem, there exists a constant \(\textbf{c}>0\) such that \(v(x)=\frac{\textbf{c}}{|x|}.\) The proof of the lemma is thus finished. We finish this section by the following estimates. Thanks to the decay estimates in Proposition 1.1, we have **Lemma 2.2**.: _There exists a constant \(C>0\) such that for every \(\varepsilon,r\in(0,r_{0}/2)\) and for \(s\in(0,2)\), we have_ \[\int_{Q_{r/\varepsilon}}|\nabla w|^{2}dx\leq C\max\left(1,\frac{ \varepsilon}{r}\right),\qquad\int_{Q_{r/\varepsilon}}|w|^{2}dx\leq C\max\left( 1,\frac{r}{\varepsilon}\right), \tag{2.12}\] \[\int_{Q_{r/\varepsilon}}w|\nabla w|dx\leq C\max\left(1,\log\frac{r}{ \varepsilon}\right), \tag{2.13}\] \[\int_{Q_{r/\varepsilon}}|\nabla w|dx\leq C\max\left(1,\frac{r}{\varepsilon} \right),\qquad\int_{Q_{r/\varepsilon}}|w|dx\leq C\max\left(1,\frac{r^{2}}{ \varepsilon^{2}}\right) \tag{2.14}\] _and_ \[\varepsilon^{2}\int_{Q_{r/\varepsilon}}|z|^{-s}|x|^{2}w^{2_{*}^{*}}dx+ \varepsilon\int_{Q_{4r/\varepsilon}\setminus Q_{r/\varepsilon}}|z|^{-s}w^{2_ {*}^{*}-1}dx+\int_{\mathbb{R}^{3}\setminus Q_{r/\varepsilon}}|z|^{-s}w^{2_{*}^ {*}}dx=o(\varepsilon). \tag{2.15}\] ## 3. 
Proof of the main result Given \(y_{0}\in\Gamma\subset\Omega\subset\mathbb{R}^{3}\), we let \(r_{0}\) as defined in (2.5). For \(r\in(0,r_{0}/2)\), we consider \(F_{y_{0}}:Q_{r}\to\Omega\) parameterizing a neighborhood of \(y_{0}\) in \(\Omega\), with the property that \(F_{y_{0}}(0)=y_{0}\), \[\rho_{\Gamma}(F_{y_{0}}(x))=|z|,\qquad\text{ for all }x=(y,z)\in Q_{r}. \tag{3.1}\] Moreover in these local coordinates, we have \[g_{ij}(x)=\delta_{ij}+O(|x|) \tag{3.2}\] and \[\sqrt{|g|}(x)=1+\langle A,z\rangle+O\left(|x|^{2}\right), \tag{3.3}\] where \(A\in\mathbb{R}^{2}\) is the vector curvature of \(\Gamma\) and \(|g|\) stands for the determinant of \(g\), see [1] for more details related to this parametrization. Next, for \(\varepsilon>0\), we consider \(u_{\varepsilon}:\Omega\to\mathbb{R}\) given by \[u_{\varepsilon}(y):=\varepsilon^{-1/2}\eta_{r}(F_{y_{0}}^{-1}(y))w\left( \frac{F_{y_{0}}^{-1}(y)}{\varepsilon}\right).\] We can now define the test function \(\Psi_{\varepsilon}:\Omega\to\mathbb{R}\) by \[\Psi_{\varepsilon}\left(y\right)=u_{\varepsilon}(y)+\varepsilon^{1/2}\mathbf{ c}\,\eta_{2r}(F_{y_{0}}^{-1}(y))M_{y_{0}}(F_{y_{0}}^{-1}(y)). \tag{3.4}\] It is plain that \(\Psi_{\varepsilon}\in H^{1}_{0}(\Omega)\) and \[\Psi_{\varepsilon}\left(F_{y_{0}}(x)\right)=\varepsilon^{-1/2}\eta_{r}(x)w \left(\frac{x}{\varepsilon}\right)+\varepsilon^{1/2}\mathbf{c}\,\eta_{2r}(x)M _{y_{0}}(x)\qquad\text{ for every }x\in\mathbb{R}^{N}.\] To alleviate the notations, we will write \(\varepsilon\) instead of \(\varepsilon_{n}\) and we will remove the subscript \(y_{0}\), by writing \(M\) and \(F\) in the place of \(M_{y_{0}}\) and \(F_{y_{0}}\) respectively. We define \[\widetilde{\eta}_{r}(y):=\eta_{r}(F^{-1}(y)),\qquad V_{\varepsilon}(y):=v_{ \varepsilon}(F^{-1}(y))\qquad\text{ and }\qquad\widetilde{M}_{2r}(y):=\eta_{2r}(F^{-1}(y))M(F^{-1}(y)),\] where \(v_{\varepsilon}(x)=\varepsilon^{-1}w\left(\frac{x}{\varepsilon}\right).\) With these notations, (3.4) becomes \[\Psi_{\varepsilon}(y)=u_{\varepsilon}(y)+\varepsilon^{\frac{1}{2}}\mathbf{c} \,\widetilde{M}_{2r}(y)=\varepsilon^{\frac{1}{2}}V_{\varepsilon}(y)+ \varepsilon^{\frac{1}{2}}\mathbf{c}\,\widetilde{M}_{2r}(y). \tag{3.5}\] In the sequel we define \(\mathcal{O}_{r,\varepsilon}\) as \[\lim_{r\to 0}\frac{\mathcal{O}_{r,\varepsilon}}{\varepsilon}=0.\] Then we have the following. **Lemma 3.1**.: _We have_ \[\int_{\Omega}|\nabla\Psi_{\varepsilon}|^{2}dy+\int_{\Omega}h|\Psi_{ \varepsilon}|^{2}dy= \int_{\mathbb{R}^{3}}|\nabla w|^{2}dx+\pi\varepsilon\mathbf{m}(y_{0}) \mathbf{c}^{2}+\mathcal{O}_{r}(\varepsilon), \tag{3.6}\] _as \(\varepsilon\to 0\)._ Proof.: Recalling (3.5), direct computations give \[\int_{F(Q_{2r})\setminus F(Q_{r})}|\nabla\Psi_{\varepsilon}|^{2}dy =\int_{F(Q_{2r})\setminus F(Q_{r})}|\nabla\left(\widetilde{\eta} _{r}u_{\varepsilon}\right)|^{2}dy+\varepsilon\mathbf{c}^{2}\int_{F(Q_{2r}) \setminus F(Q_{r})}|\nabla\widetilde{M}_{2r}|^{2}dy\] \[+2\varepsilon^{1/2}\mathbf{c}\int_{F(Q_{2r})\setminus F(Q_{r})} \nabla\left(\widetilde{\eta}_{r}u_{\varepsilon}\right)\cdot\nabla\widetilde{M} _{2r}dy\] \[=\varepsilon\int_{F(Q_{2r})\setminus F(Q_{r})}|\nabla\left( \widetilde{\eta}_{r}V_{\varepsilon}\right)|^{2}dy+\varepsilon\mathbf{c}^{2} \int_{F(Q_{2r})\setminus F(Q_{r})}|\nabla\widetilde{M}_{2r}|^{2}dy\] \[+2\varepsilon\mathbf{c}\int_{F(Q_{2r})\setminus F(Q_{r})} \nabla\left(\widetilde{\eta}_{r}V_{\varepsilon}\right)\cdot\nabla\widetilde{M} _{2r}dy. 
\tag{3.7}\] By (2.4), \(\eta_{r}v_{\varepsilon}=\eta_{r}\varepsilon^{-1}w(\cdot/\varepsilon)\) is cylindrically symmetric. Therefore by the change variable \(y=F(x)\) and using (3.2), we get \[\varepsilon\int_{F(Q_{2r})\setminus F(Q_{r})}|\nabla\left(\widetilde {\eta}_{r}V_{\varepsilon}\right)|^{2}dy =\varepsilon\int_{Q_{2r}\setminus Q_{r}}|\nabla\left(\eta_{r}v_{ \varepsilon}\right)|_{g}^{2}\sqrt{g}dx\] \[=\varepsilon\int_{Q_{2r}\setminus Q_{r}}|\nabla\left(\eta_{r}v_{ \varepsilon}\right)|^{2}dx+O\left(\varepsilon r^{2}\int_{Q_{2r}\setminus Q_{r }}|\nabla\left(\eta_{r}v_{\varepsilon}\right)|^{2}dx\right). \tag{3.8}\] By computing, we find that \[\varepsilon\int_{Q_{2r}\setminus Q_{r}}|\nabla\left(\eta_{r}v_{ \varepsilon}\right)|^{2}dx \leq\varepsilon\int_{Q_{2r}\setminus Q_{r}}|\nabla v_{\varepsilon }|^{2}dx+\varepsilon\int_{Q_{2r}\setminus Q_{r}}v_{\varepsilon}^{2}|\nabla \eta_{r}|^{2}dx+2\varepsilon\int_{Q_{2r}\setminus Q_{r}}v_{\varepsilon}| \nabla v_{\varepsilon}||\nabla\eta_{r}|dx\] \[\leq\varepsilon\int_{Q_{2r}\setminus Q_{r}}|\nabla v_{\varepsilon }|^{2}dx+\frac{C}{r^{2}}\varepsilon\int_{Q_{2r}\setminus Q_{r}}v_{\varepsilon }^{2}dx+\frac{C}{r}\varepsilon\int_{Q_{2r}\setminus Q_{r}}v_{\varepsilon}| \nabla v_{\varepsilon}|dx\] \[=\int_{Q_{2r/\varepsilon}\setminus Q_{r/\varepsilon}}|\nabla w|^ {2}dx+C\frac{\varepsilon}{r^{2}}\int_{Q_{2r/\varepsilon}\setminus Q_{r/ \varepsilon}}w^{2}dx+\frac{C}{r}\varepsilon\int_{Q_{2r/\varepsilon}\setminus Q _{r/\varepsilon}}w|\nabla w|dx.\] From this and (2.12) and (2.13), we get \[O\left(\varepsilon r^{2}\int_{Q_{2r}\setminus Q_{r}}|\nabla\left(\eta_{r}v_{ \varepsilon}\right)|^{2}dx\right)=\mathcal{O}_{r}(\varepsilon).\] We replace this in (3.8) to have \[\varepsilon\int_{F(Q_{2r})\setminus F(Q_{r})}|\nabla\left(\widetilde{\eta}_{r }V_{\varepsilon}\right)|^{2}dy=\varepsilon\int_{Q_{2r}\setminus Q_{r}}| \nabla(\eta_{r}v_{\varepsilon})|^{2}dx+\mathcal{O}_{r}(\varepsilon). \tag{3.9}\] We have the following estimates \[0\leq v_{\varepsilon}\leq C|x|^{-1}\quad\text{ for }x\in\mathbb{R}^{3} \setminus\{0\}\qquad\text{ and }\qquad|\nabla v_{\varepsilon}(x)|\leq C|x|^{-2}\quad\text{ for }|x|\geq\varepsilon, \tag{3.10}\] which easily follows from (1.7), (3.2) and (2.1). 
By these estimates, (3.2), (3.3) and (2.7) together with the change of variable \(y=F(x)\), we have \[\varepsilon\int_{F(Q_{2r})\setminus F(Q_{r})}\nabla\left( \widetilde{\eta}_{r}V_{\varepsilon}\right)\cdot\nabla\widetilde{M}_{2r}dy= \varepsilon\int_{Q_{2r}\setminus Q_{r}}\nabla\left(\eta_{r}v_{ \varepsilon}\right)\cdot\nabla Mdx\] \[+O\left(\varepsilon\int_{Q_{2r}\setminus Q_{r}}|\nabla v_{ \varepsilon}|dx+\frac{\varepsilon}{r}\int_{Q_{2r}\setminus Q_{r}}v_{ \varepsilon}dx\right)\] \[= \varepsilon\int_{Q_{2r}\setminus Q_{r}}\nabla\left(\eta_{r}v_{ \varepsilon}\right)\cdot\nabla Mdx+\mathcal{O}_{r}(\varepsilon).\] This with (3.9), (2.7) and (3.7) give \[\int_{F(Q_{2r})\setminus F(Q_{r})}|\nabla\Psi_{\varepsilon}|^{2}dy =\varepsilon\int_{Q_{2r}\setminus Q_{r}}|\nabla\left(\eta_{r}v_{ \varepsilon}\right)|^{2}dx+\varepsilon\mathbf{c}^{2}\int_{Q_{2r}\setminus Q _{r}}|\nabla(\eta_{2r}M)|^{2}dx\] \[+2\varepsilon\mathbf{c}\int_{Q_{2r}\setminus Q_{r}}\nabla \left(\eta_{r}v_{\varepsilon}\right)\cdot\nabla Mdx+\mathcal{O}_{r}( \varepsilon).\] Thanks to Lemma 2.1 and (3.10), we can thus use the dominated convergence theorem to deduce that, as \(\varepsilon\to 0\), \[\int_{Q_{2r}\setminus Q_{r}}|\nabla\left(\eta_{r}v_{\varepsilon}\right)|^{2}dx =\mathbf{c}^{2}\int_{Q_{2r}\setminus Q_{r}}|\nabla\left(\eta_{r}\mathcal{R} \right)|^{2}dx+o(1). \tag{3.11}\] Similarly, we easily see that \[\int_{Q_{2r}\setminus Q_{r}}\nabla\left(\eta_{r}v_{\varepsilon}\right)\cdot \nabla Mdx=\mathbf{c}\int_{Q_{2r}\setminus Q_{r}}\nabla\left(\eta_{r}\mathcal{ R}\right)\cdot\nabla Mdx+o(1)\qquad\text{ as }\varepsilon\to 0.\] This and (3.11), then give \[\int_{F(Q_{2r})\setminus F(Q_{r})}|\nabla\Psi_{\varepsilon}|^{2}dy =\varepsilon\mathbf{c}^{2}\int_{Q_{2r}\setminus Q_{r}}|\nabla\left( \eta_{r}\mathcal{R}\right)|^{2}dx+\varepsilon\mathbf{c}^{2}\int_{Q_{2r} \setminus Q_{r}}|\nabla M|^{2}dx\] \[+2\varepsilon\mathbf{c}^{2}\int_{Q_{2r}\setminus Q_{r}}\nabla \left(\eta_{r}\mathcal{R}\right)\cdot\nabla Mdx+\mathcal{O}_{r}(\varepsilon)\] \[=\varepsilon\mathbf{c}^{2}\int_{Q_{2r}\setminus Q_{r}}|\nabla( \eta_{r}\mathcal{R}+M)|^{2}dx+\mathcal{O}_{r}(\varepsilon). 
\tag{3.12}\] Since the support of \(\Psi_{\varepsilon}\) is contained in \(Q_{4r}\) while the one of \(\eta_{r}\) is in \(Q_{2r}\), it is easy to deduce from (2.7) that \[\int_{\Omega\setminus F(Q_{2r})}|\nabla\Psi_{\varepsilon}|^{2}dy=\varepsilon \mathbf{c}^{2}\int_{F(Q_{4r})\setminus F(Q_{2r})}|\nabla\widetilde{M}_{2r}|^{ 2}dy=\mathcal{O}_{r}(\varepsilon)\] and from Lemma 2.2, that \[\int_{\Omega\setminus F(Q_{r})}h|\Psi_{\varepsilon}|^{2}dy=\varepsilon \mathbf{c}^{2}\int_{F(Q_{4r})\setminus F(Q_{r})}h|\eta_{r}V_{\varepsilon}+ \widetilde{M}_{2r}|^{2}dy=\mathcal{O}_{r}(\varepsilon).\] Therefore by (3.12), we conclude that \[\int_{\Omega\setminus F(Q_{r})}|\nabla\Psi_{\varepsilon}|^{2}dy +\int_{\Omega\setminus F(Q_{r})}h|\Psi_{\varepsilon}|^{2}dy\] \[=\varepsilon\mathbf{c}^{2}\int_{Q_{2r}\setminus Q_{r}}|\nabla( \eta_{r}\mathcal{R}+M)|^{2}dx+\varepsilon\mathbf{c}^{2}\int_{Q_{2r}\setminus Q _{r}}h(\cdot+y_{0})|\eta_{r}\mathcal{R}+M|^{2}dx+\mathcal{O}_{r}(\varepsilon).\] Recall that \(G(x+y_{0},y_{0})=\eta_{r}(x)\mathcal{R}(x)+M(x)\) for ever \(x\in Q_{2r}\) and that by (2.2), \[-\Delta_{x}G(x+y_{0},y_{0})+h(x+y_{0})G(x+y_{0},y_{0})=0\qquad\text{ for every }x\in Q_{2r}\setminus Q_{r}.\] Therefore, by integration by parts, we find that \[\int_{\Omega\setminus F(Q_{r})}|\nabla\Psi_{\varepsilon}|^{2}dy+\int_{\Omega \setminus F(Q_{r})}h|\Psi_{\varepsilon}|^{2}dy=\mathbf{c}^{2}\int_{\partial( Q_{2r}\setminus Q_{r})}(\eta_{r}\mathcal{R}+M)\frac{\partial(\eta_{r} \mathcal{R}+M)}{\partial\overline{\nu}}\sigma(x)+\mathcal{O}_{r}(\varepsilon),\] where \(\overline{\nu}\) is the exterior normal vectorfield to \(Q_{2r}\setminus Q_{r}\). Thanks to (2.7), we finally get \[\int_{\Omega\setminus F(Q_{r})}|\nabla\Psi_{\varepsilon}|^{2}dy+ \int_{\Omega\setminus F(Q_{r})}h|\Psi_{\varepsilon}|^{2}dy =-\varepsilon\mathbf{c}^{2}\int_{\partial Q_{r}}\mathcal{R}\frac {\partial\mathcal{R}}{\partial\nu}d\sigma(x)-\varepsilon\mathbf{c}^{2}\int_{ \partial Q_{r}}M\frac{\partial\mathcal{R}}{\partial\nu}d\sigma(x)+\mathcal{O} _{r}(\varepsilon), \tag{3.13}\] where \(\nu\) is the exterior normal vectorfield to \(Q_{r}\). Next we make the expansion of \(\int_{F(Q_{r})}|\nabla\Psi_{\varepsilon}|^{2}dy\) for \(r\) and \(\varepsilon\) small. First, we observe that, by Lemma 2.2 and (2.7), we have \[\int_{F(Q_{r})} |\nabla\Psi_{\varepsilon}|^{2}dy=\int_{F(Q_{r})}|\nabla u_{ \varepsilon}|^{2}dy+\varepsilon\mathbf{c}^{2}\int_{F(Q_{r})}|\nabla M|^{2}dy +2\varepsilon^{1/2}\mathbf{c}\int_{F(Q_{r})}\nabla u_{\varepsilon}\cdot \nabla\widetilde{M}_{2r}dy\] \[=\int_{Q_{r/\varepsilon}}|\nabla w|^{2}dx+O\left(\varepsilon^{2} \int_{Q_{r/\varepsilon}}|x|^{2}|\nabla w|^{2}dx+\varepsilon^{2}\int_{Q_{r/ \varepsilon}}|\nabla w|dx\right)+\mathcal{O}_{r}(\varepsilon)=\int_{Q_{r/ \varepsilon}}|\nabla w|^{2}dx+\mathcal{O}_{r}(\varepsilon).\] By integration by parts and using (2.15), we deduce that \[\int_{F(Q_{r})}|\nabla\Psi_{\varepsilon}|^{2}dy =\int_{\mathbb{R}^{3}}|\nabla w|^{2}dx+\int_{\partial Q_{r/ \varepsilon}}w\frac{\partial w}{\partial\nu}d\sigma(x)+\mathcal{O}_{r}(\varepsilon)\] \[=\int_{\mathbb{R}^{3}}|\nabla w|^{2}dx+\varepsilon\int_{\partial Q _{r}}v_{\varepsilon}\frac{\partial v_{\varepsilon}}{\partial\nu}d\sigma(x)+ \mathcal{O}_{r}(\varepsilon). 
\tag{3.14}\] Now (3.10), (2.9) and the dominated convergence theorem yield, for fixed \(r>0\) and \(\varepsilon\to 0\), \[\int_{\partial Q_{r}}v_{\varepsilon}\frac{\partial v_{\varepsilon}}{\partial\nu}d\sigma(x)=\int_{\partial B^{2}_{\mathbb{R}^{2}}(0,r)}\int_{-r}^{r}v_{\varepsilon}(t,z)\nabla v_{\varepsilon}(t,z)\cdot\frac{z}{|z|}d\sigma(z)dt+2\int_{B^{2}_{\mathbb{R}^{2}}(0,r)}v_{\varepsilon}(r,z)\partial_{t}v_{\varepsilon}(r,z)dz\] \[=\mathbf{c}^{2}\int_{\partial B^{2}_{\mathbb{R}^{2}}(0,r)}\int_{-r}^{r}\mathcal{R}(t,z)\nabla\mathcal{R}(t,z)\cdot\frac{z}{|z|}d\sigma(z)dt+2\mathbf{c}^{2}\int_{B^{2}_{\mathbb{R}^{2}}(0,r)}\mathcal{R}(r,z)\partial_{t}\mathcal{R}(r,z)dz+o(1)\] \[=\mathbf{c}^{2}\int_{\partial Q_{r}}\mathcal{R}\frac{\partial\mathcal{R}}{\partial\nu}d\sigma(x)+o(1). \tag{3.15}\] Moreover (2.14) implies that \[\int_{F(Q_{r})}h\Psi_{\varepsilon}^{2}dy=\mathcal{O}_{r}(\varepsilon).\] From this together with (3.14) and (3.15), we obtain \[\int_{F(Q_{r})}|\nabla\Psi_{\varepsilon}|^{2}dy+\int_{F(Q_{r})}h\Psi_{\varepsilon}^{2}dy=\int_{\mathbb{R}^{3}}|\nabla w|^{2}dx+\mathbf{c}^{2}\varepsilon\int_{\partial Q_{r}}\mathcal{R}\frac{\partial\mathcal{R}}{\partial\nu}d\sigma(x)+\mathcal{O}_{r}(\varepsilon).\] Combining this with (3.13), we then have \[\int_{\Omega}|\nabla\Psi_{\varepsilon}|^{2}dy+\int_{\Omega}h\Psi_{\varepsilon}^{2}dy=\int_{\mathbb{R}^{3}}|\nabla w|^{2}dx-\varepsilon\mathbf{c}^{2}\int_{\partial Q_{r}}M\frac{\partial\mathcal{R}}{\partial\nu}d\sigma(x)+\mathcal{O}_{r}(\varepsilon)+o\left(\varepsilon\right). \tag{3.16}\] Recalling that \(\mathcal{R}(x)=\frac{1}{|x|}\), we have \[\int_{\partial Q_{r}}\frac{\partial\mathcal{R}}{\partial\nu}\,d\sigma(x)=-\int_{\partial Q_{r}}\frac{x\cdot\nu(x)}{|x|^{3}}\,d\sigma(x)=\int_{B_{\mathbb{R}^{2}}(0,r)}\frac{-2r}{r^{2}+|z|^{2}}\,dz-2\pi\int_{-r}^{r}\frac{r^{3}}{r^{2}+t^{2}}dt=-\pi^{2}(1+r^{2}).\] Since (recalling (2.8)) \(M(y)=M(0)+O(r)=\mathbf{m}(y_{0})+O(r)\) in \(Q_{2r}\), we get (3.6). This then ends the proof. We conclude with the following expansion. **Lemma 3.2**.: \[\frac{\lambda}{2_{s_{1}}^{*}}\int_{\Omega}\rho_{\Gamma}^{-s_{1}}|\Psi_{\varepsilon}|^{2_{s_{1}}^{*}}dy+\frac{1}{2_{s_{2}}^{*}}\int_{\Omega}\rho_{\Gamma}^{-s_{2}}|\Psi_{\varepsilon}|^{2_{s_{2}}^{*}}dy=\frac{\lambda}{2_{s_{1}}^{*}}\int_{\mathbb{R}^{3}}|z|^{-s_{1}}|w|^{2_{s_{1}}^{*}}dx\] \[+\frac{1}{2_{s_{2}}^{*}}\int_{\mathbb{R}^{3}}|z|^{-s_{2}}|w|^{2_{s_{2}}^{*}}dx+\varepsilon\pi^{2}\,\mathbf{c}^{2}\,\mathbf{m}(y_{0})+\mathcal{O}_{r}(\varepsilon).\] Proof.: Let \(p>2\).
Then there exists a positive constant \(C(p)\) such that \[||a+b|^{p}-|a|^{p}-pab|a|^{p-2}|\leq C(p)\left(|a|^{p-2}b^{2}+|b|^{p}\right)\qquad\text{ for all }a,b\in\mathbb{R}.\] As a consequence, we obtain, for \(s\in(0,2)\), that \[\int_{\Omega}\rho_{\Gamma}^{-s}|\Psi_{\varepsilon}|^{2_{s}^{*}}dy=\int_{F(Q_{r})}\rho_{\Gamma}^{-s}|u_{\varepsilon}+\mathbf{c}\varepsilon^{\frac{1}{2}}\widetilde{M}_{2r}|^{2_{s}^{*}}dy+\int_{F(Q_{4r})\setminus F(Q_{r})}\rho_{\Gamma}^{-s}|\eta_{r}u_{\varepsilon}+\mathbf{c}\varepsilon^{\frac{1}{2}}\widetilde{M}_{2r}|^{2_{s}^{*}}dy\] \[=\int_{F(Q_{r})}\rho_{\Gamma}^{-s}|u_{\varepsilon}|^{2_{s}^{*}}dy+2_{s}^{*}\mathbf{c}\varepsilon^{1/2}\int_{F(Q_{r})}\rho_{\Gamma}^{-s}|u_{\varepsilon}|^{2_{s}^{*}-1}\widetilde{M}_{2r}dy\] \[\quad+O\left(\int_{F(Q_{4r})}\rho_{\Gamma}^{-s}|\eta_{r}u_{\varepsilon}|^{2_{s}^{*}-2}\left(\varepsilon^{1/2}\widetilde{M}_{2r}\right)^{2}dy+\int_{F(Q_{4r})}\rho_{\Gamma}^{-s}|\varepsilon^{1/2}\widetilde{M}_{2r}|^{2_{s}^{*}}dy\right)\] \[\quad+O\left(\int_{F(Q_{4r})\setminus F(Q_{r})}\rho_{\Gamma}^{-s}|u_{\varepsilon}|^{2_{s}^{*}}dy+2_{s}^{*}\mathbf{c}\varepsilon^{1/2}\int_{F(Q_{4r})\setminus F(Q_{r})}\rho_{\Gamma}^{-s}|u_{\varepsilon}|^{2_{s}^{*}-1}\widetilde{M}_{2r}dy\right). \tag{3.17}\] By Holder's inequality and (3.3), we have \[\int_{F(Q_{4r})}\rho_{\Gamma}^{-s}|\eta_{r}u_{\varepsilon}|^{2^{*}_{s}-2}\left(\varepsilon^{1/2}\widetilde{M}_{2r}\right)^{2}dy\leq\varepsilon\|u_{\varepsilon}\|_{L^{2^{*}_{s}}(F(Q_{4r});\rho_{\Gamma}^{-s})}^{2^{*}_{s}-2}\|\widetilde{M}_{2r}\|_{L^{2^{*}_{s}}(F(Q_{4r});\rho_{\Gamma}^{-s})}^{2}\] \[=\varepsilon\|w\|_{L^{2^{*}_{s}}(Q_{4r};|z|^{-s}\sqrt{|g|})}^{2^{*}_{s}-2}\|\widetilde{M}_{2r}\|_{L^{2^{*}_{s}}(F(Q_{4r});\rho_{\Gamma}^{-s})}^{2}\] \[\leq\varepsilon(1+Cr)\|\widetilde{M}_{2r}\|_{L^{2^{*}_{s}}(F(Q_{4r});\rho_{\Gamma}^{-s})}^{2}=\mathcal{O}_{r}(\varepsilon). \tag{3.18}\] Furthermore, since \(2^{*}_{s}>2\), by (2.7), we easily get \[\int_{F(Q_{4r})}\rho_{\Gamma}^{-s}|\varepsilon^{1/2}\widetilde{M}_{2r}|^{2^{*}_{s}}dy=o(\varepsilon). \tag{3.19}\] Moreover by change of variables and (2.15), we also have \[\int_{F(Q_{4r})\setminus F(Q_{r})}\rho_{\Gamma}^{-s}|u_{\varepsilon}|^{2^{*}_{s}}dy+2^{*}_{s}\mathbf{c}\varepsilon^{1/2}\int_{F(Q_{4r})\setminus F(Q_{r})}\rho_{\Gamma}^{-s}|u_{\varepsilon}|^{2^{*}_{s}-1}\widetilde{M}_{2r}dy\\ \leq C\int_{Q_{4r/\varepsilon}\setminus Q_{r/\varepsilon}}|z|^{-s}|w|^{2^{*}_{s}}dx+C\varepsilon\int_{Q_{4r/\varepsilon}\setminus Q_{r/\varepsilon}}|z|^{-s}|w|^{2^{*}_{s}-1}dx=o(\varepsilon).\] By this, (3.17), (3.19) and (3.18), it follows that \[\int_{\Omega}\rho_{\Gamma}^{-s}|\Psi_{\varepsilon}|^{2^{*}_{s}}dy=\int_{F(Q_{r})}\rho_{\Gamma}^{-s}|u_{\varepsilon}|^{2^{*}_{s}}dy+2^{*}_{s}\mathbf{c}\varepsilon^{1/2}\int_{F(Q_{r})}\rho_{\Gamma}^{-s}|u_{\varepsilon}|^{2^{*}_{s}-1}\widetilde{M}_{2r}dy+\mathcal{O}_{r}(\varepsilon).\] We define \(B_{\varepsilon}(x):=M(\varepsilon x)\sqrt{|g_{\varepsilon}|}(x)=M(\varepsilon x)\sqrt{|g|}(\varepsilon x)\).
Then by the change of variable \(y=\frac{F(x)}{\varepsilon}\) in the above identity and recalling (3.3), then by oddness, we have \[\int_{\Omega}\rho_{\Gamma}^{-s}|\Psi_{\varepsilon}|^{2^{*}_{s}}dy =\int_{Q_{r/\varepsilon}}|z|^{-s}w^{2^{*}_{s}}\sqrt{|g_{ \varepsilon}|}dx+2^{*}_{s}\mathbf{c}\int_{Q_{r/\varepsilon}}|z|^{-s}|w|^{2^{* }_{s}-1}B_{\varepsilon}dx+\mathcal{O}_{r}(\varepsilon)\] \[=\int_{Q_{r/\varepsilon}}|z|^{-s}w^{2^{*}_{s}}dx+2^{*}_{s} \mathbf{c}\int_{Q_{r/\varepsilon}}|z|^{-s}|w|^{2^{*}_{s}-1}B_{\varepsilon}dx+ \mathcal{O}_{r}(\varepsilon)\] \[\quad+O\left(\varepsilon^{2}\int_{Q_{r/\varepsilon}}|z|^{-s}|x|^{ 2}w^{2^{*}_{s}}dx\right)\] \[=\int_{\mathbb{R}^{3}}|z|^{-s}|w|^{2^{*}_{s}}dx+2^{*}_{s} \mathbf{c}\int_{Q_{r/\varepsilon}}|z|^{-s}|w|^{2^{*}_{s}-1}B_{\varepsilon}dx\] \[\quad+O\left(\int_{\mathbb{R}^{3}\setminus Q_{r/\varepsilon}}|z| ^{-s}w^{2^{*}_{s}}dx+\varepsilon^{2}\int_{Q_{r/\varepsilon}}|z|^{-s}|x|^{2}w^ {2^{*}_{s}}dx\right)+\mathcal{O}_{r}(\varepsilon).\] By (2.15) we then have \[\int_{\Omega}\rho_{\Gamma}^{-s}|\Psi_{\varepsilon}|^{2^{*}_{s}}dy=\int_{ \mathbb{R}^{3}}|z|^{-s}|w|^{2^{*}_{s}}dx+2^{*}_{s}\varepsilon\mathbf{c}\int_{Q _{r/\varepsilon}}|z|^{-s}|w|^{2^{*}_{s}-1}B_{\varepsilon}(x)dx+\mathcal{O}_{r }(\varepsilon). \tag{3.20}\] Therefore for \(0<s_{2}<s_{1}<2\), we have \[\frac{\lambda}{2^{*}_{s_{1}}}\int_{\Omega}\rho_{\Gamma}^{-s_{1}}|\Psi_{ \varepsilon}|^{2^{*}_{s_{1}}}dy+\frac{1}{2^{*}_{s_{2}}}\int_{\Omega}\rho_{ \Gamma}^{-s_{2}}|\Psi_{\varepsilon}|^{2^{*}_{s_{2}}}dy=\frac{\lambda}{2^{*}_{ s_{1}}}\int_{\mathbb{R}^{3}}|z|^{-s_{1}}|w|^{2^{*}_{s_{1}}}dx+\frac{1}{2^{*}_{s_{2}}} \int_{\mathbb{R}^{3}}|z|^{-s_{2}}|w|^{2^{*}_{s_{2}}}dx\] \[\quad+\varepsilon\mathbf{c}\lambda\int_{Q_{r/\varepsilon}}|z|^{-s_ {1}}|w|^{2^{*}_{s_{1}}-1}B_{\varepsilon}(x)dx+\varepsilon\mathbf{c}\int_{Q_{r/ \varepsilon}}|z|^{-s_{2}}|w|^{2^{*}_{s_{2}}-1}B_{\varepsilon}(x)dx+\mathcal{O} _{r}(\varepsilon).\] We multiply (1.4) by \(B_{\varepsilon}\in\mathcal{C}^{1}(\overline{Q_{r}})\) and we integrate by parts to get \[\lambda\int_{Q_{r/\varepsilon}}|z|^{-s_{1}}|w|^{2^{*}_{s_{1}}-1}B_ {\varepsilon}dx+\int_{Q_{r/\varepsilon}}|z|^{-s_{2}}|w|^{2^{*}_{s_{2}}-1}B_{ \varepsilon}dx =\int_{Q_{r/\varepsilon}}\nabla w\cdot\nabla B_{\varepsilon}dx-\int_{ \partial Q_{r/\varepsilon}}B_{\varepsilon}\frac{\partial w}{\partial\nu}d \sigma(x)\] \[=\int_{Q_{r/\varepsilon}}\nabla w\cdot\nabla B_{\varepsilon}dx- \int_{\partial Q_{r}}B_{1}\frac{\partial v_{\varepsilon}}{\partial\nu}d \sigma(x).\] Since \(|\nabla B_{\varepsilon}|\leq C\varepsilon\), by Lemma 2.1 and (2.7), we then have \[\varepsilon\int_{Q_{r/\varepsilon}}\nabla w\cdot\nabla B_{\varepsilon}dx=O\left( \varepsilon^{2}\int_{Q_{r/\varepsilon}}|\nabla w|dx\right)=\mathcal{O}_{r}( \varepsilon).\] Consequently, on the one hand, \[\lambda\varepsilon\int_{Q_{r/\varepsilon}}|z|^{-s_{1}}|w|^{2^{*}_{s_{1}}-1}B_ {\varepsilon}dx+\varepsilon\int_{Q_{r/\varepsilon}}|z|^{-s_{2}}|w|^{2^{*}_{s _{2}}-1}B_{\varepsilon}dx=-\varepsilon\int_{\partial Q_{r}}B_{1}\frac{ \partial v_{\varepsilon}}{\partial\nu}d\sigma(x)+\mathcal{O}_{r}(\varepsilon).\] On the other hand by Lemma 2.1, (2.7) and the dominated convergence theorem, we get \[\int_{\partial Q_{r}}B_{1}\frac{\partial v_{\varepsilon}}{\partial\nu}d\sigma (x)=\mathbf{c}\int_{\partial Q_{r}}B_{1}\frac{\partial\mathcal{R}}{\partial \nu}d\sigma(x)+o(1)=\mathbf{c}M(0)\int_{\partial Q_{r}}\frac{\partial\mathcal{ R}}{\partial\nu}d\sigma(x)+O(r)+o(1),\] so that \[\lambda\varepsilon 
\mathbf{c}\int_{Q_{r/\varepsilon}}|z|^{-s_{1}}|w|^{2^{*}_{s_{1}}-1}B_{\varepsilon}dx+\varepsilon\mathbf{c}\int_{Q_{r/\varepsilon}}|z|^{-s_{2}}|w|^{2^{*}_{s_{2}}-1}B_{\varepsilon}dx=-\varepsilon\mathbf{c}^{2}M(0)\int_{\partial Q_{r}}\frac{\partial\mathcal{R}}{\partial\nu}d\sigma(x)+\mathcal{O}_{r}(\varepsilon).\] It then follows from (3.20) that \[\frac{\lambda}{2^{*}_{s_{1}}}\int_{\Omega}\rho_{\Gamma}^{-s_{1}}|\Psi_{\varepsilon}|^{2^{*}_{s_{1}}}dy+\frac{1}{2^{*}_{s_{2}}}\int_{\Omega}\rho_{\Gamma}^{-s_{2}}|\Psi_{\varepsilon}|^{2^{*}_{s_{2}}}dy=\frac{\lambda}{2^{*}_{s_{1}}}\int_{\mathbb{R}^{3}}|z|^{-s_{1}}|w|^{2^{*}_{s_{1}}}dx\] \[+\frac{1}{2^{*}_{s_{2}}}\int_{\mathbb{R}^{3}}|z|^{-s_{2}}|w|^{2^{*}_{s_{2}}}dx-\varepsilon\mathbf{c}^{2}M(0)\int_{\partial Q_{r}}\frac{\partial\mathcal{R}}{\partial\nu}d\sigma(x)+\mathcal{O}_{r}(\varepsilon).\] Finally, recalling that \(\mathcal{R}(x)=\frac{1}{|x|}\), we have \[\int_{\partial Q_{r}}\frac{\partial\mathcal{R}}{\partial\nu}\,d\sigma(x)=-\int_{\partial Q_{r}}\frac{x\cdot\nu(x)}{|x|^{3}}\,d\sigma(x)=\int_{B_{\mathbb{R}^{2}}(0,r)}\frac{-2r}{r^{2}+|z|^{2}}\,dz-2\pi\int_{-r}^{r}\frac{r^{3}}{r^{2}+t^{2}}dt=-\pi^{2}(1+r^{2}).\] Since \(M(0)=\mathbf{m}(y_{0})\), see (2.8), the proof of the lemma is thus finished. Now we are in a position to complete the proof of our main result. Proof.: **of Theorem 1.2** Combining Lemma 3.1 and Lemma 3.2 and recalling (1.2) and (1.5), we have \[J\left(t\Psi_{\varepsilon}\right)=\Psi(tw)+\mathcal{M}_{r,\varepsilon}(tw), \tag{3.21}\] for some function \(\mathcal{M}_{r,\varepsilon}:\mathcal{D}^{1,2}(\mathbb{R}^{3})\to\mathbb{R}\) satisfying \[\mathcal{M}_{r,\varepsilon}(w)=-\frac{\varepsilon}{2}\mathbf{c}^{2}\pi^{2}\mathbf{m}(y_{0})+\mathcal{O}_{r}(\varepsilon).\] Since \(2^{*}_{s_{2}}>2^{*}_{s_{1}}\) and \(\Psi(tw)\) has a unique maximum in \(t\), we have \[\max_{t\geq 0}\Psi(tw)=\Psi(w)=\beta^{*}.\] Therefore, the maximum of \(J(t\Psi_{\varepsilon})\) occurs at \(t_{\varepsilon}:=1+o_{\varepsilon}(1)\). Thanks to assumption (1.12), we have \[\mathcal{M}_{r,\varepsilon}(w)<0.\] Therefore \[\max_{t\geq 0}J(t\Psi_{\varepsilon}):=J(t_{\varepsilon}\Psi_{\varepsilon})\leq\Psi(t_{\varepsilon}w)+\mathcal{M}_{r,\varepsilon}(t_{\varepsilon}w)<\Psi(t_{\varepsilon}w)\leq\Psi(w)=\beta^{*}.\] We thus get the desired result.
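As a side remark (added here for completeness, not part of the original argument), the elementary inequality invoked at the start of the proof of Lemma 3.2 follows from a standard homogeneity argument: for \(a\neq 0\), dividing through by \(|a|^{p}\) and setting \(t=b/a\) reduces it to
\[
\big||1+t|^{p}-1-pt\big|\leq C(p)\left(t^{2}+|t|^{p}\right)\qquad\text{for all }t\in\mathbb{R},
\]
which holds because the left-hand side is \(O(t^{2})\) as \(t\to 0\) (second-order Taylor expansion of \(t\mapsto|1+t|^{p}\), which is \(C^{2}\) near \(0\) since \(p>2\)) and \(O(|t|^{p})\) as \(|t|\to\infty\); the case \(a=0\) is immediate, the left-hand side then being \(|b|^{p}\).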
2302.00115
On Memory Codelets: Prefetching, Recoding, Moving and Streaming Data
For decades, memory capabilities have scaled up much slower than compute capabilities, leaving memory utilization as a major bottleneck. Prefetching and cache hierarchies mitigate this in applications with easily predictable memory accesses or those with high locality. In other applications like sparse linear algebra or graph-based applications, these strategies do not achieve effective utilization of memory. This is the case for the von Neumann model of computation, but other program execution models (PXM) provide different opportunities. Furthermore, the problem is complicated by increasing levels of heterogeneity and devices' varying memory subsystems. The Codelet PXM presented in this paper provides a program structure that allows for well-defined prefetching, streaming, and recoding operations to improve memory utilization and efficiently coordinate data movement with respect to computation. We propose the Memory Codelet, an extension to the original Codelet Model, to provide users these functionalities in a well-defined manner within the Codelet PXM.
Dawson Fox, Jose Monsalve Diaz, Xiaoming Li
2023-01-31T21:41:04Z
http://arxiv.org/abs/2302.00115v1
# On Memory Codelets: Prefetching, Recoding, Moving and Streaming Data ###### Abstract. For decades, memory capabilities have scaled up much slower than compute capabilities, leaving memory utilization as a major bottleneck. Prefetching and cache hierarchies mitigate this in applications with easily predictable memory accesses or those with high locality. In other applications like sparse linear algebra or graph-based applications, these strategies do not achieve effective utilization of memory. This is the case for the von Neumann model of computation, but other program execution models (PXM) provide different opportunities. Furthermore, the problem is complicated by increasing levels of heterogeneity and devices' varying memory subsystems. The Codelet PXM presented in this paper provides a program structure that allows for well-defined prefetching, streaming, and recoding operations to improve memory utilization and efficiently coordinate data movement with respect to computation. We propose the Memory Codelet, an extension to the original Codelet Model, to provide users these functionalities in a well-defined manner within the Codelet PXM. Codelets, Program Execution Models, Sequential Codelet Model, Heterogeneity, Memory Recode, Near Memory Compute + Footnote †: journal: PPoPP - ExHET 2023; February 25 - March 01, 2023, Montreal, Canada

systems are also implementing unified shared memory to maintain automatic coherence between accelerator and host memories. Orchestration of programs in the offloading model is a manual process. The user must coordinate scheduling of data and kernels into the different accelerators. From the perspective of the host, a kernel is just a device function initiated through the driver API, and the user or system must guarantee that data needed for its execution is appropriately allocated and initialized prior to the execution of the kernel. A kernel itself does not provide any information that explicitly describes the memory it requires to perform its computation. Therefore, the user relies on manual memory management and system responses to page faults or cache faults to coordinate memory needed for the execution of a kernel. As a consequence, the scheduler does not have any knowledge of memory operations, or how a kernel depends on specific data; instead, memory and compute must be organized in a meticulous way that respects such dependencies. As data movement grows more important for achieving high performance, the burden on the programmer grows greater. Existing alternatives to the offloading model allow kernels/tasks to have unrestricted behavior and lack clear definitions of the data consumed and produced, which does not aid in predicting memory accesses or utilizing the memory systems better. The common alternative is heterogeneous tasking models (Bahdan et al., 2015; Bahdan et al., 2015; Chen et al., 2016; Li et al., 2017; Li et al., 2018; Li et al., 2018). Kernels are represented as tasks that are connected by dependencies forming a graph. However, tasks are not side-effect free, as they are part of a unified shared memory system, allowing any pointer within the task to reference any part of the memory. Furthermore, models often use dependencies only to represent control flow dependencies, not data dependencies. Thus, although memory locations (e.g.
variables or pointers) are used to name dependencies, these are not necessarily used to make decisions about memory orchestration, nor do they represent the full set of memory locations that can be accessed by the task. Take for example an OpenMP target region that uses the depend clause to define dependencies, and the map clause to define data movement. The depend clause only determines the producer-consumer relationship of tasks, while the map clause only determines memory movement operations between host and device. However, a task may contain pointers not defined in the depend clause, for example when dealing with global variables. This is worse when a unified shared memory system spans across host and device, since pointers in the tasks may interact with any part of the subsystem (Li et al., 2018). Such freedom makes it hard to predict task latency and side effects. As a result, operations such as streaming, data recoding, or the use of heterogeneous systems with memory accelerators are not well differentiated in the program description and execution, and must be managed by the user. Put differently, the scheduler is not in charge of orchestrating memory and its relationship to compute. Restricting tasks to always define dependencies helps. Such an approach is used in the OpenMP Cluster model (Li et al., 2017; Li et al., 2018), allowing the runtime to perform smart memory management. However, this does not yet resolve issues with complex memory hierarchies found in heterogeneous systems. Previous work has demonstrated that the Codelet Model can be an effective programming model for heterogeneous systems (Li et al., 2017; Li et al., 2018; Li et al., 2018). However, Codelets being non-preemptive can be both a strength and a weakness. Without the ability to interrupt Codelets, data locality becomes essential when designing a high performance and efficient program. If required data is not local to a Compute Unit (CU) when a Codelet is executed, the processor effectively stalls while the data is fetched. With proper management, on the other hand, Codelets' atomic nature can ensure that computation is performed while the necessary data is local and that the data need only be local for as short a time as possible. This missing capability is the key to realizing the Codelet Model's full potential on extreme scale heterogeneous systems. Prior work has suggested percolation as an important way to improve performance through the memory wall (Li et al., 2018). Our proposal of the Memory Codelet provides exactly such an explicit mechanism to orchestrate memory for compute Codelets. Memory Codelets will be executed on a near-memory architecture dedicated to Memory Codelets only, and would be able to prefetch data, stream data, and perform recode operations. More specific operations like pointer swizzling might also be handled by Memory Codelets. On systems with less conventional memory hierarchies or with multiple devices, Memory Codelets would also be able to explicitly move data throughout the memory hierarchy (for example, to local scratchpad memory) and coordinate reads, writes, streams, and recode operations to the various devices.
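To make the depend/map gap described above concrete, consider the following hypothetical OpenMP fragment (the names `step`, `g_table`, and `N` are illustrative, not from the paper). The pragma advertises a dependency and a mapping only for the array `a`, while the task body silently touches a global pointer the runtime cannot reason about:

```c
#include <stdlib.h>

#define N 1024
double *g_table;   /* global pointer: never named in depend or map */

void step(double *a) {
    /* The runtime sees only a[0:N]; accesses to g_table are invisible
       to the scheduler, so it cannot orchestrate their movement.
       (Synchronization, e.g. a later taskwait, is omitted here.)    */
    #pragma omp target map(tofrom: a[0:N]) depend(inout: a[0:N]) nowait
    for (int i = 0; i < N; ++i)
        a[i] += g_table[i];    /* hidden data access inside the task */
}
```

Under a unified shared memory system the hidden access may simply work, but the scheduler still has no knowledge of it; without unified memory it fails outright. Either way, the task's true memory footprint is not expressed in the program.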
### Contributions

The contributions of this paper are described as follows:

* Conceptualization of Memory Codelets and their integration into the Codelet Model
* Integration of Memory Codelets into the Codelet Model, the Sequential Codelet Model, and their Abstract Machines
* Simple examples that demonstrate the use of Memory Codelets through Sequential Codelet Model semantics

## 2. Background

Program Execution Models (PXMs) as a concept are effectively a holistic system view that offers clear and well-defined behavior at all levels of the system from the hardware to the software. As this is often difficult to achieve without significant resources and end-to-end design, PXMs are often relegated to the domain of software runtimes in practice; however, they still offer the user/developer an organization of execution that can be relied upon and used to build effective programs for systems. This is especially important as the industry trends towards extreme heterogeneity, and various programming models and APIs might be used to craft a single high performing program. A well-defined program execution model is a fundamental step towards hardware/software co-design (Bahdan et al., 2015). A more precise definition of PXMs can be found in (Chen et al., 2016; Li et al., 2017; Li et al., 2018).

### Codelet Model

The Codelet Model is a dataflow-inspired PXM to organize computation with an accompanying abstract machine (Li et al., 2017; Li et al., 2018). It is both fine-grained and event driven, breaking programs into Codelets, non-preemptive portions of sequential computation with defined inputs and outputs. As such, Codelets are the quantum unit of scheduling of a Codelet-based program. Programs in the Codelet Model are described by Directed Acyclic Graphs, with nodes in the graph representing Codelets and directed arcs representing data and control dependencies between them. This allows the program to clearly define the ordering of Codelet execution only where necessary while permitting flexibility in Codelet scheduling otherwise. Codelets are also event driven, allowing dependencies to be seen as split phase transactions, or as activations based on external events. To benefit more from locality, Codelets are grouped into Threaded Procedures (TPs). The Codelet Abstract Machine defines the components of a system that executes Codelet Programs. It designates the roles of Compute Unit (CU) and Scheduling Unit (SU). The SU is responsible for creation of TPs and scheduling Codelets during runtime, while the CU is responsible for "firing" (executing) Codelets once their dependencies are fulfilled. For a more fleshed out description of the basic Codelet PXM, see prior publications (Han et al., 2017; Wang et al., 2018). Since CUs are general in the abstract machine, almost any computational architecture can be mapped to them, which enables the Codelet Model PXM to gracefully handle extreme heterogeneity. As long as Codelets contain computation that can feasibly be performed on at least one CU of the system or has a version that can be executed on that CU, heterogeneous scheduling in the Codelet Model implementation can enable dynamic heterogeneity as in (Han et al., 2017). The event-driven dependency-based scheduling can aid users to reason about synchronization between the Codelets of the program, whereas typical heterogeneous execution on conventional systems leaves synchronization and explicit data movement to the user. Such a burden can slow the development process and lead to hours of debugging.
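To ground these roles, here is a minimal, single-threaded sketch (ours, not from the paper) of the SU/CU interplay: each Codelet carries a count of unmet dependencies, the SU picks any ready Codelet, and firing a Codelet signals its successors. A real Codelet runtime such as DARTS is far more elaborate; this only illustrates the event-driven firing rule.

```c
#include <stdio.h>

#define MAX_CODELETS 8
typedef void (*work_fn)(void);

typedef struct {
    work_fn fire;               /* computation executed on a CU     */
    int deps_left;              /* unmet data/control dependencies  */
    int succ[MAX_CODELETS];     /* indices of dependent Codelets    */
    int nsucc;
} Codelet;

/* SU role: pick any Codelet whose dependencies are all satisfied. */
static int pick_ready(Codelet *g, int n, int *done) {
    for (int i = 0; i < n; ++i)
        if (!done[i] && g[i].deps_left == 0) return i;
    return -1;
}

static void run_graph(Codelet *g, int n) {
    int done[MAX_CODELETS] = {0};
    int i;
    while ((i = pick_ready(g, n, done)) >= 0) {
        g[i].fire();            /* CU role: non-preemptive execution */
        done[i] = 1;
        for (int s = 0; s < g[i].nsucc; ++s)
            g[g[i].succ[s]].deps_left--;    /* signal successors */
    }
}

static void hello(void) { puts("codelet fired"); }

int main(void) {
    /* Two-node graph: codelet 0 -> codelet 1 */
    Codelet g[2] = {
        { hello, 0, {1}, 1 },
        { hello, 1, {0}, 0 },
    };
    run_graph(g, 2);
    return 0;
}
```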
Furthermore, with systems increasingly having various specialized architectures, computation of individual tasks (or in this case, Codelets) may be drastically sped up. In some applications this may lead to increased strain on the memory architecture, which indicates that better memory management and data movement throughout the memory hierarchy is paramount to continue improving performance.

### Sequential Codelet Model

The Sequential Codelet Model (SCM) (Han et al., 2017; Wang et al., 2018; Wang et al., 2018) is an extension of the original Codelet Model (Han et al., 2017; Wang et al., 2018). SCM is heavily inspired by instruction level parallelism (ILP) techniques in sequential computing architectures. In ILP, dataflow is used to discover dependencies in the instruction stream and allow parallel execution of independent instructions. In particular, SCM heavily draws from out-of-order execution, which uses register names to respect true dependencies while removing anti- and output dependencies. In SCM, Codelet graphs are defined as a sequential stream of instructions where dependencies are registers. However, compared to traditional registers in sequential architectures, the dependencies between Codelets are larger in size. This is possible because the pipeline in charge of executing the Codelet program (i.e. fetch, decode, execute, memory and write back) sits on top of the compute units. Therefore, registers are assigned to a location equivalent to the last level of cache (LLC). The execution stage of the pipeline is comprised of these CUs, which can be implemented as any architecture such as Streaming Multiprocessors (in GPGPUs), a single or multi core CPU system, an FPGA, or any other exotic/specialized architecture. Control flow instructions are also supported in SCM. These instructions allow dynamically defining more complex Codelet Graphs by using conditional and unconditional jumps, loops, and a subset of basic arithmetic operations. Additionally, memory operations are necessary to move data back and forth between the upper level memory (e.g. DRAM) and the register file. In this work we formalize this concept as Memory Codelets, which are equivalent to memory operations in sequential ISAs, yet more powerful because Memory Codelets can be user defined. A complete description of the Sequential Codelet Model and its realization in the SuperCodelet architecture can be found in (Han et al., 2017; Wang et al., 2018; Wang et al., 2018).

## 3. Memory Codelets and the Codelet Abstract Machine

Traditional instruction set architectures (ISA) group instructions according to their functionality. Such a distinction also has an effect on the functional unit used in the pipeline of the architecture that implements the ISA. Among the different groups, memory instructions (e.g. load and store operations) focus on the interaction and movement of data in and out of the compute pipeline. In addition to ISA instructions, current system architectures feature caching and prefetching mechanisms with the purpose of managing memory in the system, such that performance increases while respecting a set of rules enforced by memory consistency and coherency models. Prefetching, for example, is an event driven mechanism that is triggered based on multiple access patterns across multiple memory requests.
In addition to the ISA, conventional systems typically have multilevel cache hierarchies that are completely invisible from the software perspective and provide neither reconfigurability nor programmability in the memory hierarchy. The rigidity of the coordination of memory in the hierarchy does not allow enough flexibility to execute programs with uncommon memory access patterns at high performance. In comparison, the organization of programs in the Codelet Model gives a definite and specific plan of what data is consumed/produced by each Codelet, which provides the information necessary for prefetching (Kolte et al., 2017), recoding (Han et al., 2017; Wang et al., 2018), and streaming strategies that are well integrated into the abstract machine and provide customization of the memory management. By virtue of the Codelet Model PXM, these strategies would benefit most from being implemented at multiple levels of the system stack and employing hardware/software co-design. More benefits of the Codelet Model PXM in future extremely heterogeneous architectures are summarized in (Han et al., 2017). Memory Codelets in the Codelet Abstract Machine have a similar role for memory management as the prefetching mechanism does in conventional execution. Memory Codelets allow for data to be moved, manipulated, reorganized or streamed through different components in the system, aiming to serve compute Codelets such that their execution time is driven by arithmetic operations rather than memory accesses. Memory Codelets make memory management explicit, and they are particularly useful when memory access patterns cannot be recognized in the memory subsystem. This is the case for applications that cannot easily take advantage of caches and prefetching. Like traditional Codelets, Memory Codelets are event- and data-driven, and form independent nodes in the Codelet graph. However, instead of mapping to compute units that are heavily specialized for arithmetic operations, Memory Codelets map to specialized units that emphasize throughput and low latency (Leskovsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017).

### Memory Codelet Abstract Machine

Let us begin with a definition of an abstract machine that extends the Sequential Codelet Model and the original Codelet Model. Figure 1 shows the different components of the Memory Codelet Abstract Machine. We focus on a single level of the machine hierarchy, though this can be extended to other levels of the machine abstraction. As in traditional Codelet Models, the architecture is made out of Compute Units (CUs) and Scheduling Units (SUs). Additionally, a Memory Codelet Unit (MCU) is included, which is in charge of the execution of Memory Codelets. There are two aspects that make the Memory Codelet Unit different from any other compute unit. First, its compute capabilities are tailored for fast data transformation (e.g. (Beng et al., 2015; Krizhevsky et al., 2017; Krizhevsky et al., 2017)) and movement. Second, it can directly interact and communicate with the different memory storage components of the system, including local memory, external memory, and specialized memory structures (e.g. FIFO queues). Because the Codelet Model clearly defines Codelets' inputs and outputs, Memory Codelets can be leveraged to benefit from the static Codelet graph of the program being executed. The Codelet Graph defines a partial execution order of the Codelets based on their data dependencies.
The actual execution order of Codelets depends heavily on the scheduling mechanism employed in the Codelet Model implementation; for example, DARTS (DARTS, 2018), a software runtime implementation of the Codelet Model, employs multiple scheduling mechanisms such as round robin and work stealing that the developer can choose for their program. These scheduling mechanisms always respect data dependencies to ensure correct program execution.

### Prefetching, Streaming, and Recoding with Memory Codelets

When a Memory Codelet is executed, data does not always need to pass through the MCU silicon fabric. The data can be moved directly between different physical locations in the system (e.g. CU to CU or DRAM to a CU directly). Thus, the MCU does not have to be a bottleneck for memory transactions, but a coordinator for memory across the Codelet Abstract Machine. Since Memory Compute Units can be seen as traditional near memory compute architectures, data can be fetched to the MCU, transformed, and delivered to other physical locations. In keeping with the goals of a PXM, the MCU is designed to understand Codelet semantics for scheduling and dependencies management. This mechanism can be used to ensure data locality prior to the scheduling and execution of compute Codelets. This can be an effective prefetching mechanism, especially given that Memory Codelets are written by the developer, who has knowledge of the access patterns of the program. Thus, Memory Codelets can perform the necessary preprocessing to determine the data needed for the compute Codelet, even if these operations are complex. For example, data does not need to be contiguous or respect a simple stride pattern. A Memory Codelet can perform pointer chasing across the graph to obtain the necessary properties from different nodes. With the Memory Codelet Unit being implemented as a near-memory processor, latency for memory accesses will be reduced compared to if a traditional compute core (CU) were to perform it. An example of this behavior can be seen below, where data is able to be reliably prefetched in a timely manner due to the dependencies clearly dictated by the program graph. The necessary input data for Codelet Comp1 can be prefetched from memory through Memory Codelet LoadData1_2048L while Codelet Comp0 is being executed. Hence, the compute of Comp0 overlaps with the prefetching of data into R2048L_3, followed by no DRAM memory access during the execution of Comp1. This example is general and can be extended to various situations. Furthermore, we envision that with the help of DMA-like hardware, the Memory Codelet Unit should be able to reliably feed the necessary data to the Compute Units in the system.

```
// Load data for Comp0; prefetch data for Comp1
MEMCOD LoadData0_2048L R2048L_2, R64B_5, R64B_22;
MEMCOD LoadData1_2048L R2048L_3, R64B_7, R64B_23;
// Comp0 depends on Data0 (R_2)
// Comp1 depends on Comp0 (R_1) and Data1 (R_3)
COD Comp0_2048L R2048L_1, R2048L_2;
COD Comp1_2048L R2048L_3, R2048L_1, R2048L_3;
// Store computation result
MEMCOD StoreData_2048L R2048L_3, R64B_7, R64B_23;
```
Listing 1: Sparse GEMM outer product partial prefetching example

In programs where the access pattern within the register is known by producer and consumer, streaming can be fully overlapped by computation in a timely fashion. To further support this organization of execution, hardware-based FIFO queues could be added to the system as mentioned in (Krizhevsky et al., 2017; Krizhevsky et al., 2017).
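As a software analogue of such a FIFO-coupled pipeline (our illustration, not an implementation from the paper), the following C sketch runs a producer standing in for a Memory Codelet and a consumer standing in for a compute Codelet, coupled by a bounded ring buffer so that streaming and computation overlap:

```c
#include <pthread.h>
#include <stdio.h>

#define DEPTH 4                  /* FIFO depth between MCU and CU */
#define CHUNKS 16

static double fifo[DEPTH];
static int head, tail, count;
static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER,
                      not_empty = PTHREAD_COND_INITIALIZER;

/* "Memory Codelet": walks memory and streams chunks into the FIFO. */
static void *mem_codelet(void *arg) {
    for (int i = 0; i < CHUNKS; ++i) {
        pthread_mutex_lock(&mu);
        while (count == DEPTH) pthread_cond_wait(&not_full, &mu);
        fifo[tail] = (double)i;  /* stand-in for a fetched block */
        tail = (tail + 1) % DEPTH; count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&mu);
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    double sum = 0.0;
    pthread_create(&t, NULL, mem_codelet, NULL);
    /* "Compute Codelet": consumes chunks, overlapping with streaming. */
    for (int i = 0; i < CHUNKS; ++i) {
        pthread_mutex_lock(&mu);
        while (count == 0) pthread_cond_wait(&not_empty, &mu);
        double x = fifo[head];
        head = (head + 1) % DEPTH; count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&mu);
        sum += x;                /* stand-in for main computation */
    }
    pthread_join(t, NULL);
    printf("sum = %g\n", sum);
    return 0;
}
```

A hardware FIFO plays the same role without spending CU-local memory or synchronization cycles, which is precisely the motivation discussed next.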
With hardware FIFOs available, the streaming could occur as early as allowed by the algorithm and the Codelet Model semantics without using on-CU resources (like local memory) that might be needed by other Codelets' execution. The FIFO queue can be used as a buffer between the memory loads performed by the Memory Codelet Unit and the consumer Codelet performing main computation. Hence, traditional streaming can be implemented as a pipeline from a Memory Codelet to a compute Codelet via a FIFO queue.

Figure 1. Memory Codelet Abstract Machine with possible heterogeneous CU architectures mentioned.

In the example below, a smart streaming based outer product sparse GEMM application is shown. The Memory Codelets walk the compressed CSR and CSC formats and stream elements to the compute Codelet to perform the outer-product multiplications, producing partial matrices. As the partial result matrices are created, we can imagine that the register that connects spOuterMatMult and PartialsSum, R2048L_4, acts as a FIFO, such that they are streamed to the PartialsSum Codelet to form the complete result matrix.

```
// Stream chunks of matrix to spOuterMatMult
MEMCOD StreamCSRBlock_2048L R2048L_2, R64B_6, R64B_22;
MEMCOD StreamCSCBlock_2048L R2048L_3, R64B_7, R64B_23;
// Perform outer product mult
// Stream partial result mats. out
COD spOuterMatMult_2048L R2048L_4, R2048L_2, R2048L_3;
// Stream in partial matrices and sum
COD PartialsSum_2048L R64B_8, R2048L_4;
```
Listing 2: Outer product sparse GEMM streaming

Beyond this, Memory Codelets being used for prefetching and streaming can be extended into performing recode operations, similar to [13, 14, 37]. A recode operation could easily be crafted by pipelining both prefetching/streaming and preprocessing in the Memory Codelet Unit itself. This can be viewed in terms of batches of data: the CU will be performing main computation on the earliest batch of data while the Memory Codelet Unit is performing preprocessing on the next batch and the third batch is in flight. Though these applications of Memory Codelets are somewhat dependent on the implementation and the specific memory hierarchy structure of the architecture in use, their concepts are applicable in many cases and the extended Codelet Model provides a cohesive model of program execution. An example can be seen below where the same outer product computation is performed, but both matrices are in CSC form, so a Memory Codelet performs recoding to change the second matrix to CSR for ease of computation. This example maintains the same streaming format as the earlier example.

```
// Fetch block of B; recode block of C into CSR format;
// stream both to CU
MEMCOD GetCSCBlock_2048L R2048L_2, R64B_6, R64B_22;
MEMCOD ConvertCSCBlock_2048L R2048L_3, R64B_7, R64B_23;
// Do outer product mult; stream partial mat. out
COD spOuterMatMult_2048L R2048L_4, R2048L_2, R2048L_3;
// Sum streamed-in matrices, store result
COD PartialsSum_2048L R64B_8, R2048L_4;
```
Listing 3: Outer product sparse GEMM streaming with recode operation

### Modification and Use of Conventional Memory Systems

Modern memory subsystems are complex with a very large design space, and can be difficult to effectively utilize in non conventional applications with irregular memory behavior. Though Memory Codelets can certainly aid effective memory use strategies in Codelet programs, their effects are largely dependent on the memory hierarchy of the system and its protocols.
Two non-exclusive paths can be targeted to improve performance of programs based on utilization of a system's memory hierarchy: modification of memory systems to include less-conventional components/strategies, and tuning of programs to better their use of conventional components.

#### 3.3.1. Tuned Use of Conventional Memory Systems

If we assume that the program is executing on a conventional processing system with a 3-level cache hierarchy between the processor and DRAM, timeliness of prefetching becomes paramount due to the danger of data being evicted from the cache early. We speculate that this issue could be mitigated thanks to Memory Codelets and the static information present in the Codelet Graph; L1 and L2 use would be predictable based on the properties of the Codelets. Furthermore, increasing performance of the system based on prefetching can benefit from more targeted approaches, as in [29]. Publications such as [27, 28] can illuminate the importance of eliminating pure miss cycles in the cache hierarchy. With this information, improved scheduling mechanisms can be applied and Memory Codelets can be organized to improve bandwidth utilization. This is one of the major benefits of a software-programmable unit that can perform prefetching, combined with scheduling that is memory-aware. Though the memory hierarchy in use is detached from the Codelet Model semantics, in a conventional system the LLC can be thought of as Threaded Procedure memory. This indicates that ideally, before a Codelet's firing, the data it requires should be resident in that CU's L1 or L2 cache (depending on size). The wisdom in the citations above could then be applied conceptually as balancing the flow of data between TP memory and CU memory, with prediction made easier by the Codelet Program graph.

#### 3.3.2. Modified Memory Systems

Even when conventional memory systems are utilized effectively, each memory hierarchy has its own drawbacks. In this case, consider the earlier distinction between the LLC as TP memory and L1/L2 caches as CU local memory. Codelets have well defined input and output, and we expect the input data to be local to the CU at the time of firing. This means that in practice, a Codelet's "size" would be limited by the size of local memory storage. However, this can be avoided through the use of streaming Codelets, especially with the aid of Memory Codelets and FIFO queues. This would be implemented best with FIFO queues in hardware, but this may require modification of the memory hierarchy and how data is moved through it. Codelet Model semantics and the configurability provided by Memory Codelets would allow hardware-based FIFO queues to coexist with typical memory hierarchies. In addition to hardware FIFO queues, a hierarchy of scratchpad memory units could replace the typical cache hierarchy, or more practically, the upper levels of it. This would allow the software to have more control over how data is prefetched in anticipation of firing Codelets and at what granularity data is moved between DRAM and the CUs. A scratchpad memory system would particularly aid programs that typically achieve low performance on conventional systems due to poor use of the data loaded in each cache line. In addition, it would avoid slowdowns in programs that stall often due to cache thrashing and cache line invalidations.
It is possible that, both to maintain coherence and to improve performance throughout the memory hierarchy, Memory Codelets could be inserted into the Codelet Graph by the compiler based on analysis of the graph and the Codelets' dependencies, somewhat analogous to the strategy of (Codelets, 1992) in a different PXM. Fig. 2 shows an illustration of strategies to use Memory Codelets to improve a program that is hindered by typical cache protocols. In this figure, T5 represents the runtime of a program on a traditional architecture. T4 benefits from prefetching data. T3 is lowered by prefetching data and recoding it, such that the computation itself is faster. T2 uses streaming from a Memory Codelet to the computation Codelet via FIFO. Finally, T1 combines all the approaches: prefetching, recoding, and streaming.

#### 3.3.3. Memory Codelets and Coherency

Because Memory Codelets can allow for fine-grained control over data movement throughout the memory hierarchy of the system, they can help the Codelet Model to fulfill the role of providing coherency to the user at a software level. Most conventional systems with multiple compute units on a chip (such as multiple cores in a multiprocessor) have mechanisms to provide memory coherency throughout the memory hierarchy and between the compute units. Coherency at the hardware level allows the software to have a flat, one dimensional view of the memory without having to manage multiple copies of data. While this is certainly useful for programmers developing multi-threaded applications, the hardware mechanisms that provide the coherency can use up precious chip real estate, and the cost can scale poorly as the number of compute units and physical memory units on a chip increases. Beyond this, enforcing the coherency model through cache line invalidation can reduce effective use of memory resources. The need for hardware coherency can be thought of as a crutch for programmers using the threading model for parallel programming; in other words, the threading model of computation is so ill-defined and prone to non-determinism that programmers would have great difficulty developing on a multithreaded system without hardware coherency (Kolomek and Kolomek, 2017). However, the situation depends entirely on the semantics of the Program Execution Model employed in the system. In the threading model of parallel programming, various threads can concurrently execute and access the entire memory space of the program. This places the entire burden of synchronization and avoiding data races on the programmer's shoulders, which is notoriously difficult (Kolomek and Kolomek, 2017). Furthermore, on even the most modestly heterogeneous systems, coherency generally becomes the responsibility of the programmer through explicit data movements between host and device. In the Codelet Model PXM, computation is broken down into pieces of computation that are partially ordered by data dependencies between them as expressed in the Codelet Graph. Through this mechanism, data races are avoided and the necessary synchronization is achieved. Moreover, because the program is broken down into Codelets and their specific input and output data, two or more Codelets that access the same data (with at least one of the accesses being a write/store) will only execute concurrently when no data dependency has been declared between them, which in a correctly written program indicates the program is intended to have non-deterministic behavior in that program section.
In other words, Codelets are partially ordered based on their data dependencies. Readers may question the difference in difficulty between writing synchronization into a multithreaded program and correctly writing the dependencies in a Codelet Model program; the main difference lies in the finite behavior of Codelets and the data they access. Synchronization and orchestration of threads that can access any data in the address space is unwieldy and unmanageable, whereas finer-grained Codelets with well-defined data access make the process more straightforward. As systems trend towards extreme heterogeneity, hardware coherency may not be feasible. The CXL 3.0 specification includes coherency mechanisms (Codelets, 2017) but no implementations have been released yet. While this is generally considered manageable in a CPU-GPU only system, with various accelerators or specialized architectures, each possibly having its own memory hierarchy, it rapidly becomes less manageable. The burden is eased with a well-defined PXM and Memory Codelets to provide clear functionality. This issue is discussed further with respect to chiplet-based systems in (Kolomek and Kolomek, 2017).

## 4. Conclusion

In this paper we introduce the concept of the Memory Codelet into the Codelet Model. We define an extended abstract machine with a Memory Codelet Unit that supports coordination of prefetching, streaming and memory scheduling operations, as well as data recoding operations. We demonstrate, through the use of the Sequential Codelet Model, how Memory Codelets can be used in the context of an application. Future work includes implementation and testing of this work on heterogeneous architectures.

Figure 2. A representation of how Memory Codelets could be used to accelerate programs with poor cache behavior. Blue and yellow boxes include loading memory in a way that bypasses cache (e.g. loading from scratchpad memory). Yellow boxes include recoding operations done by the MCU such that main computation is more regular.
2309.12663
Quantization of Length in Spaces with Position-Dependent Noncommutativity
We present a novel approach to quantizing the length in noncommutative spaces with position-dependent noncommutativity. The method involves constructing ladder operators that change the length not only along a plane but also along the third direction due to a noncommutative parameter that is a combination of canonical/Weyl-Moyal type and Lie algebraic type. The primary quantization of length in canonical-type noncommutative space takes place only on a plane, while in the present case, it happens in all three directions. We establish an operator algebra that allows for the raising or lowering of eigenvalues of the operator corresponding to the square of the length. We also attempt to determine how the obtained ladder operators act on different states and work out the eigenvalues of the square of the length operator in terms of eigenvalues corresponding to the ladder operators. We conclude by discussing the results obtained.
Jishnu Aryampilly, Muthukumar Balasundaram, Aamir Rashid
2023-09-22T07:09:20Z
http://arxiv.org/abs/2309.12663v1
# Quantization of Length in Spaces with Position-Dependent Noncommutativity ###### Abstract We present a novel approach to quantizing the length in noncommutative spaces with position-dependent noncommutativity. The method involves constructing ladder operators that change the length not only along a plane but also along the third direction due to a noncommutative parameter that is a combination of canonical/Weyl-Moyal type and Lie algebraic type. The primary quantization of length in canonical-type noncommutative space takes place only on a plane, while in the present case, it happens in all three directions. We establish an operator algebra that allows for the raising or lowering of eigenvalues of the operator corresponding to the square of the length. We also attempt to determine how the obtained ladder operators act on different states and work out the eigenvalues of the square of the length operator in terms of eigenvalues corresponding to the ladder operators. We conclude by discussing the results obtained. ## 1 Introduction In the expansive realm of fundamental physics, the notion of spacetime stands as a bedrock upon which our understanding of the universe is constructed. Conventionally, the principles of classical physics have offered a framework to describe spacetime where space and time coordinates exhibit commutativity, enabling precise measurements of position and duration. Nevertheless, emerging theories and frameworks have heralded a departure from this classical paradigm, revealing the intriguing prospects of a noncommutative spacetime. The idea of noncommutativity in the coordinates of spacetime can be attributed to Heisenberg [1], and it was subsequently developed further by Snyder [2] as a means to address the issue of divergences in quantum field theory. Noncommutative spacetime theory suggests that space and time coordinates do not commute but instead exhibit a fundamental uncertainty or noncommutativity relation. This revelation necessitates acknowledging that precise measurements of both position and time are inherently uncertain at infinitesimal scales. This departure from classical notions has sparked considerable interest and led to the formulation of various theoretical models attempting to capture the essence of noncommutative spacetime [3, 4, 5]. The growing interest in noncommutative spaces is connected to the prediction of noncommutative structures in string theory and loop quantum gravity [6, 7, 8, 9]. The literature on noncommutative theories is replete with studies in the contexts of quantum field theories [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], quantum mechanics [21, 22, 23, 24, 25, 26, 27, 28, 29, 30] and gravity theories [31, 32, 33, 34, 35, 36]. In the case of the noncommutative geometry of Alain Connes [37, 38], the spectral manifold is shown to have a geometric analog of the Heisenberg commutation relation involving the Dirac operator and Feynman slash of real scalar fields, leading to the quantization of volume [39]. The idea of length as an operator has already been discussed in canonical quantum gravity [40]. Loop quantum gravity has developed the rigorous construction of (spatial) geometrical operators, such as the area and the volume [41, 42].
Within the context of noncommutative spaces, the idea of length as an operator was proposed in [43], where it was shown to result in the quantization of length in the noncommutative space with the canonical/Weyl-Moyal type noncommutativity \[[\hat{x}^{\mu},\hat{x}^{\nu}]=i\theta^{\mu\nu} \tag{1}\] among the coordinates where \(\theta^{\mu\nu}\) is a constant and real antisymmetric matrix. The motivation behind such a proposal was the following. The noncommutativity of spatial coordinates is closely linked to the existence of a minimum length within a system. This minimum length is commonly associated with inherent uncertainties in distance measurements. Rather than associating the minimum length with uncertainties, it can be attributed to the minimum value of the quantized length. If length is taken as an operator with a spectrum of eigenvalues, then a set of ladder operators may exist to go from one eigenstate to another such that \[[\hat{L}^{2},\hat{a}]=\lambda\,\hat{a}, \tag{2}\] where \(\hat{L}^{2}\) is the operator corresponding to the square of length and \(\hat{a}\) and \(\hat{a}^{\dagger}\) being the ladder operators. If \(\hat{a}=\sum_{\mu}\alpha_{\mu}\,\hat{x}^{\mu}\), where \(\alpha_{\mu}\)'s are complex constants, then Eq.(2) leads to an eigenvalue equation which, in the case of 3-D with \(\theta^{12}=\theta^{13}=\theta^{23}=\theta\), gives three real eigenvalues and their corresponding eigenvectors. Two of the eigenvectors give the required ladder operators that change the length in the plane formed by them. The third eigenvector points in the direction normal to that plane and the length is not quantized along this normal direction [43]. In this paper, we follow an approach similar to the operator methods in the quantum harmonic oscillator and the angular momentum problems and apply it to the case of position-dependent noncommutativity. Position-dependent noncommutativity has been discussed before in [44, 45, 46, 47, 48]. In particular, the noncommutative parameter used in our approach is the following combination of the canonical/Weyl-Moyal type and Lie algebraic type: \[[\hat{x}^{\mu},\hat{x}^{\nu}]=i(\theta^{\mu\nu}+B^{\mu\nu}_{\ \ \rho}\hat{x}^{ \rho}), \tag{3}\] where \(\theta^{\mu\nu}\) corresponds to a constant and real antisymmetric matrix and \(B^{\mu\nu\rho}\) is real and completely antisymmetric. The paper is organized as follows. In Section 2, we establish an operator algebra of the length-square operator that allows for the raising and lowering of eigenvalues of \(\hat{L}^{2}\). In Section 3, we apply this approach in a 3-dimensional space. We construct a commutation relation between the operator \(\hat{L}^{2}\) corresponding to the square of length and the ladder operators analogous to the commutation relation between the Hamiltonian of the harmonic oscillator and its raising/lowering operator. We work with this commutation relation to obtain an eigenvalue equation and consequently construct a set of operators \(\hat{a}_{-}\), \(\hat{a}_{+}\) and \(\hat{b}\). Once we have obtained the operators, we adopt them to construct a ladder of states that constitute the eigenstates of the \(\hat{L}^{2}\) operator. In Section 4, we work out the eigenvalues of \(\hat{L}^{2}\) in terms of eigenvalues corresponding to the operators \(\hat{a}_{+}\hat{a}_{-}\) and \(\hat{b}\). 
The actions of \(\hat{a}_{-}\) and \(\hat{a}_{+}\) change not only the eigenvalues of \(\hat{L}^{2}\) but also the eigenvalues of \(\hat{b}\), and in this way, the length is quantized along the direction of \(\hat{b}\) also. In Section 5, by introducing another operator \(\hat{K}\) that commutes with the ladder operators, we investigate the system's degeneracy further. We conclude in Section 6. ## 2 Construction of Ladder Operators We establish an operator algebra in a manner that allows for the raising or lowering of eigenvalues of the operator \(\hat{L}^{2}\). We define the operator corresponding to the square of the distance as \[\hat{L}^{2}=g_{\mu\nu}\hat{x}^{\mu}\hat{x}^{\nu}, \tag{4}\] where \(g_{\mu\nu}\) is a constant symmetric metric of \(D\)-dimensional spacetime and Einstein's summation convention is used over the repeated indices \(\mu\) and \(\nu\) which take the values \(1,2,\ldots D\). The prescription to construct a set of ladder operators \(\{\hat{a}^{\mu}\}\) is that they satisfy the following commutation relation: \[[\hat{L}^{2},\hat{a}^{\mu}]=\lambda\,\hat{a}^{\mu}, \tag{5}\] where \(\lambda\) is a constant to be determined and \(\hat{a}^{\mu}\) is linearly related to \(\hat{x}^{\mu}\) as \[\hat{a}^{\mu}=U^{\mu}_{\ \nu}\,\hat{x}^{\nu}, \tag{6}\] where \(U^{\mu}_{\ \nu}\) is the transformation matrix. Substituting Eq.(4) and Eq.(6) into Eq.(5) and using Eq.(3) leads to the following operator equation: \[2i\,\theta_{\mu}^{\ \rho}\,U^{\sigma}_{\ \rho}\,\hat{x}^{\mu}\,+\,i\,U^{\sigma}_{\ \rho}\,B_{\nu}{}^{\rho}{}_{\kappa}\,(\hat{x}^{\nu}\,\hat{x}^{\kappa}\,+\,\hat{x}^{\kappa}\,\hat{x}^{\nu})=\lambda\,U^{\sigma}_{\ \mu}\,\hat{x}^{\mu}. \tag{7}\] Since \((\hat{x}^{\nu}\,\hat{x}^{\kappa}\,+\,\hat{x}^{\kappa}\,\hat{x}^{\nu})\) is symmetric and \(B_{\nu}{}^{\rho}{}_{\kappa}\) is antisymmetric under the exchange of \(\nu\) and \(\kappa\), the second term is zero, which leads to the eigenvalue equation for the transformation matrix: \[2i\,\theta_{\mu}^{\ \rho}\,U^{\sigma}_{\ \rho}\,=\lambda\,U^{\sigma}_{\ \mu}\,. \tag{8}\] If \(\hat{X}^{\dagger}=\left(\hat{x}^{1},\hat{x}^{2},\ldots\hat{x}^{D}\right)\) and \(g\) is the matrix form of the metric tensor, then \[\hat{L}^{2}=\hat{X}^{\dagger}\,g\,\hat{X}. \tag{9}\] To relate \(\hat{L}^{2}\) to the ladder operators we define \(\hat{A}^{\dagger}=\left(\hat{a}^{1\dagger},\hat{a}^{2\dagger},\ldots\hat{a}^{D\dagger}\right)\). Going by the analogy with harmonic oscillator and angular momentum problems, the ladder operators will be useful only if the length operator is related to number operators \((\hat{a}^{1})^{\dagger}\hat{a}^{1}\), \((\hat{a}^{2})^{\dagger}\hat{a}^{2},\ldots\). Therefore, we require \(\hat{L}^{2}\) in the following form: \[\hat{L}^{2}=\frac{1}{\gamma}\,\hat{A}^{\dagger}g\hat{A}=\frac{1}{\gamma}\,\hat{X}^{\dagger}U^{\dagger}\,g\,U\hat{X}, \tag{10}\] where \(\gamma\) is a constant. Comparing Eq.(9) and Eq.(10), we get the following condition for the transformation matrix: \[U^{\dagger}\,g\,U=\gamma\,g. \tag{11}\] ## 3 The Length Operator in 3-D Space We attempt to apply our approach to a 3-dimensional space. For this case, the commutation relation that we use can therefore be defined as \[[\hat{x}^{i},\hat{x}^{j}]=i(\theta^{ij}+B^{ij}_{\ \ k}\,\hat{x}_{k}). \tag{12}\] We define the ladder operator as \[\hat{a}=\alpha_{k}\hat{x}^{k}, \tag{13}\] where Einstein's summation convention is implied and \(\alpha_{k}\)'s are complex constants to be determined.
The operator corresponding to the square of the length, in this case, thus becomes \[\hat{L}^{2}=g_{ij}\hat{x}^{i}\hat{x}^{j}. \tag{14}\] As discussed, we assume the following relation in analogy with the angular momentum operator in quantum mechanics: \[[\hat{L}^{2},\hat{a}]=\lambda\hat{a}. \tag{15}\] Then, the substitution of Eq.(14) and Eq.(13) in Eq.(15) and using the commutator in Eq.(12) gives the relation \[2i\theta^{k}{}_{m}\alpha_{k}=\lambda\alpha_{m}, \tag{16}\] where \(\theta^{km}\) is a constant antisymmetric matrix and \(g_{ij}\) is assumed to be diag(1,1,1). We assume the entries of \(\theta^{km}\) to be \(\theta^{12}=\theta^{13}=\theta^{23}=\theta\), which leads Eq.(16) to the nontrivial eigenvalues \(\lambda=\pm 2\sqrt{3}\theta\). The third, trivial solution is \(\lambda=0\). The set of values for \(\alpha_{i}\) corresponding to \(\lambda=-2\sqrt{3}\theta\) is worked out to be \((\alpha_{1},\,\alpha_{2},\,\alpha_{3})=(\rho\sigma,\,-\rho\sigma^{*},\,\rho)\), where \(\rho=e^{i\delta_{1}}\) and \(\sigma=-e^{i\pi/3}\). The operator \(\hat{a}\) corresponding to this negative \(\lambda\) is denoted by \(\hat{a}_{-}\). It is identified with the lowering operation by comparing Eq.(15) with a negative \(\lambda\) to an analogous relation in the harmonic oscillator problem. The lowering operator can then be expressed as \[\hat{a}_{-}=\rho\left[\sigma\hat{x}^{1}-\sigma^{*}\hat{x}^{2}+\hat{x}^{3}\right]. \tag{17}\] The eigenvector for the positive value \(\lambda=2\sqrt{3}\theta\) leads to the raising operator \(\hat{a}_{+}=(\hat{a}_{-})^{\dagger}\). The eigenvector corresponding to the third eigenvalue, that is, the trivial solution \(\lambda=0\), leads to the operator \(\hat{b}=\beta_{i}\,\hat{x}^{i}\) with \((\beta_{1},\beta_{2},\beta_{3})=(1,-1,1)\). With this notation, we write the Hermitian conjugate of the basis \(\hat{A}\) in Eq.(10) as \(\hat{A}^{\dagger}=(\hat{a}_{+},\hat{a}_{-},\hat{b})\). The explicit values of \(\alpha_{i}\) in Eq.(16) and \(\beta_{i}\) have properties such as \(\alpha_{i}\alpha^{i}=0\), \(\alpha_{i}\,\beta^{i}=\alpha_{i}^{*}\,\beta^{i}=0\) and \(\theta^{ij}\beta_{j}=0\). Essentially, the three eigenvalues for \(\lambda\) in Eq.(15) lead to the following commutators \[[\hat{L}^{2},\hat{a}_{\pm}]=\pm 2\sqrt{3}\theta\hat{a}_{\pm}, \tag{18}\] and \[[\hat{L}^{2},\hat{b}]=0. \tag{19}\] Also, the commutation relations among the ladder operators \(\hat{a}_{-}\), \(\hat{a}_{+}\) and \(\hat{b}\) are obtained as \[[\hat{a}_{-},\hat{a}_{+}]=\sqrt{3}(3\theta+B\hat{b}), \tag{20}\] \[[\hat{b},\hat{a}_{\pm}]=\mp\sqrt{3}\,B\,\hat{a}_{\pm}, \tag{21}\] \[[\hat{a}_{+}\hat{a}_{-},\hat{a}_{+}]=\sqrt{3}(3\theta+B\hat{b}+\sqrt{3}B^{2})\hat{a}_{+}, \tag{22}\] \[[\hat{a}_{+}\hat{a}_{-},\hat{a}_{-}]=-\sqrt{3}(3\theta+B\hat{b})\hat{a}_{-}. \tag{23}\] With these commutators, the operator form of the square of length is expressed as \[\hat{L}^{2}=\frac{1}{3}[2\hat{a}_{+}\hat{a}_{-}+\sqrt{3}(3\theta+B\hat{b})+\hat{b}^{2}]. \tag{24}\] Eq.(18) leads to \([\hat{L}^{2},\hat{a}_{+}\hat{a}_{-}]=0\) and since \([\hat{L}^{2},\hat{b}]=0\), it is possible to construct a complete set of simultaneous eigenstates of \(\hat{L}^{2}\), \(\hat{a}_{+}\hat{a}_{-}\) and \(\hat{b}\). ## 4 Eigenvalues of the Length-Square Operator Let us start with an eigenstate \(|n\rangle\) of the number operator \(\hat{a}_{+}\hat{a}_{-}\) and consider the action of the ladder operators on it.
From Eq.(22) and Eq.(23), it is clear that the action of \(\hat{a}_{\pm}\) on \(|n\rangle\) is to raise or lower the eigenvalue of \(\hat{a}_{+}\hat{a}_{-}\). So, we define the actions of \(\hat{a}_{-}\), \(\hat{a}_{+}\) and \(\hat{b}\), respectively, on the normalized state \(|n\rangle\) as \[\hat{a}_{-}|n\rangle=h_{1}(n)|n-1\rangle, \tag{25}\] \[\hat{a}_{+}|n\rangle=h_{2}(n)|n+1\rangle, \tag{26}\] \[\hat{b}|n\rangle=g(n)|n\rangle. \tag{27}\] Considering the Hermitian conjugate of Eq.(25), we obtain \[\langle n|\hat{a}_{+}=\langle n-1|h_{1}^{*}(n). \tag{28}\] Thus, it is easily shown that \[\langle n|\hat{a}_{+}\hat{a}_{-}|n\rangle=\langle n-1|h_{1}^{*}(n)h_{1}(n)|n-1\rangle=|h_{1}(n)|^{2}. \tag{29}\] Similarly, upon taking the Hermitian conjugate of Eq.(26), we obtain \[\langle n|\hat{a}_{-}\hat{a}_{+}|n\rangle=|h_{2}(n)|^{2}. \tag{30}\] Rewriting \(\hat{a}_{-}\hat{a}_{+}\) in terms of the commutator, \(\langle n|\hat{a}_{-}\hat{a}_{+}|n\rangle=\langle n|(\hat{a}_{+}\hat{a}_{-}+[\hat{a}_{-},\hat{a}_{+}])|n\rangle\), and using the commutator in Eq.(20), we obtain \[\langle n|\hat{a}_{-}\hat{a}_{+}|n\rangle=\langle n|(\hat{a}_{+}\hat{a}_{-}+3\sqrt{3}\theta+\sqrt{3}B\hat{b})|n\rangle. \tag{31}\] This relation can be re-expressed using Eq.(29), Eq.(30) and Eq.(27) as \[|h_{2}(n)|^{2}=|h_{1}(n)|^{2}+3\sqrt{3}\theta+\sqrt{3}Bg(n). \tag{32}\] Considering the Hermitian conjugate of Eq.(25) with \(n+1\) in place of \(n\), we have \(\langle n+1|\hat{a}_{+}=\langle n|{h_{1}}^{*}(n+1)\). Therefore, \(\langle n+1|\hat{a}_{+}|n\rangle\) gives \({h_{1}}^{*}(n+1)\) on one hand and \(h_{2}(n)\) on the other, which leads to the relation \[|h_{2}(n)|^{2}=|h_{1}(n+1)|^{2}. \tag{33}\] Also, considering the action of \(\hat{b}\) on the states \(\hat{a}_{\pm}|n\rangle\) and using the commutator in Eq.(21), we can find the relation between \(g(n+p)\) and \(g(n)\) for any integer \(p\): \[g(n+p)=g(n)-pB\sqrt{3}. \tag{34}\] Eq.(18) implies that \(\hat{a}_{-}\) decreases the eigenvalue of \(\hat{L}^{2}\) by the amount \(2\sqrt{3}\theta\). This lowering cannot continue indefinitely, since the eigenvalues of \(\hat{L}^{2}\) would eventually become negative, which is unphysical. So, let us define a ground state of the system \(|\overline{n}\rangle\) such that the action of the lowering operator \(\hat{a}_{-}\) on it gives \[\hat{a}_{-}|\overline{n}\rangle=0. \tag{35}\] Here we have \(h_{1}(\overline{n})=0\), or \[|h_{1}(\overline{n})|^{2}=0. \tag{36}\] Using Eq.(32), Eq.(33) and Eq.(34), we can compute successive values of \(h_{1}\): \[|h_{1}(\overline{n}+1)|^{2}=[3\sqrt{3}\theta+\sqrt{3}Bg(\overline{n})], \tag{37}\] \[|h_{1}(\overline{n}+2)|^{2}=2[3\sqrt{3}\theta+\sqrt{3}Bg(\overline{n})]-3B^{2}, \tag{38}\] and so on. Extending the analysis, we obtain \[|h_{1}(\overline{n}+m)|^{2}=m[3\sqrt{3}\theta+\sqrt{3}B(g(\overline{n})-\frac{(m-1)}{2}\sqrt{3}B)]. \tag{39}\] For a general \(n=\overline{n}+m\) with \(m\geq 0\), the above expression becomes a function of \(n\) and \(\overline{n}\). In a similar manner, the general form of Eq.(34) is better expressed as a function of \(n\) and \(\overline{n}\): \[g(n)=g(\overline{n})-(n-\overline{n})\sqrt{3}B, \tag{40}\] where \(g(\overline{n})\) comes from the operator \(\hat{b}\) acting on the ground state, that is, \[\hat{b}|\overline{n}\rangle=g(\overline{n})|\overline{n}\rangle. \tag{41}\]
Accordingly, Eq.(39) can be rewritten as \[|h_{1}(n)|^{2}\!=\!(n-\overline{n})[3\sqrt{3}\theta+\sqrt{3}B(g(\overline{n})\!-\!\frac{(n-\overline{n}-1)}{2}\sqrt{3}B)]. \tag{42}\] We can also find, using Eq.(33), that \[|h_{2}(n)|^{2}=(n-\overline{n}+1)[3\sqrt{3}\theta+\sqrt{3}B(g(\overline{n})\!-\!\frac{(n-\overline{n})}{2}\sqrt{3}B)]. \tag{43}\] Since \(|h_{1}(n)|^{2}\) and \(|h_{2}(n)|^{2}\) cannot take negative values for any \(n\), we can infer that \[3\sqrt{3}\theta+\sqrt{3}Bg(\overline{n})\geq\frac{(n-\overline{n})}{2}3B^{2}. \tag{44}\] Suppose \(\tilde{n}\) is the maximum value \(n\) can take such that the above inequality holds. Then we can fix a topmost state \(|\tilde{n}\rangle\) such that \[\hat{a}_{+}|\tilde{n}\rangle=0. \tag{45}\] Now, since \(h_{2}(\tilde{n})=0\) and \(\tilde{n}\geq\overline{n}\), using Eq.(43) we can express \(g(\overline{n})\) in terms of \(\tilde{n}\) and \(\overline{n}\). Thus, we obtain \[g(\overline{n})=\frac{(\tilde{n}-\overline{n})}{2}\sqrt{3}B-\frac{3\theta}{B}. \tag{46}\] Putting Eq.(46) back into Eq.(44), we get \(\tilde{n}\geq n\). So essentially \(n=\overline{n},\ \overline{n}+1,\ \ldots,\tilde{n}\), while \(\tilde{n}=\overline{n},\ \overline{n}+1,\ldots\). But there is no restriction on \(\overline{n}\), and it can take both positive and negative integer values. We may now proceed to solve for the eigenvalues of \(\hat{L}^{2}\). The length-square operator in Eq.(24) leads to the eigenvalue equation \[\hat{L}^{2}|n\rangle=\frac{1}{3}\big{(}2|h_{1}(n)|^{2}+3\sqrt{3}\theta+\sqrt{3}Bg(n)+g^{2}(n)\big{)}|n\rangle. \tag{47}\] Using Eq.(42) and Eq.(40), the eigenvalue of the length-square operator can be re-expressed as \[\hat{L}^{2}|n\rangle=\frac{1}{3}\big{(}[2(n-\overline{n})+1]3\sqrt{3}\theta+\sqrt{3}Bg(\overline{n})+g^{2}(\overline{n})\big{)}|n\rangle. \tag{48}\] From the expression for \(g(\overline{n})\) deduced in Eq.(46), it is evident that \(g(\overline{n})\) depends on both \(\tilde{n}\) and \(\overline{n}\). Consequently, \(|h_{1}(n)|^{2}\) and \(|h_{2}(n)|^{2}\) also depend on \(\overline{n}\) and \(\tilde{n}\) in addition to \(n\). But \(\tilde{n}\) and \(\overline{n}\) do not take fixed values; they may assume any values for which Eq.(44) is obeyed. In light of this, the state of the system depends on \(n\), \(\tilde{n}\) and \(\overline{n}\), so three indices are required to label it, although the eigenvalues of \(\hat{L}^{2}\) depend only on \(n-\overline{n}\) and \(\tilde{n}-\overline{n}\). The state of the system is then better represented as \(|\tilde{n},\overline{n},n\rangle\). Now, employing Eq.(46) in Eq.(48) and simplifying, the eigenvalue of the length-square operator emerges as \[\hat{L}^{2}|\tilde{n},\overline{n},n\rangle=\big{(}[2n\!-\!\tilde{n}\!-\!\overline{n}]\sqrt{3}\theta\!+\!\frac{B^{2}}{4}(\tilde{n}-\overline{n})(\tilde{n}-\overline{n}+2)+\frac{3\theta^{2}}{B^{2}}\big{)}|\tilde{n},\overline{n},n\rangle. \tag{49}\] Fundamentally, we have derived the eigenvalues of \(\hat{L}^{2}\) by expressing them in terms of the eigenvalues associated with \(\hat{a}_{+}\hat{a}_{-}\) and \(\hat{b}\). The length is quantized in all directions, in contrast to [43], where the length was quantized only in a plane.
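The passage from Eq.(47) to Eq.(49) is a purely algebraic substitution and can be verified symbolically. The following sketch (assuming SymPy is available) checks that the two expressions agree identically in \(n\), \(\overline{n}\), \(\tilde{n}\), \(\theta\) and \(B\), and that the closed form in Eq.(42) satisfies the recursion implied by Eqs.(32)-(34):

```python
import sympy as sp

n, nbar, ntil = sp.symbols('n nbar ntilde')       # n, n-bar, n-tilde
theta, B = sp.symbols('theta B', positive=True)
s3 = sp.sqrt(3)

# Eq.(46): g(n-bar) fixed by the topmost state; Eq.(40): g(n)
g_bar = (ntil - nbar) * s3 * B / 2 - 3 * theta / B
g_n = g_bar - (n - nbar) * s3 * B

# Eq.(42): |h1(n)|^2
h1sq = (n - nbar) * (3 * s3 * theta + s3 * B * (g_bar - (n - nbar - 1) / 2 * s3 * B))

# Eq.(47): eigenvalue of L^2 before substitution
lhs = sp.Rational(1, 3) * (2 * h1sq + 3 * s3 * theta + s3 * B * g_n + g_n ** 2)

# Eq.(49): the closed form
rhs = (2 * n - ntil - nbar) * s3 * theta \
    + B ** 2 / 4 * (ntil - nbar) * (ntil - nbar + 2) + 3 * theta ** 2 / B ** 2

print(sp.simplify(lhs - rhs))   # 0

# recursion check: |h1(n+1)|^2 - |h1(n)|^2 = 3*sqrt(3)*theta + sqrt(3)*B*g(n)
print(sp.simplify(h1sq.subs(n, n + 1) - h1sq - (3 * s3 * theta + s3 * B * g_n)))  # 0
```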
It can be seen that if \(\theta\) is set to \(0\) in Eq.(49), the length-square spectrum reduces to that of the angular momentum problem, with \(\frac{\tilde{n}-\overline{n}}{2}\) playing the role of the angular momentum quantum number \(l\) and \(B\) playing the role of \(\hbar\). Note that \(n\) appears only in the square-bracketed first term of Eq.(49); since \(n\) takes values from \(\overline{n}\) to \(\tilde{n}\), this term equals \(-[\tilde{n}-\overline{n}]\sqrt{3}\theta\) when \(n=\overline{n}\) and \([\tilde{n}-\overline{n}]\sqrt{3}\theta\) when \(n=\tilde{n}\). Therefore, the minimum of \(\hat{L}^{2}\) occurs at \(n=\overline{n}\) (since \(\tilde{n}\geq\overline{n}\)), and this minimum is given by \[\hat{L}^{2}|\tilde{n},\overline{n},\overline{n}\rangle=\big{(}-[\tilde{n}\!-\!\overline{n}]\sqrt{3}\theta\!+\!\frac{B^{2}}{4}(\tilde{n}-\overline{n})(\tilde{n}-\overline{n}+2)+\frac{3\theta^{2}}{B^{2}}\big{)}|\tilde{n},\overline{n},\overline{n}\rangle. \tag{50}\] If \(\tilde{n}=\overline{n}\), the eigenvalue of \(\hat{L}^{2}\) is \(\frac{3\theta^{2}}{B^{2}}\). But the eigenvalue of \(\hat{L}^{2}\) can be lower than \(\frac{3\theta^{2}}{B^{2}}\) if \(\big{(}-[\tilde{n}-\overline{n}]\sqrt{3}\theta+\frac{B^{2}}{4}(\tilde{n}-\overline{n})(\tilde{n}-\overline{n}+2)\big{)}<0\), i.e., if \[\tilde{n}<\overline{n}+(\frac{4\sqrt{3}\theta}{B^{2}}-2). \tag{51}\] It is clear from Eq.(51) that if \((\frac{4\sqrt{3}\theta}{B^{2}}-2)\leq 0\), the inequality is violated and there cannot be a minimum lower than \(\frac{3\theta^{2}}{B^{2}}\); that is, if \(\frac{\theta}{B^{2}}\leq\frac{1}{2\sqrt{3}}\), the minimum eigenvalue of \(\hat{L}^{2}\) is \(\frac{3\theta^{2}}{B^{2}}\). On the other hand, when \(\tilde{n}=\overline{n}+m\) with \(m>0\), requiring the eigenvalue of \(\hat{L}^{2}\) to be non-negative leads to a real inequality between \(\tilde{n}\) and \(\overline{n}\) only if \(\frac{\theta}{B^{2}}\leq\frac{1}{4\sqrt{3}}\); this makes Eq.(51) inconsistent with \(\tilde{n}=\overline{n}+m\) for positive \(m\), so \((-[\tilde{n}-\overline{n}]\sqrt{3}\theta+\frac{B^{2}}{4}(\tilde{n}-\overline{n})(\tilde{n}-\overline{n}+2))\) cannot consistently be negative. Therefore, the minimum eigenvalue of \(\hat{L}^{2}\) is \(\frac{3\theta^{2}}{B^{2}}\). This is also clear from Eq.(48), since the minimum of \(g(\overline{n})\) in Eq.(46) is \(-\frac{3\theta}{B}\) and the minimum of \(n-\overline{n}\) is \(0\).
## 5 Degeneracy of States
The dependence of the state of the system on more than one index can be better understood by constructing another operator \(\hat{K}\) that commutes with \(\hat{a}_{\pm}\) and \(\hat{b}\). Since \([\hat{L}^{2},\hat{a}_{\pm}]=\pm 2\sqrt{3}\theta\hat{a}_{\pm}\) and \([\hat{b},\hat{a}_{\pm}]=\mp\sqrt{3}\,B\,\hat{a}_{\pm}\), the linear combination \(B\hat{L}^{2}+2\theta\hat{b}\) commutes with \(\hat{a}_{\pm}\). With minor adjustments, we instead construct the combination \[\hat{K}=\hat{L}^{2}+\frac{2\theta}{B}\hat{b}+\sqrt{3}\theta+\frac{3\theta^{2}}{B^{2}}, \tag{52}\] which also commutes with \(\hat{a}_{\pm}\). Since \(\hat{b}\) commutes with \(\hat{L}^{2}\), we can summarize the commutation relations as \[[\hat{K},\hat{a}_{\pm}]=0, \tag{53}\] \[[\hat{K},\hat{b}]=0. \tag{54}\]
These equations, along with Eq.(21), form a set analogous to the operator algebra of \(\hat{\cal L}^{2}\), \(\hat{\cal L}_{\pm}\) and \(\hat{\cal L}_{z}\) in the angular momentum problem in quantum mechanics. The eigenvalue of the operator \(\hat{K}\) acting on the state \(|\tilde{n},\overline{n},n\rangle\) can be evaluated from Eq.(49) and Eq.(52) as \[\hat{K}|\tilde{n},\overline{n},n\rangle=\left(\frac{B^{2}}{4}(\tilde{n}-\overline{n})(\tilde{n}-\overline{n}+2)+\sqrt{3}\,\theta\right)|\tilde{n},\overline{n},n\rangle. \tag{55}\] It is evident that the eigenvalue of \(\hat{K}\) is independent of \(n\). Moreover, its eigenvalues do not change if \(\tilde{n}\) and \(\overline{n}\) are changed while keeping \(\tilde{n}-\overline{n}\) fixed, resulting in a large degeneracy. The physical meaning of \(\hat{K}\) can be described in the following way. Eq.(10) expresses the invariance of the length operator in going from the basis \(\hat{X}\) to the basis \(\hat{A}\). Any constant eigenvalue of \(\hat{L}^{2}\) in Eq.(4) would, in the 3-D case, correspond to a sphere centered at the origin; in the basis \(\hat{A}\), the same constant eigenvalue of \(\hat{L}^{2}\) in Eq.(24) represents the same sphere centered at the origin. Writing \(\hat{K}\) in the form of Eq.(24) leads to the expression \[\hat{K}=\frac{1}{3}[2\hat{a}_{+}\hat{a}_{-}+3\sqrt{3}\theta+\sqrt{3}B\hat{b}^{\prime}+\hat{b}^{\prime\,2}], \tag{56}\] where \(\hat{b}^{\prime}=\hat{b}+\frac{3\theta}{B}\). If the constant eigenvalue of Eq.(24) represents a sphere centered at the origin, then the constant eigenvalue of Eq.(56) represents a sphere shifted along the \(\hat{b}\) axis by the amount \(-3\theta/B\). In Figure 1, the sphere corresponding to a constant eigenvalue of \(\hat{L}^{2}\) is represented by the dashed circle centered at \(O\), and the shifted sphere is the shaded sphere centered at \(C\).
Figure 1: Degenerate states of the operator \(\hat{K}\) lying on the shaded sphere.
The degenerate eigenstates of \(\hat{K}\) lie on the surface of this shifted sphere, but these states have different eigenvalues of \(\hat{L}^{2}\), since \(\hat{L}^{2}\) is measured from the origin \(O\). While \(CA\) and \(CB\), corresponding to the eigenvalues of \(\hat{K}\), are equal, \(OA\) and \(OB\), corresponding to the eigenvalues of \(\hat{L}^{2}\), are not. For example, the states \(|\tilde{n},\overline{n},n\rangle=|8,2,n\rangle\) with different \(n\)'s have different eigenvalues of \(\hat{L}^{2}\) in Eq.(49), but they have the same eigenvalue of \(\hat{K}\) in Eq.(55). On the other hand, the states \(|8,2,5\rangle\) and \(|9,3,6\rangle\) have the same eigenvalue of \(\hat{L}^{2}\) and the same eigenvalue of \(\hat{K}\). Such a common set of degenerate states of \(\hat{L}^{2}\) and \(\hat{K}\) lies on a circle perpendicular to the \(\hat{b}\) axis, since the shift happens along the \(\hat{b}\) axis. In the basis \((\hat{x}^{1},\hat{x}^{2},\hat{x}^{3})\), the shifted sphere corresponds to shifts along all three directions. In other words, if \(d^{i}\) denotes the shift along the \(x^{i}\) direction, then defining the operator \(\hat{y}^{i}=\hat{x}^{i}+d^{i}\) such that \(B^{ij}_{\ \ k}d^{k}=\theta^{ij}\) in Eq.(12) would lead to the Lie structure \([\hat{y}^{i},\hat{y}^{j}]=iB^{ij}_{\ \ k}\hat{y}^{k}\).
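Spelling out this last step: since the \(d^{i}\) are constants, they drop out of the commutator, and the condition \(B^{ij}_{\ \ k}d^{k}=\theta^{ij}\) exactly absorbs the constant part of Eq.(12), \[[\hat{y}^{i},\hat{y}^{j}]=[\hat{x}^{i},\hat{x}^{j}]=i\big{(}\theta^{ij}+B^{ij}_{\ \ k}\,\hat{x}_{k}\big{)}=i\big{(}B^{ij}_{\ \ k}\,d^{k}+B^{ij}_{\ \ k}\,(\hat{y}_{k}-d_{k})\big{)}=i\,B^{ij}_{\ \ k}\,\hat{y}_{k},\] where indices are raised and lowered with the flat metric \(\mathrm{diag}(1,1,1)\), so \(d^{k}=d_{k}\). The shifted coordinates therefore close into a pure Lie-algebra-type structure.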
Although this structure would lead to the quantization of \((\hat{y}^{1})^{2}+(\hat{y}^{2})^{2}+(\hat{y}^{3})^{2}\), our length operator is different: in terms of \(\hat{y}^{i}\) it is \((\hat{y}^{1}-d^{1})^{2}+(\hat{y}^{2}-d^{2})^{2}+(\hat{y}^{3}-d^{3})^{2}\). While the quantization method using \(\hat{y}^{i}\) would employ raising and lowering operations along the \(\hat{y}^{3}\) direction and keep the eigenvalues of \((\hat{y}^{1})^{2}+(\hat{y}^{2})^{2}+(\hat{y}^{3})^{2}\) fixed, it is easily shown that \(\hat{y}_{\pm}=\hat{y}^{1}\pm i\hat{y}^{2}\) neither raises/lowers the eigenstates of \(\hat{L}^{2}\) nor commutes with \(\hat{L}^{2}\); this calls for the approach that starts with Eq.(5) and looks for the properly oriented \(\hat{a}_{\pm}\) in place of \(\hat{y}_{\pm}\).
## 6 Conclusion
In conclusion, we have explored length quantization in the context of noncommutative spaces with position-dependent noncommutativity. Building upon a formalism similar to the quantum harmonic oscillator and angular momentum problems, we have constructed ladder operators and derived the operator corresponding to the length-square in terms of these ladder operators. This investigation has been applied specifically to the case of a 3-dimensional space in which the noncommutativity parameter involves a combination of the canonical/Weyl-Moyal type and the Lie type. We have found that length quantization in this scenario leads to distinct, discrete eigenvalues of the length-square operator. The ladder operators drive the behavior of the eigenvalues, resulting in the quantization of length not only within a plane but also along the direction normal to that plane. The derived ladder operators and their commutation relations have enabled us to construct a comprehensive operator for the square of length. This operator yields a structured ladder of eigenstates that are simultaneously eigenstates of both the length-square operator and certain combinations of ladder operators. Through this approach, we have given the formalism of how quantization occurs within spaces with position-dependent noncommutativity. Furthermore, we have identified the ground state and explored the maximum and minimum possible values of the quantum numbers associated with the ladder operators, thereby defining the range of valid eigenstates. The study of degenerate states constitutes a significant facet of our inquiry. In our analysis, we have examined the implications of position-dependent noncommutativity for the degeneracy of states within the framework of the length quantization problem. By introducing an operator \(\hat{K}\), we have unveiled the distinct conditions under which degenerate states emerge, shedding light on the intricate balance between spatial geometry and quantum behavior. The quantization of length in noncommutative spaces with position-dependent noncommutativity opens up new avenues for exploring the fundamental nature of spacetime and its implications for physical theories. Our findings may inspire further investigations into the intriguing interplay between geometry and quantum algebra in noncommutative spacetime settings. We have presented only the quantization of a bare fundamental geometric element, i.e., length. The construction of a field theory or mechanics with the underlying quantized geometric element is beyond the scope of this work.
2309.09525
Geometry of Banach algebra $\mathcal{A}$ and the bidual of $L^1(G,\mathcal{A})$
This article is intended towards the study of the bidual of the generalized group algebra $L^1(G,\mathcal{A})$ equipped with the two Arens products, where $G$ is any locally compact group and $\mathcal{A}$ is a Banach algebra. We show that the left topological center of $(L^1(G)\hat\otimes\mathcal{A})^{**}$ is a Banach $L^1(G)$-module if $G$ is abelian. Further, it also satisfies a permanence property with respect to the unitization of $\mathcal{A}$. We then use this fact to extend the remarkable result of A.M. Lau and V. Losert \cite{Lau-losert}, that the topological center of $L^1(G)^{**}$ is just $L^1(G)$, to the reflexive-Banach-algebra-valued case using the theory of vector measures. We further explore the pseudo-center of $L^1(G,\mathcal{A})$ for non-reflexive Banach algebras $\mathcal{A}$ and give a partial characterization of the elements of the pseudo-center using Cohen's factorization theorem. Along the way, we also observe a few consequences when $\mathcal{A}$ has the Radon-Nikodym property and weak sequential completeness.
Lav Kumar Singh
2023-09-18T07:02:11Z
http://arxiv.org/abs/2309.09525v1
# Geometry of Banach algebra \(\mathcal{A}\) and the Bidual of \(L^{1}(G)\hat{\otimes}\mathcal{A}\) ###### Abstract. This article is intended towards the study of the bidual of the generalized group algebra \(L^{1}(G,\mathcal{A})\) equipped with the two Arens products, where \(G\) is any locally compact group and \(\mathcal{A}\) is a Banach algebra. We show that the left topological center of \((L^{1}(G)\hat{\otimes}\mathcal{A})^{**}\) is a Banach \(L^{1}(G)\)-module if \(G\) is abelian. Further, it also satisfies a permanence property with respect to the unitization of \(\mathcal{A}\). We then use this fact to extend the remarkable result of A.M. Lau and V. Losert [6], that the topological center of \(L^{1}(G)^{**}\) is just \(L^{1}(G)\), to the reflexive-Banach-algebra-valued case using the theory of vector measures. We further explore the pseudo-center of \(L^{1}(G,\mathcal{A})\) for non-reflexive Banach algebras \(\mathcal{A}\) and give a partial characterization of the elements of the pseudo-center using Cohen's factorization theorem. Along the way, we also observe a few consequences when \(\mathcal{A}\) has the Radon-Nikodym property and weak sequential completeness. Key words and phrases: Banach algebras, Arens regularity, projective tensor product, Topological Center, Group algebra 2010 Mathematics Subject Classification: 47B10, 46B28, 46M05 This research was supported by the Institute post-doctoral fellowship provided to the author by the Indian Institute of Science Education and Research-Bhopal (India). Many of the techniques in the proofs are motivated by Lau and Losert's methods in [6], with significant generalization to the vector-valued setting, and avoid approximate identity arguments at each step. In the last section, we give a nice application of Cohen's factorization theorem to give a partial characterization of the elements of the pseudo-center (defined in Section 2). Finally, we end the article with a weak-sequential-completeness-like property for Banach spaces \(\mathcal{A}\) that have the RNP and are WSC. For studying vector-valued function spaces, it is inevitable to encounter Bochner integrals and vector measures, and hence we revisit the basics in the next section.
## 2. Preliminaries on vector-valued functions and integration
Let \((S,\mathscr{A},\mu)\) be a measure space and \(X\) be a Banach space. A function \(h:S\to X\) is said to be \(\mu\)-_simple_ if \(h=\sum_{i=1}^{n}\chi_{A_{i}}x_{i}\), where \(A_{i}\in\mathscr{A}\) with \(\mu(A_{i})<\infty\) for each \(i\), and \(x_{1},x_{2},\ldots,x_{n}\in X\). A function \(f:S\to X\) is said to be \(\mu\)-_strongly measurable_ if there exists a sequence of \(\mu\)-simple functions converging pointwise to \(f\) almost everywhere. The Pettis measurability theorem states that \(f\) is \(\mu\)-strongly measurable if and only if it is \(\mu\)-Borel measurable and \(\mu\)-essentially separably valued (see [8] and [3] for more details). For each \(\mu\)-simple function \(h=\sum_{i=1}^{r}\chi_{A_{i}}x_{i}\), we define \(\int hd\mu=\sum_{i=1}^{r}\mu(A_{i})x_{i}\). A \(\mu\)-strongly measurable function \(f:S\to X\) is said to be _Bochner integrable_ with respect to \(\mu\) if there exists a sequence of \(\mu\)-simple functions \(f_{n}:S\to X\) such that \(\lim_{n}\int||f(s)-f_{n}(s)||d\mu\to 0\).
The Bochner integral is then given by \[\int fd\mu=\lim_{n}\int f_{n}d\mu.\] For \(1\leq p<\infty\), we denote by \(L^{p}(S,X)\) the collection of equivalence classes of \(\mu\)-strongly measurable functions \(f\) which are \(\mu\)-almost everywhere equal and satisfy \(\int||f(s)||^{p}d\mu<\infty\). \(L^{p}(S,X)\) becomes a Banach space with respect to the norm \(||f||_{L^{p}(S,X)}=\left(\int||f(s)||^{p}d\mu\right)^{1/p}\). The space \(L^{\infty}(S,X)\) is the collection of equivalence classes of functions which are essentially bounded in the natural sense. \(L^{\infty}(S,X)\) is also a Banach space with respect to the norm \[||f||_{L^{\infty}(S,X)}=\inf\{M\ :\ \mu\left(\{s:\ ||f(s)||>M\}\right)=0\}.\] **Definition 1**.: A function \(F:\mathscr{A}\to X\) is called an \(X\)_-valued measure_ on \((S,\mathscr{A})\) if it is countably additive in the sense that for any disjoint sequence \(\{A_{i}\}\) of sets in \(\mathscr{A}\), we have \(F(\cup A_{i})=\sum F(A_{i})\), where the series on the right-hand side converges in norm. The _variation_ of an \(X\)-valued measure is the map \(||F||:\mathscr{A}\to[0,\infty]\) given by \[||F||(A)=\sup\Big{\{}\sum_{i=1}^{n}||F(A_{i})||\ :\ A_{i}\in\mathscr{A},\cup_{i=1}^{n}A_{i}=A\ \text{and}\ A_{i}\cap A_{j}=\phi\ \text{for}\ i\neq j,n\in\mathbb{N}\Big{\}}.\] We say that \(F\) has bounded variation if \(||F||(S)<\infty\). Further, \(F\) is said to be absolutely continuous with respect to \(\mu\) if for any \(A\in\mathscr{A}\) with \(\mu(A)=0\), we have \(F(A)=0\). We denote by \(M(S,X)\) the Banach space of all \(X\)-valued measures on \((S,\mathscr{A})\) of bounded variation, equipped with the variation norm \(||F||=||F||(S)\). Further, \(B(S,X)\) denotes the closed subspace of \(M(S,X)\) consisting of \(X\)-valued measures of bounded variation which are absolutely continuous with respect to \(\mu\). An \(X\)-valued measure \(F\) is said to be _regular_ if \(\varphi\circ F\) is a regular measure for each \(\varphi\in X^{*}\). Equivalently, for every set \(E\in\mathscr{A}\) and \(\epsilon>0\), there exist a closed set \(A\in\mathscr{A}\) contained in \(E\) and an open set \(B\in\mathscr{A}\) containing \(E\) such that \(||F(E\setminus A)||<\epsilon\) and \(||F(B\setminus E)||<\epsilon\). We denote the Banach subspace of all regular measures in \(M(S,X)\) by \(M_{r}(S,X)\). Clearly \(B(S,X)\) is a subspace of \(M_{r}(S,X)\) if \(X\) has the RNP. **Theorem 1**.: _[_8_, Section 5.3]_ _If \((S,\mathscr{B}(S),\mu)\) is a compact Hausdorff measure space and \(X\) is a Banach space, then \(C(S,X)^{*}\) can be isometrically identified with the space \(M_{r}(S,X^{*})\) of all regular \(X^{*}\)-valued measures of bounded variation on the Borel subsets of \(S\)._ Proof.: An element \(\phi\) of \(C(S,X)^{*}\) gives rise to a family of measures on \(S\) in the following way. Fixing \(\xi\in X\), one can define a linear functional \(L_{\phi,\xi}\) on \(C(S)\) by sending a function \(f\) on \(S\) to the value of \(\phi\) on the function \(S\to X\) given by \(s\mapsto f(s)\xi\): \[L_{\phi,\xi}(f)=\phi\left(f\otimes\xi\right).\] From the usual Riesz theorem, there is then a measure \(\mu_{\phi,\xi}\) defined on the Borel subsets of \(S\) satisfying \[L_{\phi,\xi}(f)=\int_{S}f\,d\mu_{\phi,\xi}.\] Corresponding to each pair \(\phi\in C(S,X)^{*}\) and \(\xi\in X\), we have obtained a measure on \(S\).
Now define a map \(\mu_{\phi}\) from the Borel subsets of \(S\) to \(X^{*}\) as follows: for any Borel subset \(E\) of \(S\), define \(\mu_{\phi}(E)\) to be the linear functional on \(X\) given by \[\mu_{\phi}(E)(\xi)=\int_{E}\,d\mu_{\phi,\xi}.\] The map \(\mu_{\phi}\) is a regular \(X^{*}\)-valued measure on \(S\). Since the functions of the form \(x\mapsto f(x)\xi\), with \(f\in C(S)\) and \(\xi\in X\), are dense in \(C(S,X)\), it is easy to show that \(\phi\) is uniquely determined by \(\mu_{\phi}\). Conversely, starting with an \(X^{*}\)-valued regular measure, one can show that it must be \(\mu_{\phi}\) for some \(\phi\) in \(C(S,X)^{*}\). Thus, \(\phi\mapsto\mu_{\phi}\) is an isometric isomorphism with respect to the variation norm. Notice that for any \(\phi\in L^{1}(S,X)\), the function \(F:\mathscr{A}\to X\) defined as \[F(A)=\int_{A}\phi(s)d\mu(s)\] for each \(A\in\mathscr{A}\) is a well defined vector measure of bounded variation which is absolutely continuous with respect to \(\mu\), and \(||F||(A)=\int_{A}||\phi(s)||d\mu(s)\). It turns out that if \(X\) has a nice geometric property called the Radon-Nikodym property (RNP), then every vector measure of bounded variation that is absolutely continuous with respect to \(\mu\) arises in this fashion. **Definition 2**.: A Banach space \(X\) is said to have the Radon-Nikodym property (RNP) with respect to a measure space \((S,\mathscr{A},\mu)\) if for every \(X\)-valued measure \(F\) of bounded variation on \((S,\mathscr{A})\) that is absolutely continuous with respect to \(\mu\), there exists a function \(\phi\in L^{1}(S,X)\) such that \[F(A)=\int_{A}\phi d\mu,\ A\in\mathscr{A}.\] Thus, if \(X\) has the RNP, then \(B(S,X)\) can be isometrically identified with \(L^{1}(S,X)\). **Definition 3**.: A bounded operator \(T:L^{1}(S)\to X\) is said to be representable if there exists a function \(\phi\in L^{\infty}(S,X)\) such that \[Tf=\int_{S}f\phi d\mu,\ f\in L^{1}(S).\] **Theorem 2**.: _[_3_, Th. 1.3.10]_ _Let \((S,\mathscr{A},\mu)\) be a \(\sigma\)-finite measure space and \(X\) be a Banach space, and let \(1\leq p<\infty\) and \(\frac{1}{p}+\frac{1}{q}=1\). The following assertions are equivalent._ 1. \(X^{*}\) _has the RNP with respect to_ \((S,\mathscr{A},\mu)\)_._ 2. The mapping \(g\mapsto\phi_{g}\) establishes an isometric isomorphism of Banach spaces \[L^{q}(S,X^{*})\simeq(L^{p}(S,X))^{*},\] where \(\langle\phi_{g},f\rangle=\int\langle g(s),f(s)\rangle\,d\mu(s)\) for each \(f\in L^{p}(S,X)\). **Remark 2.1**.: _If \(X^{*}\) does not have the RNP w.r.t. \((S,\mathscr{A},\mu)\), then \(g\mapsto\phi_{g}\) is still an isometry onto a norming subspace of \(L^{p}(S,X)^{*}\), but it may not be surjective in general._ **Theorem 3**.: _[_3_, Th. 1.3.15]_ _Let \((S,\mathscr{A},\mu)\) be a \(\sigma\)-finite measure space. For a Banach space \(X\), the following assertions are equivalent._ 1. \(X\) _has the_ \(\operatorname{RNP}\) _with respect to_ \((S,\mathscr{A},\mu)\)_._ 2. _Every bounded linear operator_ \(T:L^{1}(S)\to X\) _is representable._ It is a standard fact that if \(X\) is reflexive or is a separable dual space, then \(X\) possesses the RNP with respect to any \(\sigma\)-finite measure space. The Fourier algebra \(A(G)\) has the \(\operatorname{RNP}\) if and only if \(G\) is compact. The algebra of trace-class operators on a Hilbert space has the \(\operatorname{RNP}\). The dual of a \(C^{*}\)-algebra \(\mathcal{A}\) has the \(\operatorname{RNP}\) if and only if \(\mathcal{A}\) is scattered.
The predual of a von Neumann algebra has the \(\operatorname{RNP}\) if and only if the algebra is a direct sum of type-\(I\) factors (see [2]). The spaces \(c_{0},\ \ell^{\infty},\ C[0,1]\) and \(L^{1}([0,1])\) do not have the \(\operatorname{RNP}\). Proofs of these facts can be found in any standard text on the geometry of Banach spaces. The \(\operatorname{RNP}\) defined above is relative to a measure space. Interestingly, the \(\operatorname{RNP}\) is particularly well behaved with respect to \(\sigma\)-finite measure spaces. **Theorem 4**.: _[_3_, Th. 1.3.26]_ _For a Banach space \(X\) the following are equivalent._ 1. \(X\) _has the_ \(\operatorname{RNP}\) _with respect to_ \([0,1]\)_._ 2. \(X\) _has the_ \(\operatorname{RNP}\) _with respect to any_ \(\sigma\)_-finite measure space._ In view of the above theorem, we will say that \(X\) has the \(\operatorname{RNP}\), without referring to a measure space, whenever the measure space involved is \(\sigma\)-finite. **Definition 4**.: A Banach space \(X\) is said to be _Weakly Sequentially Complete_ (WSC) if every weakly Cauchy sequence \(\{\phi_{n}\}_{n=1}^{\infty}\) in \(X\) converges weakly to some element of \(X\). It is a well-known and easy-to-prove fact that \(L^{1}(S)\) is \(\operatorname{WSC}\) for any measure space \(S\). All reflexive spaces are trivially \(\operatorname{WSC}\). If \(X\) is \(\operatorname{WSC}\) and \(\mu\) is a finite measure, then \(L^{1}(S,X)\) is also \(\operatorname{WSC}\) [15, Th. 11]. The Fourier algebra \(A(G)\) is \(\operatorname{WSC}\) for any locally compact Hausdorff group. A \(C^{*}\)-algebra is \(\operatorname{WSC}\) if and only if it is finite dimensional. Now, if \(\mathcal{A}\) is a Banach algebra and \(G\) is any locally compact Hausdorff group, then \(L^{1}(G,\mathcal{A})\) can be given an algebra structure through the convolution \(f*g(t)=\int_{G}f(s)g(s^{-1}t)d\mu(s)\), with respect to which \(L^{1}(G,\mathcal{A})\) becomes a Banach algebra. This generalized group algebra can be identified with the projective tensor product \(L^{1}(G)\hat{\otimes}\mathcal{A}\) (see [5] for the proof). We shall make use of Cohen's factorization theorem, as stated below, in the coming sections. **Theorem 5** (Cohen's Factorization).: Let \(\mathscr{A}\) be a Banach algebra and \(\mathscr{K}\) a left Banach \(\mathscr{A}\)-module, and suppose that \(\mathscr{A}\) has a left approximate identity \(\{e_{\beta}\}_{\beta\in B}\), bounded by some constant \(\delta\geq 1\), that is also a left approximate identity for \(\mathscr{K}\). Then for any \(x_{o}\in\mathscr{K}\) and any \(\epsilon>0\), there exist \(a\in\mathscr{A}\) and \(x\in\mathscr{K}\) such that \(x_{o}=ax\), \(||a||\leq\delta\) and \(||x-x_{o}||\leq\epsilon\).
## 3. Topological Center of \(L^{1}(G,\mathcal{A})^{**}\) as an \(L^{1}(G)\)-module
Given any Banach algebra \(A\), due to the Hahn-Banach theorem, we have an isometric embedding \(J:A\to A^{**}\) of Banach spaces. For each \(a\in A\) and \(f\in A^{*}\), we define two functionals \(f_{a},\,{}_{a}f\in A^{*}\) as \[f_{a}(b)=f(ab)\;,\;{}_{a}f(b)=f(ba)\ \text{ for all }b\in A.\] Further, for each \(f\in A^{*}\) and \(m\in A^{**}\), we define \(f_{m},{}_{m}f\in A^{*}\) as \[f_{m}(a)=m({}_{a}f)\,,\ \ {}_{m}f(a)=m(f_{a})\ \ \forall a\in A.\] Now, the two Arens products on \(A^{**}\) are defined as \[m\,\square\,n(f)=m({}_{n}f)\,,\ \ m\diamond n(f)=n(f_{m})\] for each \(m,n\in A^{**}\) and \(f\in A^{*}\).
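As a quick consistency check, these definitions can be unwound on the canonical image \(J(A)\): for \(a,b\in A\) and \(f\in A^{*}\), \[J(a)\,\square\,J(b)(f)={}_{J(b)}f(a)=J(b)(f_{a})=f_{a}(b)=f(ab),\] \[J(a)\diamond J(b)(f)=f_{J(a)}(b)=J(a)({}_{b}f)={}_{b}f(a)=f(ab),\] so \(J(a)\,\square\,J(b)=J(a)\diamond J(b)=J(ab)\); this is the precise sense in which both Arens products extend the multiplication of \(A\).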
It is a standard fact that \(A^{**}\) becomes a Banach algebra with respect to both Arens products \(\square\) and \(\diamond\), that the two Arens products agree on \(J(A)\), and that the embedding \(J\) is an algebra homomorphism with respect to both products. We denote by \(Z(A^{**})\) the (left) topological center: \[Z(A^{**}) =\{m\in A^{**}\ :\ m\,\square\,n=m\diamond n\ \forall n\in A^{**}\}\] \[=\{m\in A^{**}\ :\ n\mapsto m\,\square\,n\ \text{is}\ w^{*}\ \text{continuous}\}. \tag{3.1}\] Clearly, \(Z(A^{**})\) is itself a Banach algebra. For Banach algebras \(\mathcal{A}\) and \(\mathcal{B}\), their direct sum \(\mathcal{A}\oplus\mathcal{B}\) is a Banach algebra when equipped with coordinatewise multiplication \((a,b)\cdot(c,d)=(ac,bd)\) and norm \(||(a,b)||=||a||+||b||\). Further, we have the dual-space identification \((\mathcal{A}\oplus\mathcal{B})^{*}=\mathcal{A}^{*}\oplus_{\infty}\mathcal{B}^{*}\) and \[((\mathcal{A}\oplus\mathcal{B})^{**},\square)=(\mathcal{A}^{**},\square)\oplus(\mathcal{B}^{**},\square),\] \[((\mathcal{A}\oplus\mathcal{B})^{**},\diamond)=(\mathcal{A}^{**},\diamond)\oplus(\mathcal{B}^{**},\diamond).\] The following permanence property is elementary in nature but will help us significantly in calculating the topological center. **Theorem 6**.: Let \(\mathcal{A}\) and \(\mathcal{B}\) be Banach algebras. Then \(\mathcal{A}\oplus\mathcal{B}\) is SAI (strongly Arens irregular, i.e., its topological center is the algebra itself) if and only if both \(\mathcal{A}\) and \(\mathcal{B}\) are SAI. Recall that the minimal unitization of a non-unital Banach algebra \(\mathcal{A}\) is \(\tilde{\mathcal{A}}=\mathcal{A}\oplus\mathbb{C}\) equipped with the norm \(||(a,\alpha)||=||a||+|\alpha|\) and multiplication \((a,\alpha)(b,\beta)=(ab+\beta a+\alpha b,\alpha\beta)\). If Banach algebras \(\mathcal{A}\) and \(\mathcal{B}\) are such that \(\mathcal{A}\) is a right Banach \(\mathcal{B}\)-module, i.e., there exists a continuous bilinear map \(\mathfrak{m}:\mathcal{A}\times\mathcal{B}\to\mathcal{A}\) of norm one giving an algebraic module structure, then we form the direct sum Banach algebra \(\mathcal{A}\bar{\oplus}\mathcal{B}\) with norm \(||(a,b)||=||a||+||b||\) and multiplication \((a,b)(c,d)=(ac+bc+ad,bd)\). We can consider \(L^{1}(G,\mathcal{A})\) as a right \(L^{1}(G)\)-module naturally via \(\mathfrak{m}:L^{1}(G,\mathcal{A})\times L^{1}(G)\to L^{1}(G,\mathcal{A})\) defined as \(\mathfrak{m}(\phi,F)=\phi*F\). **Theorem 7**.: The map \(\theta:L^{1}(G,\tilde{\mathcal{A}})\to L^{1}(G,\mathcal{A})\bar{\oplus}L^{1}(G)\) is an isometric isomorphism of algebras, where \(\theta\) is defined as \(\theta(\phi)=(\pi_{1}\phi,\pi_{2}\phi)\) for each \(\phi\in L^{1}(G,\tilde{\mathcal{A}})\), with \(\pi_{1},\pi_{2}\) the coordinate projections of \(\tilde{\mathcal{A}}=\mathcal{A}\oplus\mathbb{C}\). Proof.: Notice that \[||\theta(\phi)|| =||\pi_{1}\phi||+||\pi_{2}\phi||\] \[=\int||\pi_{1}\phi(t)||dt+\int|\pi_{2}\phi(t)|dt\] \[=\int||\phi(t)||dt\] \[=||\phi||_{L^{1}(G,\tilde{\mathcal{A}})},\] i.e., \(\theta\) is an isometry. That \(\theta\) is surjective is easily seen.
Further, for any two \(\phi_{1},\phi_{2}\in L^{1}(G,\tilde{\mathcal{A}})\), notice that \[\phi_{1}*\phi_{2}(t)=\int\phi_{1}(s)\phi_{2}(s^{-1}t)\,ds\] \[=\lim_{\beta}m\,\square\,\mathfrak{n}_{\beta}(Ff)\qquad\text{(here we use }m\in Z(L^{1}(G,\mathcal{A})^{**}))\] \[=\langle{}_{n}(F),m\rangle. \tag{3.3}\] Thus we see that \(\mathfrak{m}^{***}(m,F)\,\square\,n(f)=\mathfrak{m}^{***}(m,F)\diamond n(f)\) for all \(f\in L^{1}(G,\mathcal{A})^{*}\). Hence \(\mathfrak{m}^{***}(m,F)\in Z(L^{1}(G,\mathcal{A})^{**})\). Further, it is easy to verify that \(\mathfrak{m}^{***}(m,F_{1}*F_{2})=\mathfrak{m}^{***}(\mathfrak{m}^{***}(m,F_{1}),F_{2})\) holds for each \(m\in Z(L^{1}(G,\mathcal{A})^{**})\) and \(F_{1},F_{2}\in L^{1}(G)\). Thus, the restriction of \(\mathfrak{m}^{***}\) to \(Z(L^{1}(G,\mathcal{A})^{**})\times L^{1}(G)\) gives the required \(L^{1}(G)\)-module structure on \(Z(L^{1}(G,\mathcal{A})^{**})\). **Theorem 9**.: If \(G\) is a locally compact abelian group and \(\mathcal{A}\) is a Banach algebra, then \[Z(L^{1}(G,\tilde{\mathcal{A}})^{**})\cong Z(L^{1}(G,\mathcal{A})^{**})\bar{\oplus}L^{1}(G)\] is an isometric isomorphism of algebras. Proof.: Consider the double adjoint of \(\theta\), \[\theta^{**}:L^{1}(G,\tilde{\mathcal{A}})^{**}\to L^{1}(G,\mathcal{A})^{**}\oplus L^{1}(G)^{**}.\] Clearly \(\theta^{**}\) is an isometric isomorphism of Banach spaces, because \(\theta\) is. We claim that the restriction of \(\theta^{**}\) to \(Z(L^{1}(G,\tilde{\mathcal{A}})^{**})\) gives the desired isometric isomorphism of algebras between \(Z(L^{1}(G,\tilde{\mathcal{A}})^{**})\) and \(Z(L^{1}(G,\mathcal{A})^{**})\bar{\oplus}L^{1}(G)\). To see this, let \(\tilde{m}\in Z(L^{1}(G,\tilde{\mathcal{A}})^{**})\), \(n\in L^{1}(G,\mathcal{A})^{**}\) and \(f\in L^{1}(G,\mathcal{A})^{*}\). It is easy to verify that \[{}_{(n,0)}(f,0)(\phi,\psi)={}_{n}f(\phi)+{}_{n}f(\psi)\qquad\forall(\phi,\psi)\in L^{1}(G,\mathcal{A})\bar{\oplus}L^{1}(G). \tag{3.4}\] Thus, \[\tilde{m}\,\square\,(n,0)(f,0) =\tilde{m}\left({}_{(n,0)}(f,0)\right)\] \[=\lim_{\alpha}{}_{(n,0)}(f,0)(\pi_{1}\theta(\tilde{m}_{\alpha}),\pi_{2}\theta(\tilde{m}_{\alpha}))\] \[=\pi_{1}\theta^{**}(\tilde{m})\,\square\,n(f)+{}_{n}f(\pi_{2}\theta^{**}(\tilde{m})).\qquad\text{(using eq.\ (3.4))} \tag{3.5}\] Similarly, one can prove that \[\tilde{m}\diamond(n,0)(f,0)=\pi_{1}\theta^{**}(\tilde{m})\diamond n(f)+{}_{n}f(\pi_{2}\theta^{**}(\tilde{m})). \tag{3.6}\] Since \(\tilde{m}\in Z(L^{1}(G,\tilde{\mathcal{A}})^{**})\), from eq. (3.5) and eq. (3.6) we deduce that \(\pi_{1}\theta^{**}(\tilde{m})\,\square\,n=\pi_{1}\theta^{**}(\tilde{m})\diamond n\) for each \(n\in L^{1}(G,\mathcal{A})^{**}\), i.e., \(\pi_{1}\theta^{**}(\tilde{m})\in Z(L^{1}(G,\mathcal{A})^{**})\). Similarly, one can show that \(\pi_{2}\theta^{**}(\tilde{m})\in Z(L^{1}(G)^{**})=L^{1}(G)\). Hence, \(\theta^{**}_{|Z(L^{1}(G,\tilde{\mathcal{A}})^{**})}\) is a well defined isometric linear map into \(Z(L^{1}(G,\mathcal{A})^{**})\oplus L^{1}(G)\). Further, it can be easily verified that this map is surjective and is a homomorphism with respect to the \(\bar{\oplus}\) structure on the right-hand side.
**Corollary 3.1**.: _For a locally compact abelian group \(G\) and a Banach algebra \(\mathcal{A}\), the group algebra \(L^{1}(G,\mathcal{A})\) is SAI if and only if \(L^{1}(G,\tilde{\mathcal{A}})\) is SAI._ **Definition 3.2**.: _For \(L^{1}(G,\mathcal{A})\), the (left) topological pseudo-center is defined as_ \[Z_{s}(L^{1}(G,\mathcal{A})^{**}) = \{m\in L^{1}(G,\mathcal{A})^{**}\ :\ m\,\square\,n(f)=m\diamond n(f)\ \forall f\in L^{\infty}(G,\mathcal{A}^{*}),\,n\in L^{1}(G,\mathcal{A})^{**}\}\] \[= \{m\in L^{1}(G,\mathcal{A})^{**}\ :\ n\mapsto m\,\square\,n\text{ is }\sigma(L^{1}(G,\mathcal{A})^{**},L^{\infty}(G,\mathcal{A}^{*}))\text{-continuous}\}.\] Clearly, \(Z_{s}(L^{1}(G,\mathcal{A})^{**})=Z(L^{1}(G,\mathcal{A})^{**})\) if \(\mathcal{A}^{*}\) has the RNP. In general, the pseudo-center is a bigger class. We can consider \(L^{1}(G,\mathcal{A})\) as a subalgebra of both \(L^{1}(G,(\mathcal{A}^{**},\square))\) and \(L^{1}(G,(\mathcal{A}^{**},\diamond))\) naturally, since \(\mathcal{A}\) is a subalgebra of both \((\mathcal{A}^{**},\square)\) and \((\mathcal{A}^{**},\diamond)\). We notice the following consequence of \(\mathcal{A}^{*}\) having the RNP. **Theorem 10**.: Let \(\mathcal{A}\) be a Banach algebra such that \(\mathcal{A}^{*}\) has the RNP. Then \(\theta_{\square}:L^{1}(G,(\mathcal{A}^{**},\square))\to\left(L^{1}(G,\mathcal{A})^{**},\square\right)\) and \(\theta_{\diamond}:L^{1}(G,(\mathcal{A}^{**},\diamond))\to\left(L^{1}(G,\mathcal{A})^{**},\diamond\right)\) are isometric homomorphisms, where \(\theta_{\square}(\phi)(f)=\theta_{\diamond}(\phi)(f)=\int_{G}\left\langle f(t),\phi(t)\right\rangle d\mu(t)\) for all \(\phi\in L^{1}(G,\mathcal{A}^{**})\) and \(f\in L^{\infty}(G,\mathcal{A}^{*})\). Proof.: Since \(\mathcal{A}^{*}\) has the RNP, we have the dual identification \(L^{1}(G,\mathcal{A})^{*}=L^{\infty}(G,\mathcal{A}^{*})\). Clearly, \(\theta_{\square}\) and \(\theta_{\diamond}\) are well defined linear maps which coincide at the Banach space level. Further, \(||\theta_{\square}(\phi)||\leq||\phi||_{L^{1}(G,\mathcal{A}^{**})}\). Since \(\phi\) is Bochner integrable, there exists a sequence of \(\mu\)-simple functions \(\{\psi_{n}\}\) in \(L^{1}(G,\mathcal{A}^{**})\) which converges pointwise to \(\phi\) almost everywhere such that \(\lim_{n}\int_{G}||\psi_{n}(t)-\phi(t)||d\mu(t)\to 0\). For \(\epsilon>0\), fix a natural number \(N\) such that \(\int_{G}||\psi_{N}(t)-\phi(t)||d\mu(t)<\epsilon/3\). Let \(\psi_{N}=\sum_{i=1}^{r}\chi_{A_{i}}a_{i}^{**}\) for some disjoint \(A_{1},A_{2},\ldots,A_{r}\in\mathscr{B}(G)\) with \(\mu(A_{i})<\infty\) for each \(i\) and \(a_{1}^{**},\ldots,a_{r}^{**}\in\mathcal{A}^{**}\). For each \(i\), choose \(b_{i}^{*}\in\mathcal{A}^{*}\) with \(||b_{i}^{*}||\leq 1\) such that \(\big{|}a_{i}^{**}(b_{i}^{*})-||a_{i}^{**}||\big{|}<\epsilon/3\). Now define \(f:G\to\mathcal{A}^{*}\) as \(f=\sum_{i=1}^{r}\chi_{A_{i}}b_{i}^{*}\). Clearly \(f\) is a \(\mu\)-simple function and \(f\in L^{\infty}(G,\mathcal{A}^{*})\) with \(||f||_{L^{\infty}(G,\mathcal{A}^{*})}\leq 1\).
Now notice that \[\left|\int_{G}\left\langle f(t),\phi(t)\right\rangle d\mu(t)-\int_{G}||\phi(t)||d\mu(t)\right| = \left|\int_{G}\Big{\langle}\sum_{i=1}^{r}\chi_{A_{i}}b_{i}^{*},\phi(t)\Big{\rangle}-||\phi(t)||\,d\mu(t)\right|\] \[\leq \sum_{i=1}^{r}\int_{A_{i}}\big{|}\langle b_{i}^{*},\phi(t)\rangle-||\phi(t)||\big{|}\,d\mu(t)\] \[\leq \sum_{i=1}^{r}\int_{A_{i}}\big{|}\langle b_{i}^{*},\phi(t)\rangle-||a_{i}^{**}||\big{|}\,d\mu(t)+\sum_{i=1}^{r}\int_{A_{i}}\big{|}\,||a_{i}^{**}||-||\phi(t)||\,\big{|}\,d\mu(t)\] \[\leq \sum_{i=1}^{r}\int_{A_{i}}\big{|}\langle b_{i}^{*},\phi(t)\rangle-a_{i}^{**}(b_{i}^{*})\big{|}\,d\mu(t)+\sum_{i=1}^{r}\int_{A_{i}}\big{|}a_{i}^{**}(b_{i}^{*})-||a_{i}^{**}||\big{|}\,d\mu(t)+\frac{\epsilon}{3}\] \[< \sum_{i=1}^{r}\int_{A_{i}}||\phi(t)-a_{i}^{**}||\,d\mu(t)+\frac{\epsilon}{3}+\frac{\epsilon}{3}\] \[= \int_{G}||\phi(t)-\psi_{N}(t)||\,d\mu(t)+\frac{2\epsilon}{3}\] \[< \epsilon.\] Thus \(||\theta_{\square}(\phi)||=||\phi||_{L^{1}(G,\mathcal{A}^{**})}\), and \(\theta_{\square},\theta_{\diamond}\) are isometries. Now we prove that \(\theta_{\square}\) and \(\theta_{\diamond}\) are algebra homomorphisms. For \(\phi_{1},\phi_{2}\in L^{1}(G,(\mathcal{A}^{**},\square))\) and \(f\in L^{\infty}(G,\mathcal{A}^{*})\), we have \[\theta_{\square}(\phi_{1}*\phi_{2})(f) = \int\left\langle\phi_{1}*\phi_{2}(t),f(t)\right\rangle d\mu(t)\] \[= \int\!\!\int\left\langle\phi_{1}(s)\phi_{2}(s^{-1}t),f(t)\right\rangle d\mu(t)\,d\mu(s)\] \[= \int\big{\langle}\phi_{1}(s),(\theta_{\square}(\phi_{2})f)(s)\big{\rangle}\,d\mu(s)\] \[= \theta_{\square}(\phi_{1})(\theta_{\square}(\phi_{2})f)\] \[= \theta_{\square}(\phi_{1})\,\square\,\theta_{\square}(\phi_{2})(f).\] Thus \(\theta_{\square}(\phi_{1}*\phi_{2})=\theta_{\square}(\phi_{1})\,\square\,\theta_{\square}(\phi_{2})\); hence \(\theta_{\square}\) is a homomorphism of algebras. Similarly, \(\theta_{\diamond}\) is a homomorphism.
## 4. The bidual of \(L^{1}(G,\mathcal{A})\) for compact group \(G\)
Hereafter in this section, we assume \(G\) to be a compact Hausdorff group with normalized Haar measure \(\mu\), and \(\mathcal{A}\) will denote a Banach algebra, unless stated otherwise. **Lemma 4.1**.: _If \(f\in L^{\infty}(G,\mathcal{A}^{*})\) and \(\phi\in L^{1}(G,\mathcal{A})\), then \(f_{\phi}(t)=\int(f(st))_{\phi(s)}\,ds\) for all \(t\in G\)._ Proof.: Suppose \(\psi\in L^{1}(G,\mathcal{A})\). Then \[f_{\phi}(\psi) = f(\phi*\psi)\] \[= \int\left\langle f(t),\phi*\psi(t)\right\rangle dt\] \[= \int\Big{\langle}f(t),\int\phi(s)\psi(s^{-1}t)ds\Big{\rangle}dt\] \[= \int\!\!\int\left\langle f(st),\phi(s)\psi(t)\right\rangle dt\,ds\] \[= \int\Big{\langle}\int(f(st))_{\phi(s)}ds,\psi(t)\Big{\rangle}dt.\] Since \(\psi\) was arbitrary, we conclude that \(f_{\phi}(t)=\int(f(st))_{\phi(s)}ds\). **Lemma 4.2**.: _If \(f\in L^{\infty}(G,\mathcal{A}^{*})\) and \(\phi\in L^{1}(G,\mathcal{A})\), we have \({}_{\phi}f(t)=\int{}_{\phi(t^{-1}s)}(f(s))ds\)._ Proof.: Proved similarly to the previous lemma. We say that \(f\in L^{\infty}(G,X)\) is left uniformly continuous if \(||L_{a}(f)-f||\to 0\) as \(a\to e\), where \(L_{a}(f)(x)=f(ax)\). Similarly, \(f\) is said to be right uniformly continuous if \(||R_{a}(f)-f||\to 0\) as \(a\to e\). Let \(LUC(G,X)\) and \(RUC(G,X)\) denote the collections of all left/right uniformly continuous functions in \(L^{\infty}(G,X)\).
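Identities like Lemma 4.1 are easy to sanity-check in the simplest discrete setting. The sketch below is a toy model (assuming NumPy is available): \(G=\mathbb{Z}_{N}\) with the normalized counting measure and scalar coefficients \(\mathcal{A}=\mathcal{A}^{*}=\mathbb{R}\), so the module action is just multiplication. It verifies \(\langle f_{\phi},\psi\rangle=\langle f,\phi*\psi\rangle\) with \(f_{\phi}(t)=\frac{1}{N}\sum_{s}\phi(s)f(s+t)\), the scalar analogue of the lemma:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                   # toy compact group G = Z_N, normalized counting measure
f, phi, psi = rng.standard_normal((3, N))

def conv(a, b):
    # (a*b)(t) = (1/N) sum_s a(s) b(t - s), indices taken mod N
    return np.array([np.mean([a[s] * b[(t - s) % N] for s in range(N)]) for t in range(N)])

def pair(g, h):
    # dual pairing <g, h> = (1/N) sum_t g(t) h(t)
    return np.mean(g * h)

# scalar version of Lemma 4.1: f_phi(t) = (1/N) sum_s phi(s) f(s + t)
f_phi = np.array([np.mean(phi * np.roll(f, -t)) for t in range(N)])

print(np.isclose(pair(f_phi, psi), pair(f, conv(phi, psi))))  # True
```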
Clearly \(LUC(G,X)\) and \(RUC(G,X)\) are closed subspaces. **Lemma 4.3**.: _Continuous functions are left/right uniformly continuous, i.e., \(C(G,X)\subseteq LUC(G,X)\cap RUC(G,X)\) for any Banach space \(X\) and a compact group \(G\)._ Proof.: The proof runs exactly as in the scalar-valued case. See, for instance, [12, Prop. 2.6]. **Lemma 4.4**.: _Let \(G\) be a compact group and \(X\) a Banach space. If \(f\in C(G,X)\), then \(||L_{z}(f)-f||_{L^{1}(G,X)}\) and \(||R_{z}(f)-f||_{L^{1}(G,X)}\) tend to \(0\) as \(z\to e\)._ Proof.: Fix a compact neighborhood \(V\) of \(e\). Let \(K=(\mathrm{supp}(f))V^{-1}\cap V(\mathrm{supp}(f))\). Then \(K\) is compact and \(L_{z}(f)\) is supported in \(K\) when \(z\in V\). Hence \(||L_{z}(f)-f||_{L^{1}(G,X)}\leq\mu(G)||L_{z}f-f||_{L^{\infty}(G,X)}\to 0\) as \(z\to e\). **Lemma 4.5**.: _Let \(\mathcal{A}\) be any Banach algebra. If \(f\in L^{\infty}(G,\mathcal{A}^{*})\) and \(\phi\in C(G,\mathcal{A})\), then \(f_{\phi}\in LUC(G,\mathcal{A}^{*})\)._ Proof.: For any \(z\in G\), it is straightforward to verify that \(L_{z}(f_{\phi})=f_{\widetilde{L_{z}}\phi}\), where \(\widetilde{L_{z}}\phi(s)=\phi(sz^{-1})\). For any \(t\in G\), \[||L_{z}(f_{\phi})(t)-f_{\phi}(t)|| = \Big{|}\Big{|}\int(f(st))_{(\widetilde{L_{z}}\phi-\phi)(s)}\,d\mu(s)\Big{|}\Big{|}\] \[\leq ||f||_{L^{\infty}(G,\mathcal{A}^{*})}\int||\widetilde{L_{z}}\phi(s)-\phi(s)||\,d\mu(s)\] \[= ||f||_{L^{\infty}(G,\mathcal{A}^{*})}\int||\phi(sz^{-1})-\phi(s)||\,d\mu(s).\] Thus \(||L_{z}(f_{\phi})-f_{\phi}||_{L^{\infty}(G,\mathcal{A}^{*})}\leq||f||_{L^{\infty}(G,\mathcal{A}^{*})}\,||R_{z^{-1}}\phi-\phi||_{L^{1}(G,\mathcal{A})}\), and by the previous lemma \(f_{\phi}\) is left uniformly continuous. For \(f\in L^{\infty}(G,X)\) and \(t\in G\), write \((tf)(x)=f(xt)\). **Lemma 4.6**.: _Let \(m\in Z_{s}\) and \(f\in L^{\infty}(G,\mathcal{A}^{*})\). Then \(f_{m}\in LUC(G,\mathcal{A}^{*})\) and \(f_{m}(t)(a)=\left\langle m,f^{(t,a)}\right\rangle\) for each \(t\in G\) and \(a\in\mathcal{A}\), where \(f^{(t,a)}(s)={}_{a}(f(st))\)._ Proof.: Using Goldstine's theorem and the density of compactly supported functions in \(L^{1}(G,\mathcal{A})\), we choose a net \(\{\phi_{\alpha}\}_{\alpha\in\wedge}\) of compactly supported functions converging to \(m\) in the \(weak^{*}\)-topology of \(L^{1}(G,\mathcal{A})^{**}\). Then, \[\langle n,f_{m}\rangle = \langle m\diamond n,f\rangle=\langle m\,\square\,n,f\rangle=\langle m,{}_{n}f\rangle\] \[= \lim_{\alpha}\langle\phi_{\alpha},{}_{n}f\rangle=\lim_{\alpha}\langle n,f_{\phi_{\alpha}}\rangle\] for each \(n\in L^{1}(G,\mathcal{A})^{**}\). Hence, \(\{f_{\phi_{\alpha}}\}\) converges weakly to \(f_{m}\). Using Mazur's lemma, we can pass to suitable convex combinations of \(\{\phi_{\alpha}\}_{\alpha\in\wedge}\) and thus assume that \(f_{\phi_{\alpha}}\) converges to \(f_{m}\) in norm. But \(f_{\phi_{\alpha}}\in LUC(G,\mathcal{A}^{*})\) by the previous lemma; thus it follows that \(f_{m}\in LUC(G,\mathcal{A}^{*})\). Further, for any \(t\in G\) and \(a\in\mathcal{A}\), \[f_{m}(t)(a) = \lim_{\alpha}\int(f(st))_{\phi_{\alpha}(s)}(a)\,d\mu(s)\] \[= \lim_{\alpha}\int f(st)(\phi_{\alpha}(s)a)\,d\mu(s)\] \[= \lim_{\alpha}\int\left\langle f(st),\phi_{\alpha}(s)a\right\rangle d\mu(s)\] \[= \lim_{\alpha}\left\langle\phi_{\alpha},f^{(t,a)}\right\rangle\] \[= \left\langle m,f^{(t,a)}\right\rangle.\] **Lemma 4.7**.: _Let \(\mathcal{A}\) be a unital Banach algebra.
If \(m\in Z_{s}\) is such that \(m(f)=0\) for all \(f\in C(G,\mathcal{A}^{*})\), then \(m(f)=0\) for all \(f\in L^{\infty}(G,\mathcal{A}^{*})\). Further, if \(\mathcal{A}^{*}\) has the RNP, then \(m=0\)._ Proof.: Let \(f\in L^{\infty}(G,\mathcal{A}^{*})\). Using Lemma 4.6, for \(\epsilon>0\) we choose an open, relatively compact set \(V\subset\{x:||f_{m}(x)-f_{m}(e)||<\epsilon\}\). Consider \(v=\frac{1_{V}}{\mu(V)}\otimes\mathds{1}_{\mathcal{A}}\). One can easily see that \({}_{v}f\in C(G,\mathcal{A}^{*})\). Hence \({}_{v}(f_{m})(\phi)=({}_{v}f)_{m}(\phi)=m({}_{\phi*v}f)=0\), because \({}_{\phi*v}f\in C(G,\mathcal{A}^{*})\) for each \(\phi\in L^{1}(G,\mathcal{A})\). Thus \({}_{v}(f_{m})=0\), and \[|m(f)| = |f_{m}(e)(\mathds{1}_{\mathcal{A}})-{}_{v}(f_{m})(e)(\mathds{1}_{\mathcal{A}})|\] \[= \frac{1}{\mu(V)}\left|\int_{V}\left(f_{m}(e)(\mathds{1}_{\mathcal{A}})-f_{m}(x)(\mathds{1}_{\mathcal{A}})\right)dx\right|\] \[\leq \frac{1}{\mu(V)}\sup_{x\in V}||f_{m}(e)-f_{m}(x)||\,\mu(V)\] \[\leq \epsilon.\] Hence \(m(f)=0\) for all \(f\in L^{\infty}(G,\mathcal{A}^{*})\), proving the first part of the assertion. Now, if \(\mathcal{A}^{*}\) has the RNP, then \(L^{\infty}(G,\mathcal{A}^{*})\) is the full dual space of \(L^{1}(G,\mathcal{A})\), and hence \(m=0\). Let \(S^{\infty}(G,\mathcal{A}^{*})\) denote the closure of the space of all \(\mu\)-simple functions in \(L^{\infty}(G,\mathcal{A}^{*})\). Clearly \(C(G,\mathcal{A}^{*})\) is contained in \(S^{\infty}(G,\mathcal{A}^{*})\). For each \(\nu\in M_{r}(G,\mathcal{A})\) we define \(\varphi_{\nu}\) on \(\mu\)-simple functions in \(S^{\infty}(G,\mathcal{A}^{*})\) by \[\Big{\langle}\varphi_{\nu},\sum_{i=1}^{r}\chi_{E_{i}}\otimes a_{i}^{*}\Big{\rangle}=\sum_{i=1}^{r}\left\langle\nu(E_{i}),a_{i}^{*}\right\rangle.\] It is an easy exercise to verify that this action is well defined (independent of the representation of the simple function) and linear. Further, it can be verified that \(||\varphi_{\nu}||=||\nu||\). Hence \(M_{r}(G,\mathcal{A})\) sits inside \(S^{\infty}(G,\mathcal{A}^{*})^{*}\) isometrically. **Lemma 4.8**.: _Let \(\mathcal{A}\) be a reflexive Banach algebra. Suppose that \(m\in Z(L^{1}(G,\mathcal{A})^{**})\) and \(\nu\in M_{r}(G,\mathcal{A})\) are such that there exists a sequence \(\{\nu_{n}\}\) in \(L^{1}(G,\mathcal{A})\) converging to \(\nu\) in the \(\sigma(M_{r}(G,\mathcal{A}),C(G,\mathcal{A}^{*}))\) topology. Then \(m\diamond n_{\nu}\in L^{1}(G,\mathcal{A})\) for any continuous extension \(n_{\nu}\) of \(\nu\) to \(L^{1}(G,\mathcal{A})^{*}\)._ Proof.: Let us first take \(u\in L^{1}(G,\mathcal{A})\). By Theorem 1, the restriction of \(m\) to \(C(G,\mathcal{A}^{*})\) is given by a measure \(\eta\in M_{r}(G,\mathcal{A})\). For \(f\in C(G,\mathcal{A}^{*})\), \[\langle m\,\square\,u,f\rangle =\langle m,{}_{u}f\rangle\] \[=\langle\eta,{}_{u}f\rangle.\] Since regular \(\mathcal{A}\)-valued measures are weakly compact, in the sense that the associated operator \(C(K)\to X\) is weakly compact (see [8, Th. 5.2]), and the space of weakly compact \(\mathcal{A}\)-valued measures forms a Banach algebra with respect to convolution (see [14, Th. 3.2]), we conclude that \(\langle m\,\square\,u,f\rangle=\langle\eta*u,f\rangle\) for all \(f\in C(G,\mathcal{A}^{*})\). Notice that \(m\,\square\,u\in Z(L^{1}(G,\mathcal{A})^{**})\).
Thus, by Lemma 4.7, \(m\,\square\,u=\eta*u\in L^{1}(G,\mathcal{A})\) (because \(\eta*u\) is a regular measure of bounded variation and \(\eta*u\ll\mu\)). Now, if \(f\in L^{\infty}(G,\mathcal{A}^{*})\), then \(f_{m}\in LUC(G,\mathcal{A}^{*})\) by Lemma 4.6, and hence \[\langle m\diamond n_{\nu},f\rangle =n_{\nu}(f_{m})\] \[=\lim_{n}\nu_{n}(f_{m})\] \[=\lim_{n}m\diamond\nu_{n}(f)\] \[=\lim_{n}m\,\square\,\nu_{n}(f).\] But \(m\,\square\,\nu_{n}\in L^{1}(G,\mathcal{A})\) as proved above, and \(L^{1}(G,\mathcal{A})\) is WSC; hence \(m\diamond n_{\nu}\in L^{1}(G,\mathcal{A})\). **Corollary 4.9**.: _Let \(\mathcal{A}\) be a unital reflexive Banach algebra and \(K\) a closed subgroup of \(G\) such that \(G/K\) is metrizable. Then \(m\diamond\mu_{K}\in L^{1}(G,\mathcal{A})\), where \(\mu_{K}\) is the \(\mathcal{A}\)-valued vector measure on the Borel subsets of \(G\) such that \(\mu_{K}(E)=\frac{\mu(E\cap K)}{\mu(K)}\mathds{1}_{\mathcal{A}}\) for each Borel subset \(E\) of \(G\)._ Proof.: Since \(G/K\) is metrizable, we can choose a decreasing sequence \(\{U_{n}\}\) of neighborhoods of \(K\) such that \(K=\cap_{n}U_{n}\). Let \(u_{n}=\frac{1}{\mu(U_{n})}\chi_{U_{n}}\mathds{1}_{\mathcal{A}}\). Clearly \(u_{n}\in L^{1}(G,\mathcal{A})\) and \(u_{n}\to\mu_{K}\) in the \(\sigma(M_{r}(G,\mathcal{A}),C(G,\mathcal{A}^{*}))\) topology. By the previous lemma, \(m\diamond\mu_{K}\in L^{1}(G,\mathcal{A})\). For a subgroup \(K\) of \(G\), we say that \(f\in L^{\infty}(G,\mathcal{A}^{*})\) is right \(K\)_-periodic_ if \(kf=f\) for all \(k\in K\). **Lemma 4.10**.: _Let \(K\) be a compact subgroup of \(G\) and \(m\in Z(L^{1}(G,\mathcal{A})^{**})\). If \(f\in L^{\infty}(G,\mathcal{A}^{*})\) is right \(K\)-periodic, then \(\langle m,f\rangle=\langle m\diamond\mu_{K},f\rangle\)._ Proof.: Since \(f\) is \(K\)-periodic, by Lemma 4.6 \(f_{m}\) is also \(K\)-periodic. Hence, \[\langle m,f\rangle =f_{m}(e)(\mathds{1}_{\mathcal{A}})\] \[=\langle\mu_{K},f_{m}\rangle\] \[=\langle m\diamond\mu_{K},f\rangle.\] Now we have all the required tools to prove that, for a compact group \(G\) and a unital reflexive Banach algebra \(\mathcal{A}\), the generalized group algebra \(L^{1}(G,\mathcal{A})\) is left strongly Arens irregular. **Theorem 11**.: Let \(G\) be a compact group and \(\mathcal{A}\) be a unital reflexive Banach algebra. Then, \[Z(L^{1}(G,\mathcal{A})^{**})=L^{1}(G,\mathcal{A}).\] Proof.: The inclusion \(L^{1}(G,\mathcal{A})\subset Z(L^{1}(G,\mathcal{A})^{**})\) holds trivially. To prove the reverse inclusion, let \(m\in Z(L^{1}(G,\mathcal{A})^{**})\) and let \(\nu_{m}\in M_{r}(G,\mathcal{A})\) denote its restriction to \(C(G,\mathcal{A}^{*})\). By Lemma 4.7, it suffices to show that \(\nu_{m}\in L^{1}(G,\mathcal{A})\). Let \(B\) be a compact subset of \(G\) such that \(\mu(B)=0\). We choose a decreasing sequence of open sets \(U_{n}\supset B\) such that \((\mu+|\nu_{m}|)(U_{n}\setminus B)\to 0\). By induction, we construct a sequence \(\{\phi_{n}\}\) in \(C(G)\) such that \(0\leq\phi_{n}\leq 1\), \(\phi_{n}(x)=1\) for \(x\in B\) and \(\phi_{n}(x)=0\) for \(x\notin U_{n}\cap V_{n-1}\) (where \(V_{0}=G\), \(V_{n}=\{y:\phi_{n}(y)\neq 0\}\), \(n=1,2,\ldots\)). For each \(n\), \[d_{n}(x,y)=||x\phi_{n}-y\phi_{n}||_{\infty}\] defines a continuous pseudometric on \(G\). If \(K_{n}=\{x\in G:d_{n}(x,e)=0\}\) and \(K=\cap_{n=1}^{\infty}K_{n}\), then \(G/K\) is metrizable, and hence \(m\diamond\mu_{K}\in L^{1}(G,\mathcal{A})\) by Corollary 4.9.
Consequently, by Lemma 4.10, \(\langle m,f\rangle=\langle m\diamond\mu_{K},f\rangle\) for each right \(K\)-periodic function \(f\in L^{\infty}(G,\mathcal{A}^{*})\). Since \(\{V_{n}\}\) is decreasing, \(\mu(V_{n})\to\mu(B)=0\). Further, for each \(a^{*}\in\mathcal{A}^{*}\), the function \(\chi_{V_{n}}\otimes a^{*}\) is right \(K\)-periodic. Thus, \[\nu_{m,a^{*}}(V_{n})=\langle\nu_{m},\chi_{V_{n}}\otimes a^{*}\rangle=\langle m\diamond\mu_{K},\chi_{V_{n}}\otimes a^{*}\rangle\to 0\qquad(\because m\diamond\mu_{K}\in L^{1}(G,\mathcal{A})).\] Since \(B\subset V_{n}\subset U_{n}\), we have \(\nu_{m,a^{*}}(V_{n})\to\nu_{m,a^{*}}(B)\). Thus \(\nu_{m,a^{*}}(B)=0\) for each \(a^{*}\in\mathcal{A}^{*}\). Hence \(\nu_{m}(B)=0\), and by the regularity of \(\nu_{m}\) we conclude that \(\nu_{m}\ll\mu\), i.e., \(\nu_{m}\in L^{1}(G,\mathcal{A})\), as required. **Corollary 4.11**.: _Let \(G\) be a compact abelian group and \(\mathcal{A}\) be any reflexive Banach algebra (not necessarily unital). Then \(L^{1}(G,\mathcal{A})\) is SAI._ Proof.: The preceding theorem combined with Corollary 3.1 proves the assertion.
## 5. Bidual of \(L^{1}(G,\mathcal{A})\) for non-reflexive Banach algebra \(\mathcal{A}\)
As noticed in the previous sections, when \(\mathcal{A}^{*}\) does not have the RNP, the dual \(L^{1}(G,\mathcal{A})^{*}\) strictly contains a copy of \(L^{\infty}(G,\mathcal{A}^{*})\). This makes it difficult to access the topological center, and it would be too demanding to expect \(L^{1}(G,\mathcal{A})\) to be SAI in such cases, even when \(\mathcal{A}\) itself is SAI. But the pseudo-center seems accessible in certain scenarios. We shall see that, in certain situations, the elements of the pseudo-center can be identified with \(Z(\mathcal{A}^{**})\)-valued measures. Let \(\mathcal{A}\) be a Banach algebra and \(a^{*}\in\mathcal{A}^{*}\) a fixed element. For each \(m\in Z_{s}(L^{1}(G,\mathcal{A})^{**})\), there is an associated map \(\Delta_{m,a^{*}}:C(G)\to\mathcal{A}^{*}\) defined as \(\Delta_{m,a^{*}}(f)(a)=m(f\otimes{}_{a}a^{*})\). **Lemma 5.1**.: _Let \(G\) be a compact Hausdorff group, \(\mathcal{A}\) a Banach algebra, and \(m\in Z_{s}(L^{1}(G,\mathcal{A})^{**})\) such that \(\Delta_{m,a^{*}}\) is compact for every \(a^{*}\in\mathcal{A}^{*}\). Then the restriction of \(m\) to \(C(G,\mathcal{A}^{*})\) is a \(Z(\mathcal{A}^{**})\)-valued measure._ Proof.: Let \(m\in Z_{s}(L^{1}(G,\mathcal{A})^{**})\) be arbitrary. We denote the restriction of \(m\) to \(C(G,\mathcal{A}^{*})\) by the measure \(\mu_{m}\in M_{r}(G,\mathcal{A}^{**})\). By the regularity of \(\mu_{m}\), it suffices to show that \(\mu_{m}(E)\in Z(\mathcal{A}^{**})\) for any open subset \(E\) of \(G\). Let \(E\) be an open subset of \(G\). For any \(a^{*}\in\mathcal{A}^{*}\) and \(a^{**}\in\mathcal{A}^{**}\), we have \[\mu_{m}(E)\,\square\,a^{**}(a^{*})=\mu_{m}(E)({}_{a^{**}}a^{*}).\] Let \(b^{*}={}_{a^{**}}a^{*}\). Invoking the Riesz representation theorem (Theorem 1), we have \[\mu_{m}(E)\,\square\,a^{**}(a^{*})=\mu_{m,b^{*}}\left(E\right),\] where \(\mu_{m,b^{*}}\) is the regular Borel measure corresponding to the linear functional \(L_{m,b^{*}}:C(G)\to\mathbb{C}\) defined as \(L_{m,b^{*}}(f)=m(f\otimes b^{*})\).
Fix \(\epsilon>0\). By regularity, there exists a compactly supported continuous function \(f\) such that \(0\leq f\leq\chi_{E}\) and \(\left|\mu_{m,b^{*}}(E)-\int f\,d\mu_{m,b^{*}}\right|<\epsilon\). But \(\int f\,d\mu_{m,b^{*}}=m(f\otimes b^{*})=m(f\otimes{}_{a^{**}}a^{*})\). Now notice that for any \(\phi\in L^{1}(G,\mathcal{A})\), we have \[(f\otimes{}_{a^{**}}a^{*})(\phi)=\int\left\langle f(s)\,{}_{a^{**}}a^{*},\phi(s)\right\rangle ds=\left\langle{}_{a^{**}}a^{*},\int f(s)\phi(s)\,ds\right\rangle.\] Using the module version of Cohen's factorization theorem (theorem 5), we know that \(L^{1}(G)*L^{\infty}(G)=C_{lu}(G)\), and since compactly supported continuous functions are left/right uniformly continuous, we can choose \(g\in L^{1}(G)\) and \(h\in L^{\infty}(G)\) such that \(g*\check{h}=\check{f}\). Let \(n\) denote any Hahn–Banach extension of \(g\otimes a^{**}\) to \(L^{1}(G,\mathcal{A})^{**}\). Now for any net \(\{a_{\gamma}\}_{\gamma\in\Lambda}\) in \(\mathcal{A}\) converging to \(a^{**}\) in the \(w^{*}\) topology of \(\mathcal{A}^{**}\), we see that \(g\otimes a_{\gamma}\) converges to \(g\otimes a^{**}\) in the \(\sigma(L^{1}(G,\mathcal{A})^{**},L^{\infty}(G,\mathcal{A}^{*}))\) topology and hence, \[{}_{n}(h\otimes a^{*})(\phi) =n((h\otimes a^{*})_{\phi})\] \[=\left\langle g\otimes a^{**},(h\otimes a^{*})_{\phi}\right\rangle\] \[=\lim_{\gamma}\left\langle g\otimes a_{\gamma},(h\otimes a^{*})_{\phi}\right\rangle\] \[=\lim_{\gamma}\int g(t)(h\otimes a^{*})_{\phi}(t)(a_{\gamma})\,dt\] \[=\lim_{\gamma}\int g(t)\int\left\langle h(st)a^{*},\phi(s)a_{\gamma}\right\rangle ds\,dt\] \[=\lim_{\gamma}\left\langle a^{*},\left(\int g*\check{h}(s^{-1})\phi(s)\,ds\right)a_{\gamma}\right\rangle\] \[=\lim_{\gamma}\left\langle a^{*},\left(\int f(s)\phi(s)\,ds\right)a_{\gamma}\right\rangle\] \[=\left\langle{}_{a^{**}}a^{*},\int f(s)\phi(s)\,ds\right\rangle.\] Thus, we see that \({}_{n}(h\otimes a^{*})=f\otimes{}_{a^{**}}a^{*}\), and hence \[|\mu_{m}(E)\,\square\,a^{**}(a^{*})-m\,\square\,n(h\otimes a^{*})|<\epsilon.\] Now we turn to the second Arens product and show that it also lies in an arbitrarily small neighborhood of \(m\,\square\,n(h\otimes a^{*})\). Notice that \[\mu_{m}(E)\diamond a^{**}(a^{*})=\lim_{\gamma}\mu_{m}(E)\,\square\,a_{\gamma}(a^{*})=\lim_{\gamma}\mu_{m,b^{*}_{\gamma}}(E)=\lim_{\gamma}\int\chi_{E}\,d\mu_{m,b^{*}_{\gamma}},\] where \(b^{*}_{\gamma}={}_{a_{\gamma}}a^{*}\). Since \(\Delta_{m,a^{*}}\) is compact, its adjoint \(\Delta^{*}_{m,a^{*}}:\mathcal{A}^{**}\to M_{r}(G)\), given by \(\Delta^{*}_{m,a^{*}}(a^{**})=\mu_{m,{}_{a^{**}}a^{*}}\), satisfies the property that for any bounded \(w^{*}\)-convergent net \(\{a^{**}_{\lambda}\}\) in \(\mathcal{A}^{**}\), the net \(\{\Delta^{*}_{m,a^{*}}(a^{**}_{\lambda})\}\) converges in norm.
Hence, \(\lim_{\gamma}\int\chi_{E}\,d\mu_{m,b^{*}_{\gamma}}=\mu_{m,b^{*}}(E)\), and therefore \[\left|\mu_{m}(E)\diamond a^{**}(a^{*})-\int f\,d\mu_{m,b^{*}}\right|=\left|\mu_{m,b^{*}}(E)-\int f\,d\mu_{m,b^{*}}\right|<\epsilon.\] Thus, we see that \(|\mu_{m}(E)\diamond a^{**}(a^{*})-\mu_{m}(E)\,\square\,a^{**}(a^{*})|<2\epsilon\). Since \(\epsilon>0\), \(a^{*}\in\mathcal{A}^{*}\) and \(a^{**}\in\mathcal{A}^{**}\) were arbitrary, \(\mu_{m}(E)\diamond a^{**}=\mu_{m}(E)\,\square\,a^{**}\) for all \(a^{**}\in\mathcal{A}^{**}\), i.e., \(\mu_{m}(E)\in Z(\mathcal{A}^{**})\). 

[...] for each Borel subset \(E\) of \(G\) and each \(x^{*}\in X^{*}\). Since functions of the type \(\chi_{E}\otimes x^{*}\) span \(S^{\infty}(S,X^{*})\), we conclude that \(\phi_{n}\to\phi\) in the \(\sigma(L^{1}(S,X),S^{\infty}(S,X^{*}))\)-topology.
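For the reader's convenience, we recall the standard conventions behind the computations above; this is only a reminder following the usual iterated-limit definitions of the two Arens products, and may differ from the conventions fixed earlier in the paper in inessential ways. For \(a,b\in\mathcal{A}\), \(f\in\mathcal{A}^{*}\) and \(m,n\in\mathcal{A}^{**}\),

\[(f\cdot a)(b)=f(ab),\qquad(n\cdot f)(a)=n(f\cdot a),\qquad(m\,\square\,n)(f)=m(n\cdot f),\]
\[(a\cdot f)(b)=f(ba),\qquad(f\cdot m)(a)=m(a\cdot f),\qquad(m\diamond n)(f)=n(f\cdot m).\]

The (left) topological center is \(Z(\mathcal{A}^{**})=\{m\in\mathcal{A}^{**}:m\,\square\,n=m\diamond n\ \text{for all}\ n\in\mathcal{A}^{**}\}\), and \(\mathcal{A}\) is left strongly Arens irregular (SAI) precisely when \(Z(\mathcal{A}^{**})=\mathcal{A}\).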
2309.10074
Prominence Perceptions as a Heuristic in Contexts of Low Information
This study explores the concept of prominence as a candidate trait, understood as the perceived worthiness of attention candidates elicit from regular citizens in the context of low information elections. It proposes two dimensions of candidate prominence, political and public, operationalized as having held high visibility roles within the party and having social influence through social media presence. Employing a conjoint analysis experimental design, the study tests whether political and public prominence serve as heuristic mechanisms in low-information electoral settings by estimating conditional effects on respondents' self-assessed interest in politics, educational level and self-assessed ideological placement. The results contribute experimental evidence to support the hypothesis of differential heuristic choices by voters based on varying levels of perceived public and political prominence, conditional on voters' characteristics.
Esteban Villa-Turek
2023-09-18T18:41:30Z
http://arxiv.org/abs/2309.10074v1
# Prominence Perceptions as a Heuristic in Contexts of Low Information

Esteban Villa-Turek

_Abstract_ - This study explores the concept of prominence as a candidate trait, understood as the perceived worthiness of attention candidates elicit from regular citizens in the context of low information elections. It proposes two dimensions of candidate prominence, political and public, operationalized as having held high visibility roles within the party and having social influence through social media presence. Employing a conjoint analysis experimental design, the study tests whether political and public prominence serve as heuristic mechanisms in low-information electoral settings by estimating conditional effects on respondents' self-assessed interest in politics, educational level and self-assessed ideological placement. The results contribute experimental evidence to support the hypothesis of differential heuristic choices by voters based on varying levels of perceived public and political prominence, conditional on voters' characteristics.

## Introduction

Valence theory was initially introduced by Donald Stokes to account for all elements of electoral choice found empirically in the real world by having stable policy-oriented (_strong ideological focus_) choices and diffuse valence-oriented (_weak ideological focus_) choices (Stokes, 1963). It incorporates features of real life by embracing the fact that what usually drives voters' choices rests on potential performance or viability perceptions of the party or candidate to address and manage the most important issues according to the voters (Sanders et al., 2011). Such perceptions include perceptions of successful past performance, competence, or viability, and belong to the same cognitive factors that Stokes referred to, also called "fast and frugal heuristics" (Goldstein & Gigerenzer, 2002) in the psychology and decision making literature. These heuristics act as mental shortcuts taking advantage of the scarce information that may be available in most real-world contexts. For instance, voters may use the perceived competence of a party's leader as a mental shortcut to establish how likely the party will be to manage difficult situations in which all or nearly all citizens share similar goals (e.g., defending the country from foreign attack, or avoiding recessions). A similar role is played by the attitudes towards the perceived ability of a party or candidate to navigate the most pressing needs of the society according to the majority of voters. Party identification may be especially important because it can serve as a heuristic when employed as a summary of issue stands and future policy actions, and/or as a sort of perceived competence track record between parties or candidates (Sanders et al., 2011). This is the case particularly in contexts of low-information elections: "(...) these elections are the rule, not the exception, in American Politics. Yes. Citizens are frequently asked to weigh in on races where the most effective piece of information--partisanship--is unavailable. These races include prominent positions, such as mayor, but also less visible positions, such as court clerk, public defender, school board members, city council members, and local authority positions, such as port commissioner and fire commissioner. Additionally, political primaries require citizens to adjudicate between candidates who are indistinguishable on party lines."
(Kam & Zechmeister, 2013, p. 971-972) The reliance on heuristics when voting, however, does not mean that valence-driven electoral choices are less rational, much less irrational. In fact, the definition of rationality that this study employs aims at a more nuanced understanding of rationality in uncertain contexts, where information is scarce and costly to obtain. This characterization of rationality finds grounding in the notion of low-information rationality (Simon, 1955) or, better yet, in models of ecological rationality (Goldstein & Gigerenzer, 2002), according to which heuristic-driven choices may be rational because the heuristics are positively correlated with the outcome being predicted (Goldstein & Gigerenzer, 2002). Given that recognition is valid due to the informational structure in which it is embedded, this study proposes a subjective perspective according to which the informational structure varies with each voter's particularities. To do so, it focuses on one aspect of recognition that has not been studied thus far regarding its implications for electoral races: a candidate's prominence. The study builds on Simon Munzert's (2018) novel approach to calculating politicians' importance using Wikipedia data, but only regarding the conceptualization he offers of political importance: "Political importance is considered to be the combination of _prominence_, subsuming characteristics that contribute to the popular perception of politicians, as well as _influence_, describing how well politicians are connected among their peers and their footprint in the political arena." (Munzert, 2018, p. 27). In this sense, prominence is a latent component of overall importance and can be thought of as the worthiness of general attention that a candidate elicits from the public (Munzert, 2018), which is crucial especially since a person's attention is inherently limited. The latter, plus the real-world challenge of low information elections, where every possible cue of candidate viability can be a decisive factor, warrants the study of a candidate's prominence as an objective recognition object, building on previous research on name recognition (Kam & Zechmeister, 2013). Therefore, the study will ask the following research questions: RQ1: do perceived levels of prominence drive voter choices? RQ2: is there any difference between perceptions of public prominence and political prominence? RQ3: is the difference attributable to informational structures that vary depending on voter characteristics like political interest or ideological position?

## Theory

Anthony Downs' _An Economic Theory of Democracy_ (Downs, 1957) introduced an economic approach to understanding democratic political processes through rational-choice models. His theory is based on a series of important assumptions, which include that all decisions are made centrally in the government, that the government has only two choices at a time, that the choices are independent of each other, that the framework is that of a two-party system, that parties know what the preferences of all voters are, and that voters know all possible governmental and party choices and their consequences (Downs, 1957, p. 54). The last assumption is key, as it implies perfect information based on which voters can calculate which party (or candidate) to support to maximize their preferences (Downs, 1957).
However, in Chapter 5 Downs introduces the notion of uncertainty as "any lack of sure knowledge about the course of past, present, future, or hypothetical events" (Downs, 1957, p. 77). Downs recognizes that uncertainty is important because it affects the confidence with which agents in the models make choices (Downs, 1957). In fact, Chapter 6 opens the door to a more plausible rational decision-making process that is rational not merely on account of being based on perfect information, but rather given its context of low and costly information. Downs argues that uncertainty makes voters fall into different types relative to the confidence they have in their electoral choices (Downs, 1957). This chapter therefore introduces an important alternative account of rational decision making that takes into account different levels of access to information on the voter's side (Simon, 1955) and that was later developed as valence theory, introduced by Donald Stokes to account for all elements of electoral choice found empirically in real-world contexts, as noted above (Stokes, 1963). This warranted the study of heuristics in the political realm based on psychological research about decision-making in low-information contexts and the apt use of heuristics to navigate the difficult access to information (Goldstein & Gigerenzer, 2002; Simon, 1955). Such is the case with the most basic and general heuristic, the recognition heuristic: "(...) heuristics (...) are (a) ecologically rational (i.e., they exploit structures of information in the environment), (b) founded in evolved psychological capacities such as memory and the perceptual system, (c) fast, frugal, and simple enough to operate effectively when time, knowledge, and computational might are limited, (d) precise enough to be modeled computationally, and (e) powerful enough to model both good and poor reasoning. We introduce this program of fast and frugal heuristics here with perhaps the simplest of all heuristics: the recognition heuristic." (Goldstein & Gigerenzer, 2002, p. 75) These recognition-enabled inferences are ecologically valid because they are embedded in a particular informational structure (like low-information elections), where a lack of relevant information is systematically distributed and therefore strongly correlated, in either direction, with the criterion being predicted (Goldstein & Gigerenzer, 2002). Since the criterion being predicted is unknown, for example the endowment of a university or the size of a city in a foreign country, the validity or strength of the recognition heuristic can be explained as a tripartite process in which a mediator is used as a source of information to infer the unknown criterion (Goldstein & Gigerenzer, 2002). Thus, the ecological validity is the relationship between the mediator and the unknown criterion, and the surrogate validity is the relationship between the mediator and the mind of the person making the inference (Goldstein & Gigerenzer, 2002). To clarify the latter and to express how good an inferential mechanism the recognition heuristic is, Goldstein & Gigerenzer (2002) conduct a recognition test counting all mentions of German cities with more than 100,000 inhabitants that appeared in the Chicago Tribune from 1985 until 1987, and compare it with a similar study performed in Austria with mentions of the largest American cities in Die Zeit. The results can be found in Figure 1.
**Figure 1 - "Ecological correlation, surrogate correlation, and recognition** The first value is for American cities and the German news- paper Die Zeit as mediator, and the second value is for German cities and the Chicago Tribune as mediator. Note that the recognition validity is expressed, for comparability, as a correlation (between the number of people who recognize the name of a city and its population)." (Goldstein & Gigerenzer, 2002, p. 86) In Figure 1, the first correlation value for each relationship represents the results of a test for American cities mentioned in a German news outlet, and the second represents the results of the test for German cities mentioned in an American news outlet. As can be seen, the ecological and surrogate correlations are stronger than the recognition correlation, which is the ultimate inferential task of interest. This seems natural as the recognition is mediated by the number of times the cities appeared in the newspaper, meaning that most of the participants recognized accurately the largest cities in the newspapers, the surrogate correlation, as it is is the only direct cognitive contact that the respondents had with any information regarding the cities. As the relationship grows more indirect, i.e. where people are not directly recognizing anything, the ecological correlation decreases, and although the final recognition correlation is the lowest of them all, partly because of the uncertainty surrounding it, the correlation coefficient is nevertheless noteworthy. In the realm of political science, valence attributes seem to drive voter choice more often than not. Empirical evidence has found support for the coexistence of both valence and spatial cognitive processes in voting, with the caveat that the direct effects of valence considerations on electoral choice tend to be stronger than their non-valence counterparts (Sanders et al., 2011). Such is the case, for example, of the recognition heuristic, which has been rightfully identified as a crucial driver of choice in low-information voting contexts (Kam & Zechmeister, 2013; Panagopoulos & Green, 2008). Furthermore, forecasts based on simple recognition heuristics have been found to be accurate in contexts of multiparty elections regarding smaller, more obscure political parties (Gaissmaier & Marewski, 2011), a finding that may well apply to contexts of low information elections in bipartisan systems regarding obscure candidates or nominces, like the United States. For instance, where party affiliation determines a significant part of electoral outcomes, candidates often find themselves juggling a wide array of attributes, both policy and non-policy related, in order to get elected (or reelected), despite belonging to either one of the principal political parties (Ansolabechere et al., 2001). Another example of a candidate's valence attribute driving voter support can be seen in gender among Democratic candidates, resulting in female Democratic candidates eliciting more support among liberal voters in low-information contexts (Mcdermott, 1997). More recently, studies on the effect of personalization strategies of candidates using social media have found that such strategies do elicit a higher awareness of a candidate conditional on voter characteristics, therefore increasing the likelihood of voters' heuristics on candidates' personal traits (McGregor, 2017). 
Regarding age, a recent study found significant effects of a candidate's age as a heuristic when the candidate shares the same party affiliation and is closer in age to the voter; more importantly, the heuristic's effects vary with contextual and voter-specific characteristics (Webster & Pierce, 2019). In a comparative setting, studies in Brazilian low-information elections seem to support the idea of voters using heuristics related to the personal qualification of the candidates, finding general support for candidates with the title of "doctor", but differential support (or lack thereof) when the candidate's title is "pastor" (Boas, 2014). In studying a specific form of the recognition heuristic, name recognition, Kam & Zechmeister (2013) argue that there are two causal pathways through which name recognition influences voters' candidate support. The first causal pathway is the direct one and draws from the psychology and consumer marketing literature on mere exposure. In short, the direct causal pathway suggests that voters will tend to favor the candidates to which they have been exposed the most (Kam & Zechmeister, 2013). The indirect causal pathway, on the other hand, draws from the literature on decision science and posits that the recognition of a candidate allows voters to make inductive inferences about the candidate (Kam & Zechmeister, 2013). In the case of electoral processes where the recognition heuristic plays a role, the correlation between candidate recognition and the ecologically valid prediction is positive, and the inferences made have been found to relate to the candidate's viability, rather than to her traits or experience (Kam & Zechmeister, 2013). In fact, name recognition seems to drive electoral choices in low information elections because recognition tends to translate into higher candidate support (Kam & Zechmeister, 2013). Nevertheless, the more interesting finding relates to the plausible interplay between name recognition and other factors like incumbency, appearance, partisanship, ethnicity, prominence, etc. This is important because, differently from what research on the recognition heuristic has contributed, electoral choices are a _matter of taste or judgment_ and not strictly probabilistic inferences. This allows the recognition heuristic to acquire a more compensatory nature in the presence of other cues that could potentially be more germane to the electoral choice at hand, thus diminishing the indirect effect of recognition on voter choice but not eliminating it (Kam & Zechmeister, 2013). This possibility opens the door for a more nuanced multi-dimensional approach to studying voter heuristics in which several cues about competing candidates are evaluated simultaneously, thus resembling the complex choices that voters actually face. Given the tripartite structure of the recognition heuristic, the study proposes a differential ecological validity conditional on voter characteristics. For instance, if a voter is very interested in politics, she will access a different informational structure with a distinct ecological validity than a voter who is not, which also implies that particular cues will have a differential effect on overall candidate support. To assess how voter characteristics may determine which cues have a larger effect on electoral choices, the study introduces a distinction of prominence as a component of a candidate's overall importance.
Thus, one variant of prominence will be strictly political, aimed at signaling political expertise or competency by indicating whether the candidate has previously held a more obscure public office, like city council member (yielding _low_ political prominence), or a highly visible one, like governor (_high_ political prominence). The other variant will be more general and intended to signal prominence to the general public by indicating how many Twitter followers a candidate has.

## Methodology

The study employs a conjoint analysis methodology, following the description and motivations presented in Hainmueller et al. (2014). Conjoint analysis saw its birth in the early 1960s with an application of mathematical psychology that allowed for the measurement of simultaneous combinations of quantities of the same kind. It was later revisited as a way to measure consumer preferences and decision-making in the context of complex and multidimensional choice scenarios (Hainmueller et al., 2014). It has been widely used by marketing specialists to research consumption behavior, product development and preference formation, with diverse variations having been sparsely used in sociology as well, under the name of 'vignettes' or 'factorial surveys' (Hainmueller et al., 2014). The inclusion of conjoint analysis into political science serves the need for a better way to infer causality between experimental manipulations and observed phenomena (the composite treatment effects), as a tool to clearly identify causal effects of individual elements of any kind of treatment in a survey experiment. This task is normally hard to achieve with a traditional survey design, which only allows the researcher to estimate causal effects as a whole, but not the individual and specific effects of single elements of the experimental manipulation (Hainmueller et al., 2014). The latter takes on particular importance when planning an experiment that will explore the effect of single changes on multidimensional choices, such as when the researcher simulates a hard choice setting for respondents to choose the hypothetical immigrant's profile that will be granted entry into the country or to select the hypothetical political candidate's profile she would vote for (Hainmueller et al., 2014; Hainmueller & Hopkins, 2014). Other examples of conjoint analysis designs can be seen in Franchino & Zucchini (2015), with a similar political candidate experiment run with undergraduate students in Milan; in Hainmueller et al. (2015), with research on the Swiss population's attitude towards immigrants, which was later compared to a natural experiment caused by a referendum on the same subject with remarkably good results; or in Carnes and Lupu (2016), where, in a comparative study, North American, British and Argentinian respondents were asked to choose between candidates, with the aim of measuring whether they disliked working-class candidates (which they did not), to name just a few. Of particular interest is the variation of the conjoint analysis technique that randomizes the display of the different treatments of interest, allowing for a decomposition of the composite treatment effects.
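To make the randomization concrete, below is a minimal sketch of how such fully randomized paired profiles could be generated. This is an illustration, not the survey platform's actual implementation; the attribute lists are abbreviated versions of the design described below, and the equal split of the remaining display probability between the two parties is an assumption made here for illustration.

```python
import random

# Illustrative, abbreviated attribute lists; the full design below uses
# nine attributes with 45 levels in total.
ATTRIBUTES = {
    "Gender": ["Female", "Male"],
    "Occupation": ["Lawyer", "Teacher", "Athlete", "Actor", "Banker"],
    "Political prominence": [
        "No major role in the party",
        "Locally renowned member of the party",
        "State-wide renowned member of the party",
        "Nationally renowned member of the party",
    ],
    "Public prominence": [
        "210 followers on Twitter",
        "23,700 followers on Twitter",
        "1.3 million followers on Twitter",
    ],
}

def draw_party():
    # Per the design below, 'Party Identification not available' is displayed
    # with probability 0.66; splitting the remainder equally between the two
    # parties is an assumption.
    return random.choices(
        ["Republican", "Democrat", "Party Identification not available"],
        weights=[0.17, 0.17, 0.66],
    )[0]

def draw_profile():
    # Completely independent randomization: every level has a non-zero
    # display probability and no attribute pairs are prohibited.
    profile = {"Party": draw_party()}
    for attribute, levels in ATTRIBUTES.items():
        profile[attribute] = random.choice(levels)
    return profile

def choice_task():
    # One pairwise forced-choice task shows two independently drawn profiles.
    return draw_profile(), draw_profile()

left_profile, right_profile = choice_task()
```

Because every combination of levels can occur, the randomization itself is what licenses the nonparametric identification of the AMCE discussed next.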
By means of the identification of a causal quantity of interest, the average marginal component effect (the AMCE hereinafter), and by making a series of assumptions that necessarily hold because of the experimental design itself, the AMCE can and will be nonparametrically identified from the conjoint data collected in the survey experiment (Hainmueller et al., 2014). It is noteworthy that the nonparametric nature of the estimation of the AMCE allows the researcher to avoid resorting to assumptions of functional form, simplifying the statistical approach greatly, mainly because no assumptions about behavioral models of respondents need to be made in order to fit the observed data efficiently. Therefore, the conjoint analysis method does not need any assumption about the behavioral model under which respondents formed preferences and made their choices to allow for an efficient and, above all, unbiased AMCE identification (Hainmueller et al., 2014). A conjoint analysis has several other advantages as well, beyond the practical and convenient property of causal effect identification of individual components, as referred to above. As Hainmueller et al. (2014) explain, conjoint analysis provides, first, a sense of realism when presenting complex and multidimensional choice settings to respondents like the ones they would encounter in the real world. Second, it allows for a simple, cost-effective way of testing multiple hypotheses within the same experimental design. Third, and linked to the latter, it allows researchers to evaluate whether different theories have or lack explanatory power, by means of a single experiment with a single behavioral outcome that estimates the effect of multiple treatment elements at once. Fourth, the risk of social desirability bias in the respondents' stated choice preferences is significantly reduced, since the respondents can justify their choices by means of any of the numerous other treatment elements simultaneously at play. And fifth, conjoint analysis can exploit its marketing forecasting potential for practical problems, such as policy design, if, for instance, it was used to predict the most popular combination of policy elements in an upcoming reform (Hainmueller et al., 2014). By design, there are several assumptions to be made to estimate the AMCE correctly, all of which hold by the design of the conjoint analysis experiment itself, or can be tested with observed data (Hainmueller et al., 2014). Below are the most important and most pertinent ones for this research design, as explained by Hainmueller et al. (2014):

1. We first assume no carryover effects and stability, meaning that the current choice made by the respondent is not influenced by the last choice made by her, given the treatment effects presented in that choice task. She will always choose based on the same treatment when it appears, no matter what other treatments preceded the current task. It also means that potential outcomes remain stable across all possible choice tasks.
2. The second assumption is no profile-order effects. It allows researchers to ignore the order, if any, in which the different attributes are presented to the respondents, allowing the former to simply pool information of interest across profiles for estimation purposes. This assumption helps boost the efficiency of the conjoint analysis.
3. The third assumption is randomization of profiles and implies that the outcomes are statistically independent of the profiles.
By design, the conjoint analysis should present randomized attributes as profiles to the respondents, and therefore the choices they make will not be systematically related to the profiles they see. Moreover, each level of each attribute must have a non-zero probability of being randomly presented in a profile (unless there is theoretical reason to define prohibited pairs of attributes that would not make sense in real life). Based on these assumptions, the design allows for the estimation of the individual effect of any given treatment component, or AMCE. The goal is to understand how any given treatment component affects the probability of a profile being chosen, while taking into account that such an individual effect may be--and usually is--different depending on the other attributes of the profile, which allows for the estimation of the marginal effect of the treatment attribute "averaged over the joint distribution of the remaining attributes" (Hainmueller et al., 2014, p. 10). We can also estimate interaction effects between treatment components, for instance income and public prominence of the candidates. This interaction effect is the average component interaction effect, ACIE, and can operate, as mentioned above, as the interaction of two treatment components, where the ACIE of the two treatment components of interest is the difference in percentage-point estimates of the average marginal component effects of the income level between a candidate with a high level of public prominence and a candidate with a low level of public prominence. Furthermore, the interaction effect can be estimated between any given treatment component and a characteristic of the respondent, like age or political ideology (Hainmueller et al., 2014). Finally, as estimation strategies, it is possible to perform a simple difference in means or a linear regression (Hainmueller et al., 2014). The study assumes completely independent randomization of treatment components; that is, the candidate profiles can take on any combination of possible attributes, without any restriction or prohibited pairs. This way, it is possible to estimate the AMCE as the difference in mean choice outcomes between profiles where the treatment component occurred and profiles where it did not occur, or by fitting a linear regression "of the observed choice outcomes on the (...) dummy variables for the attribute of interest and looking at the estimated coefficient for the treatment level." (Hainmueller et al., 2014, p. 16). Furthermore, and very conveniently, we can also estimate the AMCEs of all treatment components by simply regressing the outcome variable on the sets of dummy variables for every attribute level (excluding the baselines); thus the AMCE can be interpreted as the average change in the probability of a given profile being preferred whenever the given profile displays the attribute level of interest instead of the baseline attribute level (Hainmueller et al., 2014). This way, it is possible to estimate not only the effect of an attribute taking on all its possible values, but also its effect across other possible attributes, which allows the study to explore the possible relative weight that voters may assign to various aspects within their multidimensional choice framework (Hainmueller et al., 2014).
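As a concrete illustration of the regression strategy, the following is a minimal sketch assuming the conjoint data are stored in long format (one row per displayed profile, with a binary `chosen` outcome); the column and level names are hypothetical, and the cluster-robust variance correction anticipates the discussion that follows.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per displayed profile.
#   chosen        -> 1 if the profile was picked in its pairwise task, else 0
#   party, occupation, pol_prom, pub_prom -> categorical attribute columns
#   respondent_id -> cluster identifier (each respondent completes 10 tasks)
df = pd.read_csv("conjoint_long.csv")

# Regressing the choice indicator on dummy variables for every attribute
# level (baselines dropped automatically by the categorical coding) recovers
# each AMCE as the coefficient on the corresponding level, averaged over the
# joint distribution of the other randomized attributes.
model = smf.ols(
    "chosen ~ C(party) + C(occupation) + C(pol_prom) + C(pub_prom)",
    data=df,
)
fit = model.fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})
print(fit.summary())

# Equivalent nonparametric estimate for one level: the difference in mean
# choice rates between profiles showing the level and profiles showing the
# baseline, everything else randomized away.
amce_natl = (
    df.loc[df["pol_prom"] == "Nationally renowned", "chosen"].mean()
    - df.loc[df["pol_prom"] == "No major role", "chosen"].mean()
)
```

An ACIE between two attributes can be obtained the same way by adding an interaction term such as `C(income):C(pub_prom)` to the formula.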
Note that, by design, the choice task outcomes are strongly negatively correlated, because choosing one profile necessarily means not choosing all others, and the outcomes obtained are mostly driven by unobserved respondent characteristics; respondents will therefore always choose their preferred combination of attributes whenever it is displayed (Hainmueller et al., 2014). For that reason, when estimating sampling variance, it is important to correct standard errors; this can be done in two ways: first, by calculating cluster-robust standard errors (when population inferences suffer from possibly correlated standard errors within, in this case, respondent clusters); or, second, by bootstrapping resampled respondents and then calculating uncertainty estimates with the help of the observed distribution of AMCEs over the resamples (Hainmueller et al., 2014).

## Experimental Design

A pilot study was performed, largely influenced by the candidate conjoint analysis performed by Hainmueller et al. (2014), in which hypothetical candidate profiles were shown to respondents for them to choose who they would support, without theorizing about the contextual framework of the election. The study employs a total of nine candidate attributes with a combined number of 45 levels, which were independently and randomly displayed to respondents in a pairwise fashion as a hard choice task, for a total of 10 choice tasks per respondent (Bansak et al., 2018, 2021). Each respondent was asked to choose, in each task, the displayed profile she would support the most. The pilot test asked 75 respondents, for a total of \(N=1500\) observations. The survey was created using the survey platform QuestionPro, and the survey link was administered to respondents in the United States using Amazon Mechanical Turk in the form of Human Intelligence Tasks - HITs (Paolacci and Chandler, 2014). The attributes chosen, and their corresponding levels, are introduced next, along with justification regarding their selection (when considered necessary). The _Ethnicity_, _Occupation_, _Gender_, _Income_ and _Age_ attributes are based on the candidates experiment in Hainmueller et al. (2014).

### Party Affiliation

The study introduces a third level, 'Party Identification not available', to be displayed with a probability of 0.66, to avoid generalized strong party identification effects in responses, which would most likely arise from the marked partisanship that characterizes American politics (Buttice and Stone, 2012; Gouret et al., 2011; Palmer et al., 2013; Sanders et al., 2011). The levels are:

1. Republican
2. Democrat
3. Party Identification not available

### Ethnicity

1. African American
2. Hispanic/Latino
3. White non-Hispanic
4. Native American
5. Asian

### Incumbency

Another critical valence attribute in American politics (Hainmueller & Kern, 2008; Levitt, 1994; Levitt & Wolfram, 1997; Stone & Simas, 2010), the wording of the two levels has been simplified in order to be less technical and to not contain specialized words such as _incumbent_ or _challenger_. The levels are:

1. The candidate is in office and seeks reelection.
2. The candidate is looking to be elected for the first time.

### Gender

Following Mcdermott (1997), the inclusion of a gender variable responds to the intention of interacting it for possible gender effects between respondents' and candidates' attributes. The levels are:

1. Female
2. Male

### Occupation

Based on the immigration experiment in Hainmueller et al.
(2014), but also inspired by the notion of famous political amateurs entering electoral races. Levels such as _actor_ or _athlete_ play, therefore, an important role in the experiment, especially when potentially interacted with high levels of public prominence. Furthermore, it has been recently found that in local races voters tend to support candidates with a previous occupation related to the office they are running for (Atkeson & Hamel, 2020). The levels are:

1. Lawyer
2. Military Officer
3. Teacher
4. Farmer
5. Business owner
6. Athlete
7. Actor
8. Banker
9. Journalist
10. Union Leader

### Age

Recent evidence indicates that age is indeed an important and understudied heuristic through which voters tend to favor candidates with the same party affiliation who are closer in age to them (Webster & Pierce, 2019). The levels are:

1. 31 years old
2. 38 years old
3. 45 years old
4. 52 years old
5. 59 years old
6. 66 years old
7. 73 years old

### Income

A reference for this attribute was Wuest & Rosset (2017) in the Swiss case. The levels consist of the following fixed annual income figures that represent incomes rising to just short of the top 1% (Winters & Page, 2009):

1. Annual income $32,000
2. Annual income $54,000
3. Annual income $75,000
4. Annual income $92,000
5. Annual income $140,000
6. Annual income $360,000
7. Annual income $840,000

### Public Prominence

Based on the notion of prominence as a component of the importance of political actors (Munzert, 2018). Public prominence is operationalized and signaled to respondents using a 'followers on Twitter' metric. This metric is a good operationalization of the idea of prominence, since it gives the respondent an idea of how many others are dedicating time of their own to follow the candidate on social media, meaning that, naturally, the more followers a candidate has, the more prominent she is. The levels were designed as follows:

1. The candidate has 210 followers on Twitter
2. The candidate has 2,400 followers on Twitter
3. The candidate has 23,700 followers on Twitter
4. The candidate has 315,000 followers on Twitter
5. The candidate has 1.3 million followers on Twitter

### Political Prominence

Operationalized and signaled to respondents by indicating whether the candidate has played an important role in her political party. The levels are the following:

1. The candidate has not played a major role in the party
2. The candidate is a locally renowned member of the party
3. The candidate is a state-wide renowned member of the party
4. The candidate is a nationally renowned member of the party

### Respondent survey questions

After the 10 choice tasks presented to the respondent, they were asked to answer seven mandatory sociodemographic and political ideological self-placement questions. The questions are the following:

1. On a scale from 0 to 10, where 0 indicates "Far Left" and 10 indicates "Far Right", where would you place yourself in terms of political ideology?
2. Select from the following ethnicities the one with which you identify yourself most: 1. White 2. African American 3. Hispanic/Latinx 4. Asian 5. Native American 6. Prefer not to say
3. Select from the following age ranges the one in which you are located: 1. Younger than 20 years old 2. 20 - 30 years old 3. 31 - 40 years old 4. 41 - 50 years old 5. 51 - 60 years old 6. 61 - 70 years old 7. Older than 70 years old
4. Do you usually think of yourself as a Republican, a Democrat, an Independent, or something else? 1. Republican 2. Democrat 3.
Independent 4. Something else
5. In general terms: How interested in politics are you? 1. Not interested at all (...)

... being elected, meaning that for RQ1, levels of perceived political prominence do matter by themselves. Figure 2 presents all the AMCEs as the component change in the expected probability of a profile being chosen, for every attribute and every level, compared to a baseline level, all else held equal. See Table 1 in the Appendix for detailed AMCE estimates.

**Figure 2** - AMCE estimates for all attributes and levels

The marked effect of partisanship of the candidate is likely due to the high imbalance in the respondent sample, with Democrat respondents accounting for close to 55%, Republicans for 17% and Independents for 27% of the total. The latter explains why belonging to the Republican party or having the highest level of income on average decreases the probability of the candidate being elected. However, the more interesting result pertaining to the study refers to the significant and positive effect of political prominence on the probability of election (except for the non-significant estimate for "Statewide Renowned Party Member"). We theorize that the recognized levels of prominence depend greatly on the subjective informational structure of the voter, and therefore the significant positive effects of political prominence might be due to the response distribution regarding how interested respondents said they are in politics, which is slightly skewed in the direction of overall greater interest in politics: not interested at all, 5%; slightly interested, 20%; moderately interested, 32%; rather interested, 27%; very interested, 16%. To further explore why this might be the case and to answer RQ2 and RQ3, we propose a conditional estimation of prominence regarding respondents' characteristics. Figure 3 shows ACIE estimations of perceptions of political and public prominence, along with the candidate's political party affiliation, on a grid that varies along the 11-point scale of the respondent's own ideological self-placement, ranging from 0 - 'Far left' to 10 - 'Far right'. As expected, it is possible to see how the effect of the candidate's party shifts as the scale increases, from more positive support for Democratic candidates at the beginning towards more positive support for the Republican candidate, as compared to a candidate with no party information displayed. Drawing attention only to the change in the effect of perceptions of political prominence of candidates, conditional on the ideological self-placement of the respondents, there is not too much of a gradual shift. The same could be said about the effect of perceptions of public prominence conditional on the ideological stance of the respondents. Although there are specific cases in which the effect may be clearly positive or negative and, above all, statistically significant, it is not possible to say that these changes in estimates are a function of the ideological position of the respondent. In general, estimates are non-significant, with some exceptions, for instance when respondents locate themselves in positions 2 and 6 of the ideological scale. In those cases, political prominence seems to keep having the general positive effect on the probability of candidate selection.
However, for respondents who locate themselves at the rightmost position of the scale (lower right corner plot in Figure 3, the 11th position, i.e., 10 on the scale), political prominence acquires a more negative effect on the probability of selection, whereas public prominence acquires a positive effect. Table 2 in the Appendix details the ACIEs for this last interaction estimation.

Figure 3: Conditional effects of a candidate's PartyID and Public and Political Prominence on respondents' left-right ideological self-placement

Figure 4 shows the effects of the candidates' attributes, conditional on the respondents' self-assessed interest in politics. See Tables 3.1 - 3.5 in the Appendix for the complete estimates displayed in Figure 4. In general, respondents who said not to be at all interested in politics showed negative and significant effects for levels of political prominence, particularly for candidates who were locally and nationally renowned members of the party. The latter makes sense, as potential voters who are not familiar with political dynamics, or with the competence or viability associated with different levels of political prominence, would not be familiar with how to allocate relative importance to any of the possible signaled scenarios. Moreover, they also showed positive and statistically significant effects for all levels of public prominence, as compared to the baseline (almost no followers on Twitter). This is a major finding, since it supports the idea of publicly perceived prominent political figures being more appealing to those voters who do not care that much about politics. These results would indicate that this group of voters uses a recognition heuristic to make ecologically rational inferences about the viability of the candidate based on the signaled mediator of Twitter followers. Although the proportion of respondents who said not to be interested in politics at all only accounted for 5% of the responses, this group further favored candidates whose occupation was banker and who had higher annual incomes, i.e., $140,000, $360,000 and $840,000. Furthermore, respondents who said they were slightly interested in politics only showed positive statistically significant effects for the candidates who belonged to the Democratic Party. Respondents moderately interested in politics showed positive and statistically significant effects for candidates with lower levels of political prominence, favoring those who were locally or statewide renowned members of the party. They did not show any significant effect towards the candidates' levels of public prominence. Respondents who said they were rather interested in politics showed significant effects regarding the candidates' higher levels of political and public prominence. Specifically, they tended to approve of candidates who were nationally renowned members of the party and tended to disapprove of candidates who had 315,000 Twitter followers. They also showed a negative significant effect for candidates whose annual income was very high ($840,000). Lastly, respondents who said they were very interested in politics did not seem to pay much attention to the cues and signals relating to levels of perceived political and public prominence. This finding would support the idea of voters being less prone to use heuristics when they are (very) interested in politics, as opposed to those who are not, as mentioned above.
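The conditional estimates reported here can be reproduced in sketch form by re-running the AMCE regression within respondent subgroups; the column and level names below are hypothetical and continue the earlier sketch.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("conjoint_long.csv")  # same hypothetical long-format data

# Conditional AMCEs: re-estimate within each level of a respondent
# characteristic (here, self-assessed interest in politics).
conditional_amces = {}
for level, sub in df.groupby("interest_in_politics"):
    fit = smf.ols(
        "chosen ~ C(party) + C(pol_prom) + C(pub_prom)",
        data=sub,
    ).fit(cov_type="cluster", cov_kwds={"groups": sub["respondent_id"]})
    # Keep the public-prominence coefficients for this subgroup.
    conditional_amces[level] = fit.params.filter(like="pub_prom")

# Differences in an attribute's AMCE across subgroups correspond to the
# ACIEs discussed above. Small subgroups (e.g., the 5% who are 'not
# interested at all') will produce wide confidence intervals.
```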
**Figure 4** - Conditional effects of a candidate's attributes on respondents' self-assessed interest in politics
Finally, to explore how education levels might interact with the effect of prominence on voting, Figure 5 shows the conditional effects of a candidate's attributes on different levels of respondents' education. See Tables 4.1 - 4.5 in the Appendix for the complete estimates displayed in Figure 5. Although representing only 3% of the responses, respondents who did some high school but never finished showed highly significant effects for all candidates' levels of public and political prominence. All those effects of the perceived prominence of candidates are positive for this group of prospective voters, except for a candidate who is a statewide renowned member of the party, which has a negative effect. Those respondents who did some college or university studies but never completed them (21% of the responses) showed a statistically significant positive effect for higher levels of political prominence of candidates, particularly for candidates who were nationally renowned members of the party. Respondents who finished their undergraduate education (56% of the responses) showed positive significant effects for candidates who were 45 years old and for candidates who belonged to the Democratic Party. Finally, those respondents who did some graduate studies (5% of the responses) showed a statistically significant preference towards candidates with higher levels of political prominence and lower levels of public prominence. Specifically, they showed positive effects for candidates who were nationally renowned members of the party and for those who had 23,700 Twitter followers.
This finding would suggest that highly educated respondents rely on high political prominence levels and on lower levels of public prominence as recognition heuristics at the time of voting. This group of prospective voters tended to dislike Republican candidates.

Figure 5: Conditional effects of a candidate's attributes on respondents' level of education

The results indicate significant effects of different levels of political or public prominence, by themselves or conditioned on respondents' varying characteristics. Nevertheless, the issue remains one of replicability and validity, especially as the pilot tests presented might be underpowered. By design, the experimental conjoint analysis allows for significant internal validity of the model (Hainmueller et al., 2014). Regarding external validity, however, further attempts via replication of the experiment with a larger sample and comparative research are warranted (Hainmueller et al., 2015). Theoretical explanations for the patterns and effects found can be manifold, and thus must be treated with care to avoid premature causal inferences, although initial findings regarding all three research questions seem satisfactory. Finally, a caveat of our design must be put forward: due to technical limitations of the online survey platform used, the randomly displayed attribute levels were shown in a fixed order every time. This aspect could have influenced the estimates by inducing primacy effects on the respondents (Hainmueller et al., 2014). These findings underscore the crucial need to understand the heuristics that drive electoral races and help shape their outcomes. The existing literature on valence politics has shed much light on the matter, and this research now adds another piece to the puzzle, further highlighting the complex dynamics at play in electoral races. Understanding these heuristics is, moreover, necessary for designing and updating electoral systems so as to avoid possible abuses or election-related manipulation, such as the possibility of spending large amounts of money on advertising with the goal of making a candidate more publicly prominent and hence more appealing to the share of the voting population that is not interested in politics--and, even more importantly, to those who do not vote (estimated at over 40% of the voting-age population (Desilver, 2020)), for whom a lack of interest in politics may be a rather important incentive not to do so. Further avenues of research and possible policy implications could relate to ballot redesign, with the aim of getting ahead of possibly dangerous heuristics voters use, and to disentangling why right-wing extremists apparently place more importance on public perceptions of prominence and a negative weight on political prominence.
2309.13569
On the automorphism group of parabolic structures and closed aspherical manifolds
In this expository paper we discuss several properties on closed aspherical parabolic ${\sfG}$-manifolds $X/\Gamma$. These are manifolds $X/\Gamma$, where $X$ is a smooth contractible manifold with a parabolic ${\sfG}$-structure for which $\Gamma\leq \Aut_{\sfG}(X)$ is a discrete subgroup acting properly discontinuously on $X$ with compact quotient. By a parabolic $\sfG$-structure on $X$ we have in mind a Cartan structure which is modeled on one of the classical parabolic geometries arising from simple Lie groups $\sfG$ of rank one. Our results concern in particular the properties of the automorphism groups $\Aut_{\sfG}(X/\Gamma)$. Our main results show that the existence of certain parabolic ${\sfG}$-structures can pose strong restrictions on the topology of compact aspherical manifolds $X/\Gamma$ and their parabolic automorphism groups. In this realm we prove that any compact aspherical standard $CR$-manifold with virtually solvable fundamental group is diffeomorphic to a quotient of a Heisenberg manifold of complex type with its standard $CR$-structure. Furthermore we discuss the analogue properties of standard quaternionic contact manifolds in relation to the quaternionic Heisenberg group.
Oliver Baues, Yoshinobu Kamishima
2023-09-24T07:05:25Z
http://arxiv.org/abs/2309.13569v1
# On the automorphism group of parabolic structures and closed aspherical manifolds

###### Abstract.

In this expository paper we discuss several properties of closed aspherical parabolic \(\mathsf{G}\)-manifolds \(X/\Gamma\). These are manifolds \(X/\Gamma\), where \(X\) is a smooth contractible manifold with a parabolic \(\mathsf{G}\)-structure for which \(\Gamma\leq\operatorname{Aut}_{\mathsf{G}}(X)\) is a discrete subgroup acting properly discontinuously on \(X\) with compact quotient. By a parabolic \(\mathsf{G}\)-structure on \(X\) we have in mind a Cartan structure which is modeled on one of the classical parabolic geometries arising from simple Lie groups \(\mathsf{G}\) of rank one. Our results concern in particular the properties of the automorphism groups \(\operatorname{Aut}_{\mathsf{G}}(X/\Gamma)\). Our main results show that the existence of certain parabolic \(\mathsf{G}\)-structures can pose strong restrictions on the topology of compact aspherical manifolds \(X/\Gamma\) and their parabolic automorphism groups. In this realm we prove that any compact aspherical standard \(CR\)-manifold with virtually solvable fundamental group is diffeomorphic to a quotient of a Heisenberg manifold of complex type with its standard \(CR\)-structure. Furthermore we discuss the analogue properties of standard quaternionic contact manifolds in relation to the quaternionic Heisenberg group.

Key words and phrases: \(CR\)-structure, Pseudo-Hermitian structure, Conformal structure, Quaternionic contact structure, Sasaki metric, (Hyper-)Kahler manifold

2010 Mathematics Subject Classification: 22E41, 53C10, 57S20, 53C55

This work was partially supported by JSPS grant No 22K03319.

## 1. Introduction

The relevant rank one simple Lie groups are \(\operatorname{PO}(n+1,1)\) for \(\mathbb{K}=\mathbb{R}\), or \(\operatorname{PU}(n+1,1)\), \(\operatorname{PSp}(n+1,1)\) for \(\mathbb{K}=\mathbb{C},\mathbb{H}\), where:

\[\begin{split}\mathfrak{so}(n+1,1)&=\mathfrak{g}^{-1}+\mathfrak{g}^{0}+\mathfrak{g}^{1}=\mathbb{R}^{n}+(\mathfrak{so}(n)+\mathbb{R})+(\mathbb{R}^{n})^{*},\\ \mathfrak{su}(n+1,1)&=\mathfrak{g}^{-2}+\mathfrak{g}^{-1}+\mathfrak{g}^{0}+\mathfrak{g}^{1}+\mathfrak{g}^{2}\\ &=\operatorname{Im}\mathbb{C}+\mathbb{C}^{n}+(\mathfrak{u}(n)+\mathbb{R})+(\mathbb{C}^{n})^{*}+(\operatorname{Im}\mathbb{C})^{*},\\ \mathfrak{sp}(n+1,1)&=\mathfrak{g}^{-2}+\mathfrak{g}^{-1}+\mathfrak{g}^{0}+\mathfrak{g}^{1}+\mathfrak{g}^{2}\\ &=\operatorname{Im}\mathbb{H}+\mathbb{H}^{n}+(\mathfrak{sp}(n)+\mathfrak{sp}(1)+\mathbb{R})+(\mathbb{H}^{n})^{*}+(\operatorname{Im}\mathbb{H})^{*}\end{split}\]

in which \(P_{\mathbb{K}}\) is generated by the parabolic subalgebra \(\mathfrak{p}=\mathfrak{g}^{0}+\mathfrak{g}^{1}\) for \(\mathbb{K}=\mathbb{R}\), or \(\mathfrak{g}^{0}+\mathfrak{g}^{1}+\mathfrak{g}^{2}\) for \(\mathbb{C},\mathbb{H}\), respectively (see [2]). Put \(\mathsf{G}=P_{\mathbb{K}}\), for brevity. By a _parabolic \(\mathsf{G}\)-structure_ on a manifold \(M\), we thus mean either one of a positive definite conformal structure, a strictly pseudo-convex \(CR\)-structure, or a positive definite quaternionic contact structure (\(qc\)-structure for short) on \(M\). If \(\operatorname{Aut}_{\mathsf{G}}(M)\) is the group of structure preserving transformations on such a parabolic \(\mathsf{G}\)-manifold \(M\), \(\operatorname{Aut}_{\mathsf{G}}(M)\) is called: (1) The group of conformal transformations \(\operatorname{Conf}(M,[g])\). (2) The group of \(CR\)-transformations \(\operatorname{Aut}_{CR}(M,\{\mathsf{D},J\})\), or (3) The group of \(qc\)-transformations \(\operatorname{Aut}_{qc}(M,\mathsf{D},\{J_{\alpha}\}_{\alpha=1}^{3})\), respectively.
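For orientation, it may help to keep in mind the flat model of case (2). The following illustration is standard (it is not specific to this paper): on the Heisenberg group \(\mathcal{N}=\mathbb{C}^{n}\times\mathbb{R}\) with real coordinates \((x_{1},\dots,x_{n},y_{1},\dots,y_{n},t)\), the left-invariant contact form

\[\omega_{0}=dt+\frac{1}{2}\sum_{j=1}^{n}\left(x_{j}\,dy_{j}-y_{j}\,dx_{j}\right),\qquad d\omega_{0}=\sum_{j=1}^{n}dx_{j}\wedge dy_{j},\]

together with the standard complex structure \(J\) on \(\ker\omega_{0}\cong\mathbb{C}^{n}\), defines a strictly pseudo-convex \(CR\)-structure: the Levi form \(d\omega_{0}\circ J\) is the standard positive definite Hermitian form, and the associated Reeb field (see Section 4) is \(\partial/\partial t\).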
_Rigidity of parabolic \(\mathsf{G}\)-manifolds with non-proper automorphism group_. An important observation on parabolic \(\mathsf{G}\)-manifolds is that the automorphism group \(\operatorname{Aut}_{\mathsf{G}}(M)\) does not necessarily act properly on \(M\). In particular this is the case for the model spheres \(S^{|\mathbb{K}|(n+1)-1}\). Given a parabolic \(\mathsf{G}\)-manifold \(M\), in case \(\operatorname{Aut}_{\mathsf{G}}(M)\) is a non-proper group the parabolic \(\mathsf{G}\)-manifold \(M\) is completely determined by works of D. V. Alekseevsky [2], J. Ferrand [18], R. Schoen [29], J. Lee [26], C. Frances [19], S. Ivanov and D. Vassilev [21], Webster [30] and others, as follows:

**Theorem A**.: _If the automorphism group \(\operatorname{Aut}_{\mathsf{G}}(M)\) of a parabolic \(\mathsf{G}\)-manifold \(M\) does not act properly, then \(M\) with its parabolic structure admits a structure preserving diffeomorphism to one of the standard model spaces as specified in \((1),(2),(3)\):_

\((1)\) _\(M\) is conformal to either the standard sphere \(S^{n}\) or the euclidean space \(\mathbb{R}^{n}\). Here it occurs_

\[(\operatorname{Iso}(M),\operatorname{Conf}(M))=\begin{cases}(\operatorname{O}(n+1),\operatorname{PO}(n+1,1))&(M=S^{n})\,,\\ (\mathbb{R}^{n}\rtimes\operatorname{O}(n),\mathbb{R}^{n}\rtimes(\operatorname{O}(n)\times\mathbb{R}^{+}))&(M=\mathbb{R}^{n})\,.\end{cases}\]

\((2)\) _\(M\) has a spherical \(CR\)-structure isomorphic to either the standard sphere \(S^{2n+1}\) or the Heisenberg Lie group \(\mathcal{N}\) (with its canonical \(CR\)-structure). It occurs_

\[(\operatorname{Psh}_{\,CR}(M),\operatorname{Aut}_{CR}(M))=\begin{cases}(\operatorname{U}(n+1),\operatorname{PU}(n+1,1))&(M=S^{2n+1}),\\ (\mathcal{N}\rtimes\operatorname{U}(n),\mathcal{N}\rtimes(\operatorname{U}(n)\times\mathbb{R}^{+}))&(M=\mathcal{N}).\end{cases}\]

\((3)\) _\(M\) has a spherical \(qc\)-structure isomorphic to either the standard sphere \(S^{4n+3}\) or the quaternionic Heisenberg nilpotent Lie group \(\mathcal{M}\). It occurs_

\[(\operatorname{Psh}_{\,qc}(M),\operatorname{Aut}_{qc}(M))=\begin{cases}\big(\operatorname{Sp}(n+1)\cdot\operatorname{Sp}(1),\ \operatorname{PSp}(n+1,1)\big)&(M=S^{4n+3}),\\ \big(\mathcal{M}\rtimes\operatorname{Sp}(n)\cdot\operatorname{Sp}(1),\ \mathcal{M}\rtimes(\operatorname{Sp}(n)\cdot\operatorname{Sp}(1)\times\mathbb{R}^{+})\big)&(M=\mathcal{M}).\end{cases}\]

The theorem also gives the pairs \((\operatorname{Psh}_{\,\mathsf{G}}(M),\operatorname{Aut}_{\mathsf{G}}(M))\) for the respective cases (1), (2), (3). Here \(\operatorname{Psh}_{\,\mathsf{G}}(M)\) is the maximal subgroup of \(\operatorname{Aut}_{\mathsf{G}}(M)\) that is acting properly on \(M\). In particular, in case (1), \(\operatorname{Psh}_{\,\mathsf{G}}(M)\) is the subgroup of \(\operatorname{Aut}_{\mathsf{G}}(M)\) which preserves the canonical Riemannian metric that defines the conformal structure on \(M\). In cases (2) and (3), the group \(\operatorname{Psh}_{\,\mathsf{G}}(M)\) coincides with the subgroup of \(\operatorname{Aut}_{\mathsf{G}}(M)\) that preserves the canonical structure defining contact forms.

\(\mathsf{G}\)_-Hermitian subgroups of \(\operatorname{Aut}_{\mathsf{G}}(M)\)._ As the rigidity theorem shows, for any parabolic \(\mathsf{G}\)-manifold \(M\), there exists a maximal subgroup \(\operatorname{Psh}_{\,\mathsf{G}}(M)\) of \(\operatorname{Aut}_{\mathsf{G}}(M)\) that is acting properly on \(M\). If the parabolic structure on \(M\) is of type (2) or (3), it is defined by the conformal class of a contact form \(\omega\).
In this case we define \(\operatorname{Psh}_{\,\mathsf{G}}(M,\omega)\) to be the subgroup of \(\operatorname{Aut}_{\mathsf{G}}(M)\) that preserves \(\omega\). Note that \(\operatorname{Psh}_{\,\mathsf{G}}(M,\omega)\) always acts properly on \(M\) (see [6]), that is, \(\operatorname{Psh}_{\,\mathsf{G}}(M,\omega)\) is contained in \(\operatorname{Psh}_{\,\mathsf{G}}(M)\). The groups \(\operatorname{Psh}_{\,\mathsf{G}}(M,\omega)\) are also called \(\mathsf{G}\)-Hermitian subgroups of \(\operatorname{Aut}_{\mathsf{G}}(M)\), see Section 3 below. As to the precise relation of \(\operatorname{Psh}_{\,\mathsf{G}}(M)\) with the \(\mathsf{G}\)-Hermitian subgroups we additionally have the following:

**Theorem 1** (see Proposition 3.1, [6]).: _Let \(M\) be a parabolic \(\mathsf{G}\)-manifold and let \(H\leq\operatorname{Aut}_{\mathsf{G}}(M)\) be a closed subgroup. Then there exists a canonical cohomology class \([\lambda_{\mathsf{G}}]\) in the differentiable cohomology group \(H^{1}_{d}(H,C^{\infty}(M,\mathbb{R}^{+}))\) with the following properties:_

* _If_ \([\lambda_{\mathsf{G}}]=0\) _and the_ \(\mathsf{G}\)_-structure is conformal, then there exists a Riemannian metric_ \(g\) _representing the conformal structure such that_ \(H\) _is contained in_ \(\operatorname{Iso}(M,g)\)_._
* _If_ \([\lambda_{\mathsf{G}}]=0\) _and the parabolic structure is of type_ (2) _or_ (3)_, there exists a compatible contact form_ \(\omega\) _(that is,_ \(\omega\) _is representing the parabolic structure) such that_ \(H\) _is contained in_ \(\operatorname{Psh}_{\,\mathsf{G}}(M,\omega)\)_._
* _The group_ \(H\) _acts properly on_ \(M\) _if and only if_ \([\lambda_{\mathsf{G}}]=0\)_._

If \(M\) is compact, Theorem 1 combined with Theorem A implies that \(\operatorname{Aut}_{\mathsf{G}}(M)\) is a compact Lie group, except if \(M\) is one of the standard parabolic \(\mathsf{G}\)-spheres as described in Theorem A. In the following we will be in particular interested in compact parabolic \(\mathsf{G}\)-manifolds.

_Automorphisms of aspherical parabolic \(\mathsf{G}\)-manifolds._ A compact manifold \(M\) is called aspherical if its universal covering manifold \(X\) is contractible. In that case \(\operatorname{Aut}_{\mathsf{G}}(M)\) is compact and its identity component \(\operatorname{Aut}_{\mathsf{G}}(M)^{0}\) is a compact torus. The latter fact is a consequence of the following fundamental result on compact Lie group actions on closed aspherical manifolds:

**Theorem B** (Conner and Raymond [15, 16]).: _Let \(X/\Gamma\) be a closed aspherical Riemannian manifold. Then the isometry group \(\operatorname{Iso}(X/\Gamma)\) is a finite group, or the identity component \(\operatorname{Iso}(X/\Gamma)^{0}\) is isomorphic to a \(k\)-torus \(T^{k}\). Moreover, there is a central group extension \(1\to\mathbb{Z}^{k}\to\Gamma\to Q\to 1\), where \(Q=\Gamma/\mathbb{Z}^{k}\)._

Theorem B shows that if the fundamental group \(\Gamma\) of \(M\) has no normal solvable subgroup then \(\operatorname{Aut}_{\mathsf{G}}(M)\) is a finite group. Well known examples of such aspherical manifolds \(M\) are compact locally symmetric spaces of non-compact type (without local flat factors). In this context, we remark the following general fact:

**Theorem 2**.: _Let \(X/\Gamma\) be a closed aspherical manifold such that \(\Gamma\) has no normal solvable subgroup. If \(X/\Gamma\) admits a parabolic \(\mathsf{G}\)-structure, then its automorphism group \(\operatorname{Aut}_{\mathsf{G}}(X/\Gamma)\) is a finite group which is isomorphic to a subgroup of \(\operatorname{Out}(\Gamma)\).
In particular, \(\Gamma\) is of finite index in its normalizer \(N_{\operatorname{Aut}_{\mathsf{G}}(X)}(\Gamma)\)._

In the theorem \(\operatorname{Out}(\Gamma)=\operatorname{Aut}(\Gamma)/\operatorname{Inn}(\Gamma)\) denotes the outer automorphism group of \(\Gamma\).

\(CR\)_- and \(qc\)-structures on closed aspherical manifolds._ From our viewpoint of _parabolic_ \(\mathsf{G}\)-structures, we are interested in how the existence of a parabolic \(\mathsf{G}\)-structure determines the topology of \(X/\Gamma\) and in particular its smooth structure. We show that the existence of certain \(CR\)- or \(qc\)-parabolic structures on \(X/\Gamma\), that admit a non-trivial connected group of automorphisms \(\operatorname{Aut}_{\mathsf{G}}(X/\Gamma)^{0}\), poses a strong restriction on \(\Gamma\). In fact, we show that under the assumption that \(\Gamma\) is virtually solvable any standard \(CR\)-manifold \(X/\Gamma\) is _diffeomorphic_ to a Heisenberg type manifold which is derived from the standard model space in (2) of Theorem A. In the \(qc\)-case a much stronger rigidity result holds, namely we show that _any_ closed aspherical standard \(qc\)-manifold \(X/\Gamma\) is \(qc\)_-equivalent_ to a \(qc\)-manifold of quaternionic Heisenberg type (as in (3) of Theorem A).

_Heisenberg manifolds_. Recall from Theorem A that the Heisenberg Lie group \(\mathcal{N}\) admits a maximal proper subgroup of affine transformations \(\mathcal{N}\rtimes\operatorname{U}(n)\). The group \(\mathcal{N}\rtimes\operatorname{U}(n)=\operatorname{Psh}_{\,CR}(\mathcal{N})\) then preserves the canonical left-invariant \(CR\)-structure, and the canonical pseudo-Hermitian structure on the Heisenberg group \(\mathcal{N}\) (see for example, [22, 7]). Given a torsion-free discrete uniform subgroup \(\Gamma\) contained in \(\mathcal{N}\rtimes\operatorname{U}(n)\), the compact quotient manifold

\[M=\mathcal{N}/\Gamma\]

is then called a _Heisenberg infra-nilmanifold_.

_Standard \(CR\)-structures._ Suppose that \(M\) admits a \(CR\)-structure with contact form \(\omega\), where \(M\) is a \(2n+1\)-dimensional manifold. Then the \(CR\)-structure is called _standard_ if the Reeb vector field associated to \(\omega\) generates a one-parameter subgroup of \(\operatorname{Psh}_{\,CR}(M,\omega)\). In that sense, every Heisenberg infra-nilmanifold carries a canonical standard \(CR\)-structure, where \(\operatorname{Aut}_{CR}(M)^{0}=\operatorname{Psh}_{\,CR}(M,\omega)^{0}=S^{1}\). (The structure is induced from the standard \(CR\)-structure on the Heisenberg group \(\mathcal{N}\).) See Section 4 for details. Further examples of (aspherical) standard \(CR\)-manifolds may be constructed as \(S^{1}\)-bundles over any compact Hermitian locally symmetric space \(B\), using the Kahler class of \(B\) to determine the circle bundle. (In fact, this construction works over every compact Kahler manifold \(B\). See [7] and the references therein.)

_Virtually solvable fundamental group \(\Gamma\)._ Let \(M=X/\Gamma\) be a closed aspherical manifold such that \(\Gamma\) is a virtually solvable group (which means that \(\Gamma\) contains a solvable subgroup of finite index). In fact, since \(M\) is aspherical, \(\Gamma\) is a torsion-free virtually polycyclic group. It is known that every such group occurs as the fundamental group of a compact aspherical manifold \(X/\Gamma\).
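To keep a concrete example in mind (a standard one, consistent with the Heisenberg manifolds above), consider the integral Heisenberg lattice

\[\Gamma=\left\{\begin{pmatrix}1&a&c\\ 0&1&b\\ 0&0&1\end{pmatrix}\;:\;a,b,c\in\mathbb{Z}\right\},\qquad 1\to\mathbb{Z}\to\Gamma\to\mathbb{Z}^{2}\to 1,\]

which is torsion-free and nilpotent, hence polycyclic; its center \(\mathbb{Z}\) is generated by the matrix with \(a=b=0\), \(c=1\). It is the fundamental group of the closed aspherical nilmanifold \(\mathcal{N}/\Gamma\), where \(\mathcal{N}\) is the three-dimensional real Heisenberg group (the case \(n=1\) above).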
Note further that the fundamental group \(\Gamma\) determines \(M\) up to homeomorphism, but not necessarily up to diffeomorphism, unless some further geometric structure on \(M\) is specified that enforces smooth rigidity (cf. [4] and the references therein). Every standard \(CR\)-manifold with virtually solvable fundamental group turns out to be _diffeomorphic to a Heisenberg infra-nilmanifold:_

**Theorem 3**.: _Let \(M=X/\pi\) be a \(2n+1\)-dimensional closed aspherical positive definite strictly pseudoconvex standard \(CR\)-manifold. If \(\pi\) is virtually solvable then there exists a discrete faithful representation \(\rho:\pi\!\to\!\mathcal{N}\rtimes\operatorname{U}(n)\) and \(M\) is diffeomorphic to the Heisenberg infra-nilmanifold \(\mathcal{N}/\rho(\pi)\). In particular, \(\pi\) is virtually nilpotent and \(\operatorname{Aut}_{CR}(M)^{0}=S^{1}\)._

By choosing a representative contact form \(\omega\) for the \(CR\)-structure on \(M\), we obtain a pseudo-Hermitian manifold \((M,(\omega,J))\). Note that a standard pseudo-Hermitian manifold \((M,(\omega,J))\) is equivalent to a Sasaki manifold \((M,g,(\omega,J),\xi)\) by assigning a positive definite Riemannian metric \(g=\omega\cdot\omega+d\omega\circ J\), called the Sasaki metric. (Compare [7, 10].) By the existence of the \(S^{1}\)-action generated by the Reeb field, we see that \(M\) admits a fibering over a Kahler orbifold. We will prove:

**Theorem 3\({}^{\prime}\)**.: _If the fundamental group of a closed aspherical Sasaki manifold \(M\) is virtually solvable, then \(M\) is diffeomorphic to a Heisenberg infra-nilmanifold._

A Sasaki manifold is called _regular_ if the \(S^{1}\)-action generated by the Reeb field is free. For a regular Sasaki manifold Theorem 3\({}^{\prime}\) is stated and proved in [7, Corollary 2, Proposition 6.10]. Assuming that the fundamental group of \(M\) is nilpotent, a proof of Theorem 3\({}^{\prime}\) involving methods of rational homotopy theory is provided in [28]. The following result, obtained by O. Baues and V. Cortes [5], will be used in our proof.

**Theorem C**.: _Let \(X/\Gamma\) be a closed aspherical Kahler manifold with \(\Gamma\) virtually solvable. Then a finite cover of \(X/\Gamma\) is biholomorphic to a complex torus._

We next ask whether similar results hold for \(4n+3\)-dimensional \(qc\)-manifolds.

_Quaternionic Heisenberg manifolds._ A quaternionic Heisenberg Lie group is a \(4n+3\)-dimensional nilpotent Lie group \(\mathcal{M}\) with center \(\mathbb{R}^{3}\) whose quotient is isomorphic to the quaternionic vector space \(\mathbb{H}^{n}\); the quaternionic structure determines the Lie product on \(\mathcal{M}\) (see [2, 23]). The group \(\mathcal{M}\) admits a maximal proper subgroup of affine transformations \(\operatorname{Psh}_{\,qc}(\mathcal{M})=\mathcal{M}\rtimes\operatorname{Sp}(n)\cdot\operatorname{Sp}(1)\). Then \(\operatorname{Psh}_{\,qc}(\mathcal{M})\) preserves the canonical \(qc\)-structure and the canonical \(qc\)-Hermitian structure on \(\mathcal{M}\). (Compare Theorem A.) A quotient \(\mathcal{M}/\Gamma\) by a torsion-free discrete uniform subgroup \(\Gamma\leq\operatorname{Psh}_{\,qc}(\mathcal{M})\) is called a quaternionic Heisenberg infra-nilmanifold.

For any \(qc\)-manifold \(M\), let \(\mathcal{T}=\{\xi_{1},\xi_{2},\xi_{3}\}\) denote the three-dimensional integrable distribution complementary to the codimension three subbundle \(\mathsf{D}\) on \(M\) determined by the \(qc\)-structure, called the _\(qck\)-distribution_ (see Section 3).
If \(\mathcal{T}\) generates a subgroup of \(\operatorname{Psh}_{\,qc}(M)\), then \(M\) is called a _standard \(qc\)-manifold_.

_Remark_.: When \(M\) is a closed aspherical standard \(qc\)-manifold, \(\mathcal{T}\) generates a three-torus \(T^{3}\leq\operatorname{Psh}_{\,qc}(M)\). In this case, the \(qc\)-structure on \(M\) thus does not give rise to a \(3\)-Sasaki structure, since the \(qck\)-distribution \(\mathcal{T}\) does not generate an \(\operatorname{Sp}(1)\)-action on \(M\) but a \(T^{3}\)-action [23, Theorem 5.4]. Then the quotient space \(M/\mathcal{T}\) inherits a hyper-Kahler structure from \(M\).

We have the following strong rigidity property for standard \(qc\)-structures on closed aspherical manifolds.

**Theorem 4**.: _Let \(X/\pi\) be a positive definite closed aspherical standard \(qc\)-manifold. Then \(X/\pi\) is \(qc\)-isometric to a quaternionic Heisenberg infra-nilmanifold \(\mathcal{M}/\pi\), where \(\pi\leq\mathcal{M}\rtimes\operatorname{PSp}(n)\) is a discrete uniform subgroup. In particular, \(\pi\) is virtually nilpotent and \(\operatorname{Aut}_{qc}(X/\pi)^{0}=T^{3}\) is a three-torus._

In the context of hyper-Kahler manifolds the following striking fact plays an important role in the proof of Theorem 4:

_Rigidity of aspherical hyper-Kahler manifolds._ Recall that the quaternionic torus \(T^{n}_{\mathbb{H}}\) admits a natural flat homogeneous hyper-Kahler structure which is induced by a linear quaternionic hermitian form on \(\mathbb{H}^{n}\) (cf. [12]). We then have:

**Lemma D**.: _Let \(M\) be a closed aspherical hyper-Kahler manifold. Then a finite cover of \(M\) is hypercomplexly isometric to a quaternionic torus \(T_{\mathbb{H}}^{n}\) with its natural flat hyper-Kahler structure._

Here a hypercomplex isomorphism is simultaneously a holomorphic diffeomorphism with respect to each complex structure. This strong rigidity for aspherical hyper-Kahler manifolds is a consequence of the Calabi-Yau theorem and the Cheeger-Gromoll splitting theorem. For a related result on complex hyperhermitian surfaces, see [11].

_The paper is organized as follows._ In Section 2, we prove Theorem 2 of the Introduction saying that \(\operatorname{Aut}_{\mathsf{G}}(X/\Gamma)\) is finite whenever \(\Gamma\) has no normal solvable subgroups. In Section 3, we introduce a cohomology invariant for the group \(\operatorname{Aut}_{\mathsf{G}}(M)\) (see Proposition 3.1) to show the coincidence of the Lie groups \(\operatorname{Aut}_{\mathsf{G}}(M)\) and \(\operatorname{Psh}_{\mathsf{G}}(M)\) when \(\operatorname{Aut}_{\mathsf{G}}(M)\) acts properly on \(M\) in Theorem 3.3. In particular, when \(M\) is compact, \(\operatorname{Aut}_{\mathsf{G}}(M)\) is a compact Lie group. We prove Theorem 3 in Section 4. Section 5 concerns the properties of standard \(qc\)-manifolds.

## 2. \(\mathsf{G}\)-manifolds with compact automorphism groups

We study the structure of \(\operatorname{Aut}_{\mathsf{G}}(X/\Gamma)\) for closed aspherical \(\mathsf{G}\)-manifolds.

_Lifting Lemma._ To prepare our proof we start our discussion with some well known general setup. Let \(\tilde{M}\) be the universal covering space of \(M=\tilde{M}/\pi\) and denote by \(N_{\operatorname{Diff}(\tilde{M})}(\pi)\) the normalizer of \(\pi\) in \(\operatorname{Diff}(\tilde{M})\).
The conjugation homomorphism

\[\mu:N_{\operatorname{Diff}(\tilde{M})}(\pi)\to\operatorname{Aut}(\pi)\]

defined by \(\mu(\tilde{f})(\gamma)=\tilde{f}\circ\gamma\circ\tilde{f}^{-1}\) (for all \(\gamma\in\pi\)) induces a homomorphism

\[\varphi:\operatorname{Diff}(M)\to\operatorname{Out}(\pi)\;.\]

**Lemma 2.1** (see [27]).: _There is an exact commutative diagram:_

\[\begin{CD}1@>>>Z(\pi)@>>>Z_{\operatorname{Diff}(\tilde{M})}(\pi)@>>>\ker\varphi@>>>1\\ @.@VVV@VVV@VVV\\ 1@>>>\pi@>>>N_{\operatorname{Diff}(\tilde{M})}(\pi)@>>>\operatorname{Diff}(M)@>>>1\\ @.@VVV@V{\mu}VV@V{\varphi}VV\\ 1@>>>\operatorname{Inn}(\pi)@>>>\operatorname{Aut}(\pi)@>>>\operatorname{Out}(\pi)@>>>1\end{CD} \tag{2.1}\]

Here, \(Z_{\operatorname{Diff}(\tilde{M})}(\pi)\) denotes the centralizer of \(\pi\) in \(\operatorname{Diff}(\tilde{M})\).

_Application to automorphisms of aspherical manifolds._

**Proof of Theorem 2.** As we show now, replacing \(\operatorname{Diff}(\tilde{M})\) in Lemma 2.1 by \(\operatorname{Aut}_{\mathsf{G}}(X)\), the normalizer \(N_{\operatorname{Aut}_{\mathsf{G}}(X)}(\Gamma)\) turns out to be discrete and the homomorphism \(\mu:N_{\operatorname{Aut}_{\mathsf{G}}(X)}(\Gamma)\to\operatorname{Aut}(\Gamma)\) injective. As in diagram (2.1), there is an exact sequence:

\[1\to\ker\varphi\to\operatorname{Aut}_{\mathsf{G}}(X/\Gamma)\stackrel{{\varphi}}{{\longrightarrow}}\operatorname{Out}(\Gamma) \tag{2.2}\]

such that \(\operatorname{Aut}_{\mathsf{G}}(X/\Gamma)^{0}\leq\ker\varphi\). Let \(Z_{\operatorname{Aut}_{\mathsf{G}}(X)}(\Gamma)\) be the centralizer of \(\Gamma\) in \(\operatorname{Aut}_{\mathsf{G}}(X)\). Then, as can be inferred from diagram (2.1), \(\ker\varphi\) is associated with the covering group extension:

\[1\to Z(\Gamma)\to Z_{\operatorname{Aut}_{\mathsf{G}}(X)}(\Gamma)\to\ker\varphi\to 1.\]

Since \(Z(\Gamma)\), the center of \(\Gamma\), is trivial by the hypothesis, it follows that

\[Z_{\operatorname{Aut}_{\mathsf{G}}(X)}(\Gamma)\cong\ker\varphi. \tag{2.3}\]

Now \(\operatorname{Aut}_{\mathsf{G}}(X/\Gamma)\) is compact by Corollary 3.5. Since a compact connected Lie group acting on a closed aspherical manifold is a torus (cf. Theorem B), \(\operatorname{Aut}_{\mathsf{G}}(X/\Gamma)^{0}=T^{k}\), \(k\geq 0\). Then the orbit map \(\operatorname{ev}(t)=tz\) of \(T^{k}\) into \(X/\Gamma\) at any point \(z\in X/\Gamma\) induces an injective homomorphism \(\operatorname{ev}_{*}:\mathbb{Z}^{k}\to\Gamma\) such that \(\operatorname{ev}_{*}(\mathbb{Z}^{k})\leq Z(\Gamma)\) (cf. [16, Lemma 4.2]). As \(Z(\Gamma)\) is trivial, by assumption, we have \(k=0\). That is, \(\operatorname{Aut}_{\mathsf{G}}(X/\Gamma)^{0}=\{1\}\). In particular \(\operatorname{Aut}_{\mathsf{G}}(X/\Gamma)\) is a finite group. Therefore, the subgroup \(\ker\varphi\) is finite by (2.2), and so is \(Z_{\operatorname{Aut}_{\mathsf{G}}(X)}(\Gamma)\) by (2.3). As \(Z_{\operatorname{Aut}_{\mathsf{G}}(X)}(\Gamma)\) is centralized by \(\Gamma\), it follows that \(Z_{\operatorname{Aut}_{\mathsf{G}}(X)}(\Gamma)=\{1\}\) by [8, Theorem 2]. Hence \(\ker\varphi=\{1\}\), that is, \(\varphi:\operatorname{Aut}_{\mathsf{G}}(X/\Gamma)\to\operatorname{Out}(\Gamma)\) is injective. By (2.1), the quotient of \(N_{\operatorname{Aut}_{\mathsf{G}}(X)}(\Gamma)\) by \(\Gamma\) is isomorphic to \(\operatorname{Aut}_{\mathsf{G}}(X/\Gamma)\). As \(\operatorname{Aut}_{\mathsf{G}}(X/\Gamma)\) is finite, \(\Gamma\) is of finite index in \(N_{\operatorname{Aut}_{\mathsf{G}}(X)}(\Gamma)\).

Along the same lines as the above proof, we have:

**Corollary 2.2**.: _Let \(X/\Gamma\) be a closed aspherical manifold. Suppose that \(G\leq\operatorname{Diff}\,(X/\Gamma)\) is a connected Lie group. If \(Z(\Gamma)\) is trivial (or finite), then \(G\) is a simply connected solvable Lie group._

## 3. Invariant substructures for \(\operatorname{Psh}_{\,\mathsf{G}}(M)\)

Let \(M\) be a parabolic \(\mathsf{G}\)-manifold.
If \(M\) is of \(CR\)- or \(qc\)-type, we shall prove that, when \(\operatorname{Aut}_{\mathsf{G}}(M)\) acts properly, there exists a representative form \(\omega\) for the parabolic \(\mathsf{G}\)-structure such that \(\operatorname{Aut}_{\mathsf{G}}(M)\) coincides with \(\operatorname{Psh}_{\,\mathsf{G}}(M,\omega)\). (The obvious analogue also holds for conformal structures defined by Riemannian metrics. See [6] and references therein for both results.)

_Geometry associated with parabolic \(\mathsf{G}\)-structures._ Whereas a conformal structure is equivalent to a conformal class of Riemannian metrics, the classical geometries underlying the case (2) and (3) parabolic geometries are considerably more involved. Let us thus briefly recall the geometric data associated with case (2) and (3) parabolic geometries:

In the case of \(CR\)-structures, we have a contact form \(\omega\) on a connected smooth manifold \(M\), which is determined up to scaling with a positive function, and a complex structure \(J\) on the contact bundle \(\ker\omega\) which is compatible with \(\omega\) in the sense that the Levi form \(d\omega\circ J\) is a positive definite Hermitian form on \(\ker\omega\). These data define a _strictly pseudo-convex_ \(CR\)-structure. Note that \(\omega\) is defined up to a conformal change with a positive function.

Let a \(qc\)-structure on a \(4n+3\)-manifold \(M\) be given. This amounts to a positive definite codimension three subbundle \(\mathsf{D}\), which is non-integrable and such that \(\mathsf{D}+[\mathsf{D},\mathsf{D}]=TM\). Moreover, there is a hypercomplex structure \(\{J_{k}\}_{k=1}^{3}\) on \(\mathsf{D}\), and an \(\operatorname{Im}\mathbb{H}\)-valued \(1\)-form \(\omega=\omega_{1}i+\omega_{2}j+\omega_{3}k\). It is also required that \(\mathsf{D}=\ker\omega\) and that the forms \(d\omega_{k}\circ J_{k}\) are positive definite Hermitian forms. Note that \(\omega\) is defined up to a conformal change with a positive function and conjugation with \(\operatorname{Sp}(1)\).

_Description of the automorphism groups of parabolic \(\mathsf{G}\)-structures._ The associated automorphism groups \(\operatorname{Aut}_{\mathsf{G}}(M)\) are then described as follows (for cases (2) and (3) compare [6, 23]):

\[\begin{cases}(1)\,\operatorname{Conf}(M)=\big\{\,\alpha\in\operatorname{Diff}\,(M)\mid\alpha^{*}g=u_{\alpha}\,g\,\big\},\\ (2)\,\operatorname{Aut}_{CR}(M,\{\omega,J\})=\big\{\,\alpha\in\operatorname{Diff}\,(M)\mid\alpha^{*}\omega=u_{\alpha}\,\omega,\\ \qquad\qquad\qquad\qquad\qquad\qquad\alpha_{*}\circ J=J\circ\alpha_{*}|_{\ker\omega}\,\big\},\\ (3)\,\operatorname{Aut}_{qc}(M,(\omega,\{J_{k}\}_{k=1}^{3}))=\big\{\,\alpha\in\operatorname{Diff}\,(M)\mid\\ \qquad\qquad\qquad\alpha^{*}\omega=u_{\alpha}\;a_{\alpha}\cdot\omega\cdot\overline{a_{\alpha}},\;\;\alpha_{*}\circ J_{k}=\sum_{j=1}^{3}a_{kj}J_{j}\circ\alpha_{*}|_{\ker\omega}\,\big\},\end{cases} \tag{3.1}\]

where \(u_{\alpha}\in C^{\infty}(M,\mathbb{R}^{+})\), \(a_{\alpha}\in C^{\infty}(M,\operatorname{Sp}(1))\), and the matrix \((a_{kj})\in C^{\infty}(M,\operatorname{SO}(3))\) is given by the conjugation action of \(a_{\alpha}\) on \(\operatorname{Im}\mathbb{H}\). We would like to emphasize that the definition of \(\operatorname{Aut}_{\mathsf{G}}(M)\) does not depend on the particular choice of data \(g\) or \(\omega\) in their conformal class. In fact, the choice of \(g\) or \(\omega\) amounts to choosing a representative geometry. The symmetries of the representative geometry define a subgroup of \(\operatorname{Aut}_{\mathsf{G}}(M)\).
These groups are the isometry group \(\operatorname{Iso}(M,g)\), respectively the pseudo-Hermitian groups \(\operatorname{Psh}_{\,CR}(M,\{\omega,J\})\) and \(\operatorname{Psh}_{\,qc}(M,(\omega,\{J_{k}\}_{k=1}^{3}))\):

\[\begin{cases}(1)\,\operatorname{Iso}(M,g)=\big\{\alpha\in\operatorname{Diff}\,(M)\mid\alpha^{*}g=g\,\big\},\\ (2)\,\operatorname{Psh}_{\,CR}(M,\omega)=\big\{\alpha\in\operatorname{Diff}\,(M)\mid\alpha^{*}\omega=\omega,\\ \qquad\qquad\qquad\qquad\qquad\alpha_{*}\circ J=J\circ\alpha_{*}|_{\ker\omega}\,\big\},\\ (3)\,\operatorname{Psh}_{\,qc}(M,\omega)=\big\{\alpha\in\operatorname{Diff}\,(M)\mid\alpha^{*}\omega=a_{\alpha}\cdot\omega\cdot\overline{a_{\alpha}},\\ \qquad\qquad\qquad\qquad\qquad\alpha_{*}\circ J_{k}=\sum_{j=1}^{3}a_{kj}J_{j}\circ\alpha_{*}|_{\ker\omega}\big\}.\end{cases} \tag{3.2}\]

Note that the groups in (3.2) vary considerably under a conformal change of \(g\), respectively \(\omega\), while the group \(\operatorname{Aut}_{\mathsf{G}}(M)\) is preserved.

### Conformal invariant cohomology class

The space \(C^{\infty}(M,\mathbb{R}^{+})\) of smooth positive functions on \(M\) is endowed with an action of \(\operatorname{Aut}_{\mathsf{G}}(M)\), where for \(\alpha\in\operatorname{Aut}_{\mathsf{G}}(M)\), \(f\in C^{\infty}(M,\mathbb{R}^{+})\), we have

\[(\alpha_{*}f)(x)=f(\alpha^{-1}x)\;\;(x\in M).\]

Thus \(C^{\infty}(M,\mathbb{R}^{+})\) is a smooth \(\operatorname{Aut}_{\mathsf{G}}(M)\)-module. To any such module there is an associated differentiable group cohomology \(H^{*}_{d}\) for the Lie group \(\operatorname{Aut}_{\mathsf{G}}(M)\) (see [6] for a detailed explanation). We explain now that the action of \(\operatorname{Aut}_{\mathsf{G}}(M)\) on \(C^{\infty}(M,\mathbb{R}^{+})\) gives rise to a natural cohomology class that carries geometric information about the dynamics of this action.

_Construction of the associated cohomology class._

**Proposition 3.1**.: _For any closed subgroup \(L\) of \(\operatorname{Aut}_{\mathsf{G}}(M)\) there is a natural cohomology class \([\lambda_{\mathsf{G}}]\in H^{1}_{d}(L,C^{\infty}(M,\mathbb{R}^{+}))\), which is associated to the parabolic \(\mathsf{G}\)-structure on \(M\)._

Proof.: Let \(\alpha\in\operatorname{Aut}_{\mathsf{G}}(M)\) be such that \(\alpha^{*}g=u_{\alpha}g\) for a Riemannian metric, \(\alpha^{*}\omega=u_{\alpha}\omega\) for a contact form, or \(\alpha^{*}\omega=u_{\alpha}\,a_{\alpha}\cdot\omega\cdot\overline{a_{\alpha}}\) for a quaternionic contact form, representing the parabolic \(\mathsf{G}\)-structure on \(M\). We construct \([\lambda_{\mathsf{G}}]\) for the \(qc\)-group \(\operatorname{Aut}_{qc}(M)\) in place of \(\operatorname{Aut}_{\mathsf{G}}(M)\). (But the proof holds also for \(\operatorname{Conf}(M)\) and \(\operatorname{Aut}_{CR}(M)\), see [6].) Let \(\alpha,\beta\in\operatorname{Aut}_{qc}(M)\). We write \(\alpha\,\beta\in\operatorname{Aut}_{qc}(M)\) for the composition of the two \(qc\)-transformations. We calculate

\[\begin{split}(\alpha\,\beta)^{*}\omega&=u_{\alpha\beta}\;a_{\alpha\beta}\cdot\omega\cdot\overline{a_{\alpha\beta}},\\ \beta^{*}\alpha^{*}\omega&=\beta^{*}(u_{\alpha}\,a_{\alpha}\cdot\omega\cdot\overline{a_{\alpha}})=(\beta^{*}u_{\alpha}\,u_{\beta})(\beta^{*}a_{\alpha}\cdot a_{\beta})\cdot\omega\cdot(\overline{\beta^{*}a_{\alpha}\cdot a_{\beta}}).\end{split}\]

(Note that \(\beta^{*}\,\overline{a_{\alpha}}\,(x)=\overline{a_{\alpha}}\,(\beta x)=\overline{a_{\alpha}(\beta x)}=\overline{\beta^{*}a_{\alpha}}\,(x)\).)
Taking the norm, we have

\[||(\alpha\beta)^{*}\omega||=u_{\alpha\beta}\,||\omega||=\beta^{*}u_{\alpha}\,u_{\beta}\,||\omega||\,.\]

Thus the smooth maps \(u_{\alpha},u_{\beta},u_{\alpha\beta}\in C^{\infty}(M,\mathbb{R}^{+})\) satisfy

\[u_{\alpha\beta}=\beta^{*}u_{\alpha}\,u_{\beta}\;\text{ on }M. \tag{3.3}\]

Define \(\lambda_{\mathsf{G}}=\lambda_{\mathsf{G},\omega}:\operatorname{Aut}_{\mathsf{G}}(M)\to C^{\infty}(M,\mathbb{R}^{+})\) to be

\[\lambda_{\mathsf{G}}(\alpha)=\alpha_{*}u_{\alpha}. \tag{3.4}\]

In particular, \(\lambda_{\mathsf{G}}(\alpha)(x)=u_{\alpha}(\alpha^{-1}x)\). We observe that \(\lambda_{\mathsf{G}}\) is a crossed homomorphism with respect to the representation of \(\operatorname{Aut}_{\mathsf{G}}(M)\) on \(C^{\infty}(M,\mathbb{R}^{+})\):

\[\begin{split}\lambda_{\mathsf{G}}(\alpha\beta)\,(x)&=(\alpha\beta)_{*}u_{\alpha\beta}\,(x)=u_{\alpha\beta}\,(\beta^{-1}\alpha^{-1}x)\\ &=\beta^{*}u_{\alpha}\,(\beta^{-1}\alpha^{-1}x)\;u_{\beta}\,(\beta^{-1}\alpha^{-1}x)\quad(\text{by }(3.3))\\ &=\lambda_{\mathsf{G}}(\alpha)\,(x)\;\alpha_{*}\lambda_{\mathsf{G}}(\beta)\,(x)=(\lambda_{\mathsf{G}}(\alpha)\;\alpha_{*}\lambda_{\mathsf{G}}(\beta))\,(x).\end{split}\]

Hence, \(\lambda_{\mathsf{G}}(\alpha\beta)=\lambda_{\mathsf{G}}(\alpha)\cdot\alpha_{*}\lambda_{\mathsf{G}}(\beta)\), that is, \(\lambda_{\mathsf{G}}\) is a crossed homomorphism and thus a one-cocycle for the differentiable cohomology of \(\operatorname{Aut}_{\mathsf{G}}(M)\) with coefficients in \(C^{\infty}(M,\mathbb{R}^{+})\). Let \([\lambda_{\mathsf{G}}]\in H^{1}_{d}(\operatorname{Aut}_{\mathsf{G}}(M),\,C^{\infty}(M,\mathbb{R}^{+}))\) denote its corresponding cohomology class.

We show that \([\lambda_{\mathsf{G}}]\) is a conformal \(\mathsf{G}\)-invariant. For \(qc\)-forms \(\omega,\omega^{\prime}\), suppose that \(\omega^{\prime}\) is \(qc\)-equivalent to \(\omega\), that is,

\[\omega^{\prime}=u\;b\cdot\omega\cdot\bar{b},\quad\text{ for }u\in C^{\infty}(M,\mathbb{R}^{+}),\;b\in C^{\infty}(M,\operatorname{Sp}(1))\;. \tag{3.5}\]

For \(\alpha\in\operatorname{Aut}_{\mathsf{G}}(M)\), write \(\alpha^{*}\omega^{\prime}=u^{\prime}_{\alpha}\;a^{\prime}_{\alpha}\cdot\omega^{\prime}\cdot\overline{a^{\prime}_{\alpha}}\). Thus

\[\lambda_{\mathsf{G},\omega^{\prime}}(\alpha)=\alpha_{*}u^{\prime}_{\alpha}\,.\]

Then \(\alpha^{*}\omega^{\prime}=u^{\prime}_{\alpha}\;a^{\prime}_{\alpha}\cdot(u\;b\cdot\omega\cdot\bar{b})\cdot\overline{a^{\prime}_{\alpha}}=(u^{\prime}_{\alpha}u)\;(a^{\prime}_{\alpha}\cdot b)\cdot\omega\cdot(\overline{a^{\prime}_{\alpha}\cdot b})\). Also

\[\alpha^{*}\omega^{\prime}=\alpha^{*}(u\;b\cdot\omega\cdot\bar{b})=(\alpha^{*}u\,u_{\alpha})\,(\alpha^{*}b\cdot a_{\alpha})\cdot\omega\cdot(\overline{\alpha^{*}b\cdot a_{\alpha}}).\]

Taking the norm \(||\alpha^{*}\omega^{\prime}||\), it follows that \(u^{\prime}_{\alpha}\,u=\alpha^{*}u\,u_{\alpha}\), that is,

\[u^{\prime}_{\alpha}\,(\alpha^{*}u)^{-1}\,u=u_{\alpha}.\]

This shows \(\alpha_{*}u^{\prime}_{\alpha}\cdot\delta^{0}(u)(\alpha)=\alpha_{*}u_{\alpha}\). Hence, \([\lambda_{\mathsf{G},\omega^{\prime}}]=[\lambda_{\mathsf{G},\omega}]\), and so the cohomology class \([\lambda_{\mathsf{G},\omega}]\) is a quaternionic conformal invariant.

Regarding the cohomology groups of \(L\) with coefficients in \(C^{\infty}(M,\mathbb{R}^{+})\) we have the following important general fact:

**Theorem 3.2** ([6, Theorem 10]).: _Suppose that \(L\) acts properly on \(M\).
Then_

\[H^{i}(L,C^{\infty}(M,\mathbb{R}^{+}))=\{0\},\;i\geq 1.\]

Recall that \(\operatorname{Psh}_{\mathsf{G}}(M)\) denotes the unique maximal subgroup of \(\operatorname{Aut}_{\mathsf{G}}(M)\) that acts properly on \(M\). We now prove:

**Theorem 3.3**.: _Let \(M\) be a parabolic \(\mathsf{G}\)-manifold of \(CR\)- or \(qc\)-type. Then there exists a representative form \(\omega\) for the parabolic \(\mathsf{G}\)-structure such that_

\[\operatorname{Psh}_{\mathsf{G}}(M)=\operatorname{Psh}_{\mathsf{G}}(M,\omega)\;.\]

Proof.: Put \(L=\operatorname{Psh}_{\mathsf{G}}(M)\) for the following. Since \(\operatorname{Psh}_{\mathsf{G}}(M)\) acts properly, Theorem 3.2 implies \(H^{1}(L,C^{\infty}(M,\mathbb{R}^{+}))=\{0\}\). Let \(\eta\) be a representative form for the parabolic \(\mathsf{G}\)-structure on \(M\). Since \(H^{1}(L,C^{\infty}(M,\mathbb{R}^{+}))=\{0\}\), in particular, \([\lambda_{\mathsf{G},\eta}]=0\), where \(\lambda_{\mathsf{G},\eta}\) is a one-cocycle for \(L\). For \(\alpha\in L\), we can write \(\alpha^{*}\eta=u_{\alpha}\,a_{\alpha}\cdot\eta\cdot\overline{a_{\alpha}}\). Thus the equation

\[\lambda_{\mathsf{G},\eta}=\delta^{0}v,\text{ for some }v\in C^{\infty}(M,\mathbb{R}^{+}),\]

means that \(\alpha_{*}u_{\alpha}=\alpha_{*}v\,v^{-1}\), for \(\alpha\in L\). Or equivalently, \(u_{\alpha}\cdot\alpha^{*}v=v\). Put \(\omega=v\,\eta\). Then it follows that

\[\alpha^{*}\omega=\alpha^{*}v\;u_{\alpha}\,a_{\alpha}\cdot\eta\cdot\overline{a_{\alpha}}=v\;a_{\alpha}\cdot\eta\cdot\overline{a_{\alpha}}=a_{\alpha}\cdot\omega\cdot\overline{a_{\alpha}}.\]

This shows that \(\alpha\in\operatorname{Psh}_{\mathsf{G}}(M,\omega)\). That is, \(\operatorname{Psh}_{\mathsf{G}}(M)\) is contained in \(\operatorname{Psh}_{\mathsf{G}}(M,\omega)\). Conversely, since \(\operatorname{Psh}_{\mathsf{G}}(M,\omega)\) acts properly by Lemma 3.4, it is contained in the maximal proper subgroup \(\operatorname{Psh}_{\mathsf{G}}(M)\). The theorem is proved.

Next we note:

**Lemma 3.4**.: _The subgroup \(\operatorname{Psh}_{qc}(M,\omega)\) of \(\operatorname{Aut}_{\mathsf{G}}(M)\) preserves an associated Riemannian metric \(g_{\omega}\). In particular, \(\operatorname{Psh}_{qc}(M,\omega)\) acts properly on \(M\)._

Proof.: Any \(\alpha\in\operatorname{Psh}_{qc}(M,\omega)\) preserves \(\omega\) up to \(\operatorname{Sp}(1)\)-conjugation and rotates the \(J_{k}\) by the corresponding \(\operatorname{SO}(3)\)-matrix \((a_{kj})\); hence it preserves the associated Riemannian metric \(g_{\omega}\) (see Section 5 for its explicit form). Thus \(\operatorname{Psh}_{qc}(M,\omega)\leq\operatorname{Iso}(M,g_{\omega})\), and so it acts properly on \(M\).

**Corollary 3.5**.: _If \(M\) is a compact aspherical parabolic \(\mathsf{G}\)-manifold, then \(\operatorname{Aut}_{\mathsf{G}}(M)\) acts properly on \(M\); in particular, \(\operatorname{Aut}_{\mathsf{G}}(M)\) is a compact Lie group._

Proof.: By Theorem A, the automorphism group of any compact aspherical parabolic \(\mathsf{G}\)-manifold is acting properly, and a group acting properly on a compact manifold is compact.

**Remark 3.6**.: _Note that, as \(\operatorname{Aut}_{CR}(\mathcal{N})=\mathcal{N}\rtimes(\operatorname{U}(n)\times\mathbb{R}^{+})\) is not acting properly on the Heisenberg group \(\mathcal{N}\), \([\lambda_{CR}]\neq 0\) in \(H^{1}_{d}(\operatorname{Aut}_{CR}(\mathcal{N}),C^{\infty}(\mathcal{N},\mathbb{R}^{+}))\). On the other hand, if \(X=\mathcal{N}-\{\mathbf{0}\}\), then \(\operatorname{Aut}_{CR}(X)=\operatorname{U}(n)\times\mathbb{R}^{+}\), which acts properly on \(X\). Then \([\lambda_{CR}]\in H^{1}_{d}(\operatorname{U}(n)\times\mathbb{R}^{+},C^{\infty}(X,\mathbb{R}^{+}))=\{0\}\). The quotient of \(X\) by an infinite discrete subgroup of \(\operatorname{U}(n)\times\mathbb{R}^{+}\) is an infra-Hopf manifold._

## 4. \(CR\)-manifolds \(X/\pi\) with virtually solvable group \(\pi\)

### Sasaki manifolds and Reeb flow

A Sasaki structure on \(M\) is equivalent to a _standard_ \(CR\)-structure.
For any \(CR\)-structure \((\omega,J)\), the Reeb field \(\xi\) is defined by the conditions

\[\omega(\xi)=1\text{ and }d\omega(\xi,\cdot)=0.\]

A \(CR\)-structure is called standard if the flow of the Reeb field is contained in the pseudo-Hermitian group \(\operatorname{Psh}_{CR}(M,\omega)\). Then the metric

\[g=\omega\cdot\omega+d\omega\circ J\]

is called the _Sasaki metric_. The group \(\operatorname{Psh}_{CR}(M,\omega)\) is thus contained in the isometry group of the Sasaki metric \(g\). Note further that the Reeb flow is contained in the center of \(\operatorname{Psh}_{CR}(M,\omega)\) (compare [7]).

_The Reeb field generates \(S^{1}\) on compact manifolds._ On a compact \(CR\)-manifold the Reeb flow always gives rise to a circle action.

**Proposition 4.1**.: _Let \((M,\omega,J)\) be a closed strictly pseudoconvex standard \(CR\)-manifold. Then the Reeb field generates an \(S^{1}\)-action on \(M\)._

Proof.: By the definition of a standard \(CR\)-structure, the Reeb field \(\xi\) generates a one-parameter group \(\mathsf{A}\) of \(CR\)-transformations for \((\ker\omega,J)\). Let \(\operatorname{Psh}\left(M,(\omega,J)\right)\) be the pseudo-Hermitian group. Since \(\operatorname{Psh}\left(M,(\omega,J)\right)\leq\operatorname{Iso}(M,g)\) for the Sasaki metric \(g=\omega\cdot\omega+d\omega\circ J\), \(\operatorname{Psh}\left(M,(\omega,J)\right)\) is compact. If \(\bar{\mathsf{A}}\) is the closure of \(\mathsf{A}\) in \(\operatorname{Psh}\left(M,(\omega,J)\right)\), then \(\bar{\mathsf{A}}\) is isomorphic to a \(k\)-torus \(T^{k}\). Let \(\mathcal{T}^{k}\) be the distribution of vector fields for \(T^{k}\) on \(M\). Consider the restriction \(\omega|_{\mathcal{T}^{k}}:\mathcal{T}^{k}\to\mathbb{R}\). Then \(\mathcal{T}^{k}=\langle\xi\rangle\oplus\ker\left(\omega|_{\mathcal{T}^{k}}\right)\). (For this, recall \(\omega(\xi)=1\), \(d\omega(\xi,\cdot)=0\).) Let \(\mathbf{u}\in\ker\left(\omega|_{\mathcal{T}^{k}}\right)\). Since \(\ker\omega\) is \(J\)-invariant, \(J\mathbf{u}\in\ker\omega\). As \(\mathbf{u}\in\mathcal{T}^{k}\), \(\mathbf{u}\) generates a one-parameter group of transformations \(\{\varphi_{t}\}_{t\in\mathbb{R}}\) holomorphic on \(\mathsf{D}=\ker\omega\). Putting \(p_{-t}=\varphi_{-t}p\), for a point \(p\in M\), note that \([\mathbf{u},J\mathbf{u}]=\lim_{t\to 0}(\varphi_{t_{*}}J\mathbf{u}_{p_{-t}}-J\mathbf{u}_{p})/t=J\lim_{t\to 0}(\varphi_{t_{*}}\mathbf{u}_{p_{-t}}-\mathbf{u}_{p})/t=J[\mathbf{u},\mathbf{u}]=\mathbf{0}\). Since \(d\omega\circ J\) is the positive definite Levi form on \(\ker\omega\), we obtain \(2d\omega(\mathbf{u},J\mathbf{u})=\mathbf{u}\omega(J\mathbf{u})-J\mathbf{u}\omega(\mathbf{u})-\omega([\mathbf{u},J\mathbf{u}])=0\). Thus \(\mathbf{u}=\mathbf{0}\). It follows that \(\mathsf{A}=S^{1}\).

### Aspherical Sasaki manifolds

Now let \(M=X/\pi\) be a \(2n+1\)-dimensional closed aspherical manifold with a standard \(CR\)-structure. Since \(M\) is compact, the Reeb field \(\xi\) generates an \(S^{1}\)-action on \(M\) (Proposition 4.1). Then it follows (cf. [7]) that

1. A Sasaki structure \((\omega,J)\) induces a Kahler structure \((\Omega,J)\) on the quotient \(W=X/\mathbb{R}\) (with projection \(p\); see Section 4.3 below) such that \(d\omega=p^{*}\Omega\), and \(J\) is an induced complex structure from \((\ker\omega,J)\) with \(p_{*}J=Jp_{*}\).

2.
The central group extension \(1\to\mathbb{R}\cap\pi\to\pi\xrightarrow{\phi}Q\to 1\) embeds into the pseudo-Hermitian group as in the diagram

\[\begin{CD}1@>>>\mathbb{R}@>>>\operatorname{Psh}\left(X\right)@>{\phi}>>\operatorname{Iso}_{h}(W)@>>>1\\ @.@AAA@AAA@AAA\\ 1@>>>\mathbb{R}\cap\pi@>>>\pi@>{\phi}>>Q@>>>1\end{CD} \tag{4.1}\]

where the quotient group \(Q=\pi/\,\mathbb{R}\cap\pi\) acts effectively and properly discontinuously on \(W\) as a group of Kahler isometries.

### \(S^{1}\)-action on a closed aspherical manifold

Let \(M=X/\pi\) be a closed aspherical manifold with an effective \(S^{1}\)-action. Recall (cf. Section 2, proof of Theorem 2) that, for any \(x\in M\), the orbit map \(\operatorname{ev}_{x}:S^{1}\to M\) defined by \(\operatorname{ev}_{x}(t)=t\cdot x\) induces an _injective_ homomorphism

\[\operatorname{ev}_{*}:\pi_{1}(S^{1},1)=\mathbb{Z}\to\pi_{1}(M,x)=\pi,\]

such that \(\operatorname{ev}_{*}(\mathbb{Z})\leq Z(\pi)\) is contained in the center of \(\pi\). This implies that the \(S^{1}\)-action lifts to a proper action of \(\mathbb{R}\) on \(X\), where \(\pi\) commutes with \(\mathbb{R}\). Since \(S^{1}=\mathbb{R}/\mathbb{Z}\) and the action is effective, we thus have an equivariant principal bundle on the universal cover \(X\) of the form

\[\left(\mathbb{Z}=\mathbb{R}\cap\pi,\ \mathbb{R}\right)\longrightarrow\left(\pi,\ X\right)\longrightarrow\left(Q=\pi\big/\mathbb{R}\cap\pi,\ W=X/\mathbb{R}\right)\,. \tag{4.2}\]

_Associated principal bundle._ We suppose now that \(Q\) admits a torsion-free subgroup of finite index. Thus we may choose a torsion-free normal subgroup \(Q^{\prime}\) of finite index in \(Q\), such that \(W/Q^{\prime}\) is a closed aspherical manifold. We put \(\pi^{\prime}=\phi^{-1}(Q^{\prime})\) for the preimage of \(Q^{\prime}\) in \(\pi\). Then the central group extension

\[1\to\mathbb{Z}=\mathbb{R}\cap\pi\to\pi^{\prime}\to Q^{\prime}\to 1 \tag{4.3}\]

gives rise to a principal circle bundle

\[S^{1}=\mathbb{R}/\mathbb{Z}\longrightarrow P=X/\pi^{\prime}\stackrel{{p}}{{\longrightarrow}}B=W/Q^{\prime}. \tag{4.4}\]

### Standard \(CR\)-structures on circle bundles

As in Section 4.2, we now suppose that the contractible manifold \(X\) has a standard \(CR\)-structure \((\omega,J)\). We require further that the Reeb flow generates the principal \(\mathbb{R}\)-action in (4.2). Also we assume that the \(CR\)-structure is preserved by \(\pi\), that is, \(\pi\leq\operatorname{Psh}\,(X,\omega)\). We further impose that \(\pi\cap\mathbb{R}=\mathbb{Z}\). This ensures that the principal action of \(\mathbb{R}\) on \(X\) descends to an _effective_ action of \(S^{1}=\mathbb{R}\big/\mathbb{Z}\) on \(X/\pi\), as in (4.4). Since \(\pi\leq\operatorname{Psh}\,(X,\omega)\), the principal \(S^{1}\)-bundle (4.4) with total space

\[P=X/\pi^{\prime}\]

inherits a compatible induced standard \(CR\)-structure \((\bar{\omega},J)\). The Reeb field \(\xi\) pushes down to the Reeb field \(\bar{\xi}\) on \(P\). Since the action of \(S^{1}=\mathbb{R}/\mathbb{Z}\) is effective, \(\bar{\xi}\) is also the fundamental vector field of the principal \(S^{1}\)-action on \(P\). This fact implies that the induced contact form \(\bar{\omega}\) is in fact a _connection form_ for the principal circle bundle \(P\) in (4.4).
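In the model case this is transparent. The following illustration is standard (it is not taken from this paper, and we use the left-invariant form \(\omega_{0}\) of the Heisenberg group recalled in the Introduction, with \(n=1\)): for the integral Heisenberg lattice \(\Gamma\leq\mathcal{N}\) (up to the standard identification of the matrix model with these coordinates), the form \(\omega_{0}\) descends to a connection form \(\bar{\omega}_{0}\) on the circle bundle

\[S^{1}=\mathbb{R}/\mathbb{Z}\longrightarrow\mathcal{N}/\Gamma\stackrel{{p}}{{\longrightarrow}}T^{1}_{\mathbb{C}}=\mathbb{C}/\mathbb{Z}^{2},\qquad d\bar{\omega}_{0}=p^{*}\left(dx\wedge dy\right),\]

so that the curvature descends to the standard Kahler form on the torus, whose cohomology class is integral.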
Furthermore, \(d\bar{\omega}\) is the curvature form of the connection \(\bar{\omega}\) and satisfies

\[d\bar{\omega}=p^{*}\bar{\Omega}\,,\]

for a closed form \(\bar{\Omega}\) on \(B\). Since it arises as the curvature of the connection form, it follows that the cohomology class of \(\bar{\Omega}\) is integral, and that

\[e(B)=\,[\,\bar{\Omega}\,]\,\in H^{2}(B,\mathbb{Z})\]

is the characteristic class of the bundle (4.4) (cf. [25] or [10, Section 2.2]). Since we also have a \(CR\)-structure, \(\bar{\Omega}\) is its associated Kahler form on \(B\).

#### Finite group action on \(P\)

Since the group \(\pi\) is contained in \(\operatorname{Psh}\,(X,\omega)\), \(\pi\) centralizes the \(\mathbb{R}\)-action on \(X\), because this action arises from the Reeb flow. Therefore \(\pi\) acts on \(P\) by bundle automorphisms with respect to (4.4). Furthermore, the action of \(\pi\) on \(P\) descends to the group

\[\mu=Q/Q^{\prime}\,,\]

which is acting on \(B=W/Q^{\prime}\) by Kahler isometries. That is, \(\mu\) preserves the Kahler form on \(B\), and, in particular, it fixes the Kahler class \(e(B)\). We further note:

**Lemma 4.2**.: _The holomorphic action of the finite quotient group \(\mu\) on \(B\) is effective, that is, the homomorphism \(\mu\to\operatorname{hol}(B)\) is injective._

Proof.: By our construction, \(Q\) normalizes \(Q^{\prime}\) and has an effective action on \(W\). Since \(W\to W/Q^{\prime}\) is a covering map, any element of \(Q\) that acts trivially on \(W/Q^{\prime}\) is a lift of the identity of \(W/Q^{\prime}\) with respect to this covering. Therefore, it must be in \(Q^{\prime}\), showing that \(\mu\to\operatorname{hol}(B)\) is injective.

### Biholomorphism between \(W\) and \(\mathbb{C}^{n}\)

By [5, Theorem 2.1], the aspherical Kahler manifold \(W/Q^{\prime}\) is biholomorphic to a complex euclidean space form \(\mathbb{C}^{n}/\rho(Q^{\prime})\), where \(\rho:Q^{\prime}\to E_{\mathbb{C}}(n)=\mathbb{C}^{n}\rtimes\operatorname{U}(n)\) is a faithful representation. Note that \(\rho(Q^{\prime})\) is a Bieberbach group. Therefore, \(\Lambda=\mathbb{C}^{n}\cap\rho(Q^{\prime})\) is a maximal free abelian normal subgroup of \(\rho(Q^{\prime})\) and of finite index in \(\rho(Q^{\prime})\). Let

\[H:\;W\to\mathbb{C}^{n}\]

be a corresponding biholomorphism equivariant with respect to \(\rho\). From this, we see that (going down to a finite index subgroup if necessary) we may choose \(Q^{\prime}\) such that \(\rho(Q^{\prime})=\Lambda\) is contained in \(\mathbb{C}^{n}\). Then \(W/Q^{\prime}\) is a complex torus biholomorphic to

\[T^{n}_{\mathbb{C}}=\mathbb{C}^{n}/\Lambda.\]

Moreover, we have a covering action

\[(Q^{H}=H\,Q\,H^{-1},\ \mathbb{C}^{n})\longrightarrow(Q^{H}/\Lambda,\ T^{n}_{\mathbb{C}}), \tag{4.5}\]

where \(Q^{H}\) induces a holomorphic action of \(Q^{H}/\Lambda\) on \(T^{n}_{\mathbb{C}}\). That is, there is a homomorphism \(\theta:Q^{H}/\Lambda\to\operatorname{hol}(T^{n}_{\mathbb{C}})\). Every biholomorphic map of a complex torus \(T^{n}_{\mathbb{C}}\) is induced by a complex affine transformation of the vector space \(\mathbb{C}^{n}\), see [20]. Thus it follows that \(Q^{H}\) is contained in \(E_{\mathbb{C}}(n)\). In particular, \(\rho\) extends to a homomorphism \(\rho:Q\to E_{\mathbb{C}}(n)\) inducing \(\theta\). Since \(\rho(Q)\) is a crystallographic group, we may, in addition, arrange things such that \(\rho(Q^{\prime})\) equals \(\Lambda\):

\[\rho(Q^{\prime})=\Lambda=\rho(Q)\cap\mathbb{C}^{n}.\]

Then, the finite group \(Q^{H}/\Lambda\) maps injectively to \(\operatorname{U}(n)\), so that we have

\[\mu=Q^{H}/\Lambda\leq\operatorname{U}(n). \tag{4.6}\]
_Compatible choice of associated linear Hermitian form._ Consider now the characteristic class \(e(B)\) of the circle bundle (4.4). By the biholomorphism \(B\to T^{n}_{\mathbb{C}}\), \(e(B)\) is transported to a class

\[e(T^{n}_{\mathbb{C}})=e(B)^{H}\in H^{2}(T^{n}_{\mathbb{C}},\mathbb{Z}).\]

Moreover, since \(e(B)\) is a Kahler class, \(e(T^{n}_{\mathbb{C}})\) is contained in the Kahler cone of \(T^{n}_{\mathbb{C}}\).

**Lemma 4.3**.: _There exists a positive definite linear Hermitian two-form \(\Omega_{\mathbb{C}^{n}}\) (of type \((1,1)\)) on \(\mathbb{C}^{n}\) such that its image \(\bar{\Omega}_{\mathbb{C}^{n}}\) on \(T^{n}_{\mathbb{C}}\) represents the characteristic class \(e(T^{n}_{\mathbb{C}})\). Then we have_

\[e(T^{n}_{\mathbb{C}})=[\,\bar{\Omega}_{\mathbb{C}^{n}}\,]\in H^{2}(T^{n}_{\mathbb{C}},\mathbb{Z})\,. \tag{4.7}\]

Proof.: Let \(g\) be a Kahler metric on \(T^{n}_{\mathbb{C}}\) with Kahler form \(\bar{\Omega}\) such that the Kahler class \(e(T^{n}_{\mathbb{C}})=[\bar{\Omega}]\). Since \(T^{n}_{\mathbb{C}}\) has trivial canonical bundle, its first Chern class \(c_{1}(T^{n}_{\mathbb{C}})\) vanishes. Let \(\Theta\) denote the Ricci form for \(g\). Then \(0=c_{1}(T^{n}_{\mathbb{C}})=[\Theta]\). In particular, \(\Theta\) is null-cohomologous. In this situation the Calabi-Yau existence theorem for Kahler metrics with prescribed Ricci curvature [9, Chapter 11] asserts that there exists a unique Ricci flat Kahler metric \(g^{\prime}\) on \(T^{n}_{\mathbb{C}}\) with Kahler form \(\bar{\Omega}^{\prime}\), satisfying \([\bar{\Omega}^{\prime}]=[\bar{\Omega}]=e(T^{n}_{\mathbb{C}})\). As a consequence of the Cheeger-Gromoll splitting and decomposition theorem for Ricci-nonnegatively curved manifolds [14, 17] it follows that the only Ricci flat Riemannian metrics on a torus (in fact on closed aspherical manifolds) are flat metrics. This shows that \(g^{\prime}\) is a flat Kahler metric on \(T^{n}_{\mathbb{C}}\) and thus invariant by the holomorphic action of \(T^{n}_{\mathbb{C}}\) on itself. Pulling back \(\bar{\Omega}^{\prime}\) to a form \(\Omega^{\prime}\) on \(\mathbb{C}^{n}\), \(\Omega_{\mathbb{C}^{n}}=\Omega^{\prime}\) is linear and it has the required properties.

Note that by this construction, \(\rho(Q)\) preserves the positive definite Hermitian form \(\Omega_{\mathbb{C}^{n}}\), and we may thus put \(\operatorname{U}(n)=\operatorname{U}(\Omega_{\mathbb{C}^{n}})\) on \(\mathbb{C}^{n}\).

_Equivalence of bundles over the complex torus._ By the identification \(W=\mathbb{C}^{n}\) via the biholomorphism \(H\) and our construction, the bundle (4.2) now equates to an equivariant principal bundle

\[(\mathbb{Z}=\mathbb{R}\cap\pi^{\prime},\ \mathbb{R})\longrightarrow(\pi^{\prime},\ X)\longrightarrow(\Lambda,\ \mathbb{C}^{n})\,. \tag{4.8}\]

Writing \(\Omega_{0}=\Omega_{\mathbb{C}^{n}}\) for the linear Kahler form of Lemma 4.3 and \(\bar{\Omega}_{0}\) for the induced Kahler form on \(T^{n}_{\mathbb{C}}\), the bundle (4.8) descends to the principal circle bundle

\[S^{1}=\mathbb{R}/\mathbb{Z}\longrightarrow P=X/\pi^{\prime}\stackrel{{p}}{{\longrightarrow}}T^{n}_{\mathbb{C}}\,, \tag{4.9}\]

and there is a connection form \(\eta\) on \(X\), compatible with \(J\), with \(d\eta=\Omega_{0}\) (where we identify \(\Omega_{0}\) with its pullback to \(X\)).

We show now that \(X\) with its \(CR\)-structure is equivalent to the Heisenberg group \(\mathcal{N}\) equipped with its standard left-invariant \(CR\)-structure. In fact, it follows that \(\operatorname{Psh}\,(X,\eta)=\mathcal{N}\rtimes\operatorname{U}(n)\), as is noted in [7, Proposition 6.1 (2)]. Here, the Reeb flow \(\mathbb{R}\) generates the center of the Heisenberg group \(\mathcal{N}\), and \(\mathcal{N}/\mathbb{R}=\mathbb{C}^{n}\). Note that, by construction, \(\pi^{\prime}\) is contained in \(\operatorname{Psh}\,(X,\eta,J)\).
Therefore \(\pi^{\prime}\) is a discrete uniform subgroup of

\[\operatorname{Psh}\,(X,\eta,J)=\operatorname{Psh}\,(\mathcal{N})=\mathcal{N}\rtimes\operatorname{U}(n).\]

Note further that \(\pi^{\prime}\) is a nilpotent group, since it is a central extension of \(\Lambda\). Therefore \(\pi^{\prime}\leq\mathcal{N}\) is a uniform lattice in \(\mathcal{N}\), by the Bieberbach theorem for nilmanifolds [1, 24]. In particular, the orbit map \(\mathcal{N}\to X\) gives an \(S^{1}\)-equivariant diffeomorphism

\[\mathcal{N}/\pi^{\prime}\to X/\pi^{\prime}\]

over the base space \(T^{n}_{\mathbb{C}}\). This shows that \(X/\pi^{\prime}\) is a Heisenberg nilmanifold.

### Locally homogeneous standard \(CR\)-structure on \(X/\pi\)

As we have seen, the group \(\mu=\pi/\pi^{\prime}\) acts on the total space \(P\) of the principal circle bundle (4.9) by bundle automorphisms, and the induced action of \(\mu\) on \(T^{n}_{\mathbb{C}}\) fixes the curvature form \(\bar{\Omega}_{0}\). For the final step in the proof of Theorem 3, we show now that there exists a connection form with curvature \(\bar{\Omega}_{0}\) that is fixed by the group \(\mu\):

**Proposition 4.4**.: _There exists a connection form \(\bar{\vartheta}\) for the principal circle bundle (4.9), such that_

1. \(d\bar{\vartheta}=p^{*}\,\bar{\Omega}_{0}\)_._
2. \(\mu\) _is contained in_ \(\operatorname{Psh}\,(P,\bar{\vartheta},J)\)_._

Proof.: Let \(\mathcal{A}(\bar{\Omega}_{0})=\{\bar{\eta}\in\Omega^{1}(P)\ |\ d\bar{\eta}=p^{*}\,\bar{\Omega}_{0}\}\) be the space of connection forms for the bundle (4.9) with curvature \(\bar{\Omega}_{0}\). Choose a base point \(\bar{\eta}_{0}\in\mathcal{A}(\bar{\Omega}_{0})\). Then \(\mathcal{A}(\bar{\Omega}_{0})=\bar{\eta}_{0}+\{p^{*}\tau\ |\ \tau\in Z^{1}(T^{n}_{\mathbb{C}})\}\) is an affine space over the vector space of closed one-forms \(Z^{1}(T^{n}_{\mathbb{C}})\). Since the curvature \(\bar{\Omega}_{0}\) is preserved by \(\mu\), for any \(g\in\mu\), \(\bar{\eta}\in\mathcal{A}(\bar{\Omega}_{0})\), it follows that \(g^{*}\bar{\eta}\in\mathcal{A}(\bar{\Omega}_{0})\). Therefore the group \(\mu\) acts on \(\mathcal{A}(\bar{\Omega}_{0})\). For \(g\in\mu\), define

\[c(g)=g^{*}\bar{\eta}_{0}-\bar{\eta}_{0}\,\in Z^{1}(T^{n}_{\mathbb{C}})\;.\]

Since \(c(g_{1}g_{2})=g^{*}_{2}g^{*}_{1}\bar{\eta}_{0}-\bar{\eta}_{0}=g^{*}_{2}c(g_{1})+c(g_{2})\), \(c:\mu\to Z^{1}(T^{n}_{\mathbb{C}})\) defines a one-cocycle for the natural representation of \(\mu\) on \(Z^{1}(T^{n}_{\mathbb{C}})\). The action of \(\mu\) on \(\mathcal{A}(\bar{\Omega}_{0})\) is by affine transformations, since, for \(\bar{\eta}\in\mathcal{A}(\bar{\Omega}_{0})\), we have

\[g^{*}\bar{\eta}=\bar{\eta}_{0}+\,g^{*}(\bar{\eta}-\bar{\eta}_{0})+c(g).\]

There is a corresponding cohomology class \([c]\in H^{1}(\mu,Z^{1}(T^{n}_{\mathbb{C}}))\) associated to the one-cocycle \(c\). Since \(\mu\) is finite, we have \(H^{1}(\mu,Z^{1}(T^{n}_{\mathbb{C}}))=\{0\}\), so \(c\) must be a coboundary. This implies that there exists \(\tau\in Z^{1}(T^{n}_{\mathbb{C}})\) such that \(c(g)=g^{*}\tau-\tau\).
As a consequence, \(\bar{\vartheta}=\bar{\eta}_{0}-\tau\) is a fixed point for the action of \(\mu\).

Pulling back the connection form \(\bar{\vartheta}\) to a connection form \(\vartheta\) on the covering bundle (4.8), we note:

**Corollary 4.5**.: _There exists on \(X\) a connection form \(\vartheta\) for the bundle (4.2), with \(d\vartheta=\Omega_{0}\), such that the corresponding \(CR\)-structure on \(X\) satisfies_

\[\pi\leq\operatorname{Psh}\,(X,\vartheta,J).\]

As above, the \(CR\)-structure \((X,\vartheta,J)\) is homogeneous and \(\operatorname{Psh}\,(X,\vartheta,J)=\operatorname{Psh}\,(\mathcal{N})=\mathcal{N}\rtimes\operatorname{U}(n)\). Therefore \(X/\pi\) is an infra-nilmanifold that is finitely covered by the nilmanifold \(X/\pi^{\prime}\). This concludes the proof of Theorem 3.

## 5. Indication of the proof of Theorem 4

### Quaternionic contact structures

Let \(X\) be a \(4n+3\)-dimensional smooth manifold. A _quaternionic contact structure_ is a codimension \(3\)-subbundle \(\mathsf{D}\) on \(X\) which satisfies \(\mathsf{D}\oplus[\mathsf{D},\mathsf{D}]=TX\). In addition the following conditions are required: There exists a non-degenerate \(\operatorname{Im}\mathbb{H}\)-valued \(1\)-form \(\omega=\omega_{1}i+\omega_{2}j+\omega_{3}k\), called a _quaternionic contact form_ on \(X\), such that

1. \(\ker\omega=\bigcap_{i=1}^{3}\ker\omega_{i}=\mathsf{D}\).
2. \(\omega\wedge\omega\wedge\omega\,\wedge\overbrace{d\omega\wedge\cdots\wedge d\omega}^{n}\neq 0\) on \(X\).

The non-degeneracy of \(d\omega_{k}\), \(k=1,2,3\), on \(\mathsf{D}\) defines the bundle of endomorphisms \(\{J_{1},J_{2},J_{3}\}\):

\[J_{k}=(d\omega_{j}|_{\mathsf{D}})^{-1}\circ(d\omega_{i}|_{\mathsf{D}}):\mathsf{D}\to\mathsf{D},\quad(i,j,k)\sim(1,2,3), \tag{5.1}\]

where \((i,j,k)\) runs through the cyclic permutations of \((1,2,3)\); these constitute a _hypercomplex structure_ on \(\mathsf{D}\). Note that the Levi form

\[d\omega_{i}\circ J_{i}:\mathsf{D}\times\mathsf{D}\to\mathbb{R},\ i=1,2,3\]

is a positive definite symmetric bilinear form on \(\mathsf{D}\). Then \((X,\mathsf{D},\omega,\{J_{i}\}_{i=1}^{3})\) is said to be a positive definite \(qc\)-manifold.

_Standard \(qc\)-manifolds and three-Sasaki manifolds._ For any \(qc\)-manifold \(M\), let \(\mathcal{T}=\{\xi_{1},\xi_{2},\xi_{3}\}\) denote the three-dimensional integrable distribution complementary to the codimension three subbundle \(\mathsf{D}\) on \(M\) determined by the \(qc\)-structure, called the _\(qck\)-distribution_ (see Section 3). If \(\mathcal{T}\) generates a subgroup of \(\operatorname{Psh}_{\,qc}(M)\), then \(M\) is called a _standard \(qc\)-manifold_.

**Proposition 5.1**.: _Let \((M,\omega,\{J_{i}\}_{i=1}^{3})\) be a closed strictly pseudoconvex standard \(qc\)-manifold. Then the Reeb fields generate a compact Lie group action of \(\operatorname{SU}(2)\) or the torus group \(T^{3}\) on \(M\)._

See [23, Proposition 4.5] for the proof. If the Reeb fields generate an \(\operatorname{SU}(2)\)-action then \(M\) is called a _three-Sasaki manifold_ (sometimes also called a quaternionic \(CR\)-manifold [3]).

### Aspherical standard \(qc\)-manifolds

Let \(X/\pi\) be a \(4n+3\)-dimensional positive definite closed aspherical _standard_ \(qc\)-manifold. Since \(X/\pi\) is aspherical, the \(qck\)-distribution \(\hat{\mathcal{T}}=\{\hat{\xi}_{1},\hat{\xi}_{2},\hat{\xi}_{3}\}\) on \(X/\pi\) generates a compact Lie group action by Proposition 5.1.
Since \(X/\pi\) is aspherical, the flow is given by a three-torus \(T^{3}\leq\operatorname{Psh}_{qc}(X/\pi)\). Moreover, the \(T^{3}\)-action lifts to a proper \(\mathbb{R}^{3}\)-action on \(X\) (as in Section 4.3 in the case of circle actions). This gives rise to an equivariant principal bundle over \(W=X/\mathbb{R}^{3}\): \[(\mathbb{R}^{3}\cap\pi,\mathbb{R}^{3})\to\,(\pi,X)\stackrel{{ p}}{{\longrightarrow}}(Q,W)\;.\] Then it follows (cf. [23]) that 1. The standard qc-structure \((\omega,\{J_{i}\}_{i=1}^{3})\) induces a hyper-Kähler structure \((\Omega,J=\{J_{i}\}_{i=1}^{3})\) on \(W\), such that \(d\omega=p^{*}\Omega\), where \(\Omega=\Omega_{1}i+\Omega_{2}j+\Omega_{3}k\) and \(J\) is an induced hypercomplex structure from \((\ker\omega,J)\) with \(p_{*}J=Jp_{*}\). Here \(\Omega_{i}\) is a Kähler form with respect to \(J_{i}\), \(i=1,2,3\). 2. The central group extension \(1\to\mathbb{R}^{3}\cap\pi\to\pi\stackrel{{\phi}}{{\longrightarrow}}Q\to 1\) embeds into the pseudo-Hermitian group of the \(qc\)-structure as in the diagram (5.2) \[\begin{CD}1@>{}>{}>\mathbb{R}^{3}@>{}>{}>\operatorname{Psh}_{qc}(X)@>{\phi}>{}>\operatorname{Iso}_{hK}(W)@>{}>{}>1\\ @V{}V{}V@V{}V{}V@V{}V{}V\\ 1@>{}>{}>\mathbb{R}^{3}\cap\pi@>{}>{}>\pi@>{\phi}>{}>Q@>{}>{}>1\end{CD}\] where the quotient group \(Q=\pi/(\mathbb{R}^{3}\cap\pi)\) acts effectively and properly discontinuously on \(W\), and as a group of hyper-Kähler isometries for \((\Omega,J)\). 3. \(X\) is \(qc\)-homogeneous if and only if \(W\) is hyper-Kähler homogeneous ([23, Proposition C]).

### Detailed explanation of (5.2)

Let \(\omega=\omega_{1}i+\omega_{2}j+\omega_{3}k\) be a lift of the \(qc\)-form of \(X/\pi\) to the universal covering (cf. Section 3). Then \(\omega\) is a \(\pi\)-invariant non-degenerate \(\operatorname{Im}\mathbb{H}\)-valued \(1\)-form such that \(\pi\leq\operatorname{Psh}_{qc}(X,(\omega,\{J_{i}\}_{i=1}^{3}))\) as in (3) of (3.2). Let \(\mathcal{T}=\{\xi_{1},\xi_{2},\xi_{3}\}\) be a lift of \(\hat{\mathcal{T}}\) which generates a proper \(\mathbb{R}^{3}\)-action on \(X\). Since \(\pi\) centralizes \(\mathbb{R}^{3}\), for every \(\gamma\in\pi\), it follows that \(\gamma_{*}\xi_{i}=\xi_{i}\) (\(i=1,2,3\)). Noting the action of \(\pi\) on \(\omega\) from (3) of (3.2), each element \(\gamma\in\pi\) satisfies \[\gamma^{*}\omega=\omega,\ \gamma_{*}J_{i}=J_{i}\gamma_{*}. \tag{5.3}\] Recall that the \(qc\)-structure of \((X,\mathbb{R}^{3},\{\omega_{i},J_{i}\}_{i=1}^{3},g_{\omega})\) induces a simply connected complete hyper-Kähler manifold \((W,\{\Omega_{i},J_{i}\}_{i=1}^{3},g)\) for which \(p^{*}\Omega=d\omega\), where \(\omega=\omega_{1}i+\omega_{2}j+\omega_{3}k\) is the \(\pi\)-invariant one-form on \(X\) and \(\Omega\) is a \(Q\)-invariant two-form on \(W\): \[\Omega=\Omega_{1}i+\Omega_{2}j+\Omega_{3}k. \tag{5.4}\] Furthermore, \(g_{\omega}=\sum_{i=1}^{3}\omega_{i}\cdot\omega_{i}+d\omega_{1}\circ J_{1}\) is a canonical Riemannian metric on \(X\) and \(g=\Omega_{i}\circ J_{i}\) (\(i=1,2,3\)) is a hyper-Kähler metric on \(W\). By the equation \(p^{*}\Omega=d\omega\) and (5.3), each \(\alpha\in Q\) preserves the hyper-Kähler structure on \(W\): \[\alpha^{*}\Omega=\Omega,\ \alpha_{*}J_{i}=J_{i}\alpha_{*}. \tag{5.5}\] In particular, \(Q\) acts as Kähler isometries of \((W,(\Omega_{i},J_{i}))\).

#### 5.2.1. Associated principal three-torus bundle

We suppose now that \(Q\) admits a torsion-free subgroup of finite index.
Thus we may choose a torsion-free normal subgroup \(Q^{\prime}\) of finite index in \(Q\), such that \(W/Q^{\prime}\) is a closed aspherical manifold. We put \(\pi^{\prime}=\phi^{-1}(Q^{\prime})\) for the preimage of \(Q^{\prime}\) in \(\pi\). Then the central group extension \[\begin{CD}1@>{}>{}>\mathbb{Z}^{3}=\mathbb{R}^{3}\cap\pi@>{}>{}>\pi^{\prime}@>{}>{}>Q^{\prime}@>{}>{}>1\end{CD} \tag{5.6}\] gives rise to a principal torus bundle \[T^{3}=\mathbb{R}^{3}/\mathbb{Z}^{3}\longrightarrow P=X/\pi^{\prime}\stackrel{{p}}{{\longrightarrow}}B=W/Q^{\prime}. \tag{5.7}\]

### Proof of Lemma D

Let \(g\) be hyper-Kähler with hyper-Kähler form \(\Omega\) (compare (5.4)) on \(M\). Since \(g\) is hyper-Kähler, its holonomy group is contained in \(\operatorname{Sp}(n)\). In particular, it is contained in \(\operatorname{SU}(2n)\), which implies that \(g\) is a Ricci-flat Kähler metric on \(M\) (e.g. [9, Proposition 10.29]). In view of the fact that \(M\) is aspherical, all Ricci-flat metrics are flat [17]. We have therefore established that \(g\) is a flat hyper-Kähler metric. In particular, its universal covering space must be isometric to \(\mathbb{H}^{n}\) with linear Kähler structure \(\Omega\).

### Hypercomplex isometric isomorphism

Applying Lemma D from the introduction to the hyper-Kähler manifold \(B=W/Q^{\prime}\) shows that \(B=W/Q^{\prime}\), with the hyper-Kähler structure (5.4), is hyper-holomorphically isometric to a flat hyper-Kähler torus \(T^{n}_{\mathbb{H}}=\mathbb{H}^{n}/\Lambda\), where \(\Lambda\leq\mathbb{H}^{n}\) is a lattice.

### Conclusion of the proof

Using basic methods developed in [23], the remaining part of the proof of Theorem 4 follows a similar but simplified procedure to that of Section 4. The key step is to establish that the universal covering \(X\) with its \(qc\)-structure is homogeneous and arises from a quaternionic Heisenberg group, that is, \(X\) is \(qc\)-isometric to a \(qc\)-Heisenberg group with its standard \(qc\)-structure. The details shall be presented elsewhere.
2309.04015
Optimal Transport with Tempered Exponential Measures
In the field of optimal transport, two prominent subfields face each other: (i) unregularized optimal transport, "à-la-Kantorovich", which leads to extremely sparse plans but with algorithms that scale poorly, and (ii) entropic-regularized optimal transport, "à-la-Sinkhorn-Cuturi", which gets near-linear approximation algorithms but leads to maximally un-sparse plans. In this paper, we show that an extension of the latter to tempered exponential measures, a generalization of exponential families with indirect measure normalization, gets to a very convenient middle ground, with both very fast approximation algorithms and sparsity, which is under control up to sparsity patterns. In addition, our formulation fits naturally in the unbalanced optimal transport problem setting.
Ehsan Amid, Frank Nielsen, Richard Nock, Manfred K. Warmuth
2023-09-07T20:53:23Z
http://arxiv.org/abs/2309.04015v3
# Optimal Transport with Tempered Exponential Measures

###### Abstract

In the field of optimal transport, two prominent subfields face each other: (i) unregularized optimal transport, "à-la-Kantorovich", which leads to extremely sparse plans but with algorithms that scale poorly, and (ii) entropic-regularized optimal transport, "à-la-Sinkhorn-Cuturi", which gets near-linear approximation algorithms but leads to maximally un-sparse plans. In this paper, we show that a generalization of the latter to tempered exponential measures, a generalization of exponential families with indirect measure normalization, gets to a very convenient middle ground, with both very fast approximation algorithms and sparsity, which is under control up to sparsity patterns. In addition, it fits naturally in the unbalanced optimal transport problem setting as well.

## 1 Introduction

Most loss functions used in machine learning (ML) can be related, directly or indirectly, to a comparison of positive measures (in general, probability distributions). Historically, two broad families of distortions were mainly used: \(f\)-divergences (Ali and Silvey, 1966; Csiszar, 1963) and Bregman divergences (Bregman, 1967). Among other properties, the former are appealing because they encapsulate the notion of monotonicity of information (Amari, 2016), while the latter are convenient because they axiomatize the expectation as a maximum likelihood estimator (Banerjee et al., 2004). Those properties, however, put constraints on the distributions, either on their support for the former or their analytical form for the latter. A third class of distortion measures has progressively emerged that alleviates those constraints, with the appealing property of meeting distance axioms: Optimal Transport distances (Peyre and Cuturi, 2019; Villani, 2009). Those can be interesting in wide ML fields (Peyre and Cuturi, 2019), but they suffer from poor scalability. A trick of balancing the metric cost with an entropic regularizer (Cuturi, 2013) substantially improves scalability to near-optimality but blurs the frontiers with other distortion measures (Cuturi, 2013; Muzellec et al., 2017). Most importantly, the structure of the unregularized OT plan is substantially altered through regularization: its sparsity is reduced by a factor \(\Omega(n)\), \(n\) being the dimension of the marginals (we consider discrete optimal transport). Sparsity is an important topic in optimal transport: both unregularized and entropic-regularized OT (EOT) plans are extremal in the sparsity scale, which does not necessarily fit observed patterns (Peyre and Cuturi, 2019). Finally and most importantly, optimal transport, regularized or not, does not require normalized measures; in fact, it can be extended to the unbalanced problem where marginals' total masses do not even match (Janati et al., 2020). In that last, very general case, the problem is usually cast with approximate marginal constraints and without any constraint whatsoever on the transport plan's total mass. In this context, our paper introduces OT on tempered exponential measures (TEMs, a generalization of exponential families, with applications in clustering (Amid et al., 2023) and boosting (Nock et al., 2023)), with a generalization of the EOT.
Notable structural properties of the problem include training as fast as Sinkhorn balancing _and_ with guarantees on the solution's sparsity, also including the possibility of unbalanced optimal transport _but_ with tight control over total masses via their _co-densities_, distributions that are used to indirectly normalize TEMs (see Figure 1). We characterize sparsity up to sparsity patterns in the optimal solution and show that sparsity with TEMs can be interpreted as balancing the classical OT cost with an interaction term interpretable in the popular gravity model for spatial interactions (Haynes and Fotheringham, 1984). Interestingly, this interpretation cannot hold anymore for the particular case of exponential families and thus breaks for EOT. To maximize readability, all proofs are deferred to an appendix.

Figure 1: In classical optimal transport (OT, left), regularized or not, marginals and the OT plan sought are in the probability simplex; the optimal solution solely depends on the metric properties of the supports. Entropic regularization balances the metric cost with an entropic cost, and the optimal solution has remarkable properties related to exponential families. In this paper (right), we lift the whole setting to families of measures generalizing exponential families: tempered exponential measures (TEMs). Specific properties that appear include unbalancedness and sparsity of the optimal solution (see text).

## 2 Definitions

**Optimal transport in the simplex.** In classical discrete optimal transport (OT), we are given a cost matrix \(\mathbf{M}\in\mathbb{R}^{n\times n}\) (\(n>1\)) and two probability vectors, a row \(\boldsymbol{r}\) and a column \(\boldsymbol{c}\), in the simplex \(\Delta_{n}\doteq\{\boldsymbol{p}\in\mathbb{R}^{n}:\boldsymbol{p}\geq\boldsymbol{0}\wedge\boldsymbol{1}^{\top}\boldsymbol{p}=1\}\). Usually, \(\mathbf{M}\) satisfies the axioms of a distance, though only non-negativity is really important for all the results that we state. The OT problem seeks to find \(d_{\mathbf{M}}(\boldsymbol{r},\boldsymbol{c})\doteq\min_{\mathbf{P}\in U_{n}(\boldsymbol{r},\boldsymbol{c})}\langle\mathbf{P},\mathbf{M}\rangle\), where \(U_{n}(\boldsymbol{r},\boldsymbol{c})\doteq\{\mathbf{P}\in\mathbb{R}_{+}^{n\times n}|\,\mathbf{P}\boldsymbol{1}_{n}=\boldsymbol{r},\,\mathbf{P}^{\top}\boldsymbol{1}_{n}=\boldsymbol{c}\}\). In the _entropic-regularized OT_ problem (Cuturi, 2013), we rather seek \[d_{\mathbf{M}}^{\lambda}(\boldsymbol{r},\boldsymbol{c})\doteq\min_{\mathbf{P}\in U_{n}(\boldsymbol{r},\boldsymbol{c})}\langle\mathbf{P},\mathbf{M}\rangle+\frac{1}{\lambda}\cdot\langle\mathbf{P},\log\mathbf{P}\rangle\,,\lambda>0 \tag{1}\] where \(\langle\cdot,\cdot\rangle\) stands for the Frobenius dot-product. Any discrete distribution is an exponential family (Amari, 2016), but the OT plan solution to (1), say \(\mathbf{P}^{*}\), has a special form. Denote \(\boldsymbol{\mu},\boldsymbol{\xi}\in\mathbb{R}^{n}\) the vectors of dual variables, corresponding to the row and column (respectively) marginalization constraints in (1). The support of \(\mathbf{P}^{*}\) is \([n]^{2}\), where \([n]\doteq\{1,2,...,n\}\). We can express \(\mathbf{P}^{*}\) as \[P_{ij}^{*}=\exp(\langle\boldsymbol{\Theta},\mathbf{E}_{ij}\rangle-G(\boldsymbol{\Theta})), \tag{2}\] with \(\mathbf{E}_{ij}\) the matrix with general entry \((\delta_{ik}\delta_{jl})_{kl}\) ("\(\delta\)" being the Kronecker symbol).
Let \(\mathbf{S}\doteq\mathbf{M}-\boldsymbol{\mu}\boldsymbol{1}^{\top}-\boldsymbol{1}\boldsymbol{\xi}^{\top}\), which is a strictly positive matrix. In the unregularized case, \(\mathbf{S}\) encodes the slack of the constraints over the dual variables (Peyre and Cuturi, 2019, Section 2.5). It follows from (Cuturi, 2013) that the natural parameter of the exponential family is defined from those slack variables: \[\boldsymbol{\Theta}=-\lambda\cdot\mathbf{S},\] while the cumulant or log-partition function \(G\) in (2) is, in fact, \(0\) because normalization is implicitly ensured in \(U_{n}(\boldsymbol{r},\boldsymbol{c})\) (otherwise, the cumulant would depend on the Lagrange multiplier of the normalization constraint).

**Tempered exponential measures.** Any exponential family is a probability distribution that maximizes Shannon's entropy subject to a constraint on its expectation (Amari, 2016). A _tempered exponential measure_ (TEM) adopts a similar axiomatization _but_ via a generalization of Shannon's entropy (Tsallis entropy) and normalization put not on the TEM itself but on a so-called _co-distribution_ (Amid et al., 2023). This last constraint is a fundamental difference from previous generalizations of exponential families, \(q\)-exponential families, and deformed exponential families (Amari, 2016). Compared to those, TEMs also have the analytical advantage of getting a closed-form solution for the cumulant, a key ML function. A TEM has the general form (with \([z]_{+}\doteq\max\{0,z\}\)): \[\tilde{p}(\boldsymbol{x})\doteq\frac{\exp_{t}(\langle\boldsymbol{\theta},\boldsymbol{\varphi}(\boldsymbol{x})\rangle)}{\exp_{t}(G_{t}(\boldsymbol{\theta}))},\quad\exp_{t}(z)\doteq[1+(1-t)z]_{+}^{\frac{1}{1-t}}\,,\] where \(G_{t}\) is the cumulant and \(\boldsymbol{\theta}\) denotes the natural parameter. The inverse of \(\exp_{t}\) is \(\log_{t}(z)\doteq(z^{1-t}-1)\left/(1-t\right)\), both being continuous generalizations of \(\exp\) and \(\log\), recovered at \(t=1\). Both functions keep their \(t=1\) convexity / concavity properties for \(t\geq 0\). The tilde notation above \(p\) indicates that normalization does not occur on the TEM, but on a co-density defined as \[p \doteq \tilde{p}^{2-t}\quad\left(=\tilde{p}^{1/t^{*}},\text{ with }t^{*}\doteq 1/(2-t)\right). \tag{3}\]

**Remark 1**.: _For a given vector \(\tilde{\mathbf{p}}\) (or a matrix \(\tilde{\mathbf{P}}\)) with the tilde notation, whenever convenient, we will use the convention \(\mathbf{p}=\tilde{\mathbf{p}}^{1/t^{*}}\) (correspondingly, \(\mathbf{P}=\tilde{\mathbf{P}}^{1/t^{*}}\), the exponent being coordinate-wise) whenever the tilde sign is removed._

Hence, a TEM satisfies the indirect normalization \(\int\tilde{p}^{2-t}\mathrm{d}\xi=\int p\mathrm{d}\xi=1\).

**Remark 2**.: _In this paper, we assume \(t\in[0,1]\), though some of our results are valid for a broader range (discussed in context)._

In the same way that the KL divergence is the canonical divergence for exponential families (Amari and Nagaoka, 2000), a generalization of it plays this role for TEMs. Given two non-negative vectors \(\tilde{\mathbf{u}},\tilde{\mathbf{v}}\in\mathbb{R}^{n}\), we define the generalized tempered relative entropy as (Amid et al., 2019) \[D_{t}(\tilde{\mathbf{u}}\|\tilde{\mathbf{v}})\doteq\!\!\!\!\!\!\sum_{i\in[n]}\!\!\!
\tilde{u}_{i}\big{(}\log_{t}\tilde{u}_{i}-\log_{t}\tilde{v}_{i}\big{)}-\log_{t-1}\tilde{u}_{i}+\log_{t-1}\tilde{v}_{i}.\] Just like the KL divergence (\(t\to 1\)), the tempered relative entropy is a Bregman divergence, induced by the generator \(\varphi_{t}(z)\doteq z\log_{t}z-\log_{t-1}(z)\), which is convex for \(t\in\mathbb{R}\). We also have \(\varphi_{t}^{\prime}(z)=\log_{t}(z)\). We define the following extension of the probability simplex \(\Delta_{n}\) in \(\mathbb{R}^{n}\).

**Definition 1**.: _The co-simplex of \(\mathbb{R}^{n}\), \(\tilde{\Delta}_{n}\), is defined as \(\tilde{\Delta}_{n}\doteq\{\tilde{\mathbf{p}}\in\mathbb{R}^{n}:\tilde{\mathbf{p}}\geq\mathbf{0}\wedge\mathbf{1}^{\top}\tilde{\mathbf{p}}^{1/t^{*}}=1\}\)._

Note that \(\tilde{\mathbf{p}}^{1/t^{*}}\doteq\mathbf{p}\in\Delta_{n}\) iff \(\tilde{\mathbf{p}}\in\tilde{\Delta}_{n}\) and \(\tilde{\Delta}_{n}\to\Delta_{n}\) when \(t\to 1\). Similarly, given \(\tilde{\mathbf{r}},\tilde{\mathbf{c}}\in\tilde{\Delta}_{n}\), we define their corresponding co-polytope in \(\mathbb{R}^{n\times n}_{+}\).

**Definition 2**.: _The co-polyhedral set of \(n\times n\) non-negative matrices with co-marginals \(\tilde{\mathbf{r}},\tilde{\mathbf{c}}\in\tilde{\Delta}_{n}\) is defined as \(\tilde{U}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\doteq\{\tilde{\mathbf{P}}\in\mathbb{R}^{n\times n}_{+}|\,\tilde{\mathbf{P}}^{1/t^{*}}\mathbf{1}=\tilde{\mathbf{r}}^{1/t^{*}},\,\tilde{\mathbf{P}}^{1/t^{*}\top}\mathbf{1}=\tilde{\mathbf{c}}^{1/t^{*}}\}\)._

Likewise, \(\tilde{U}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\to U_{n}(\mathbf{r},\mathbf{c})\) (the transport polytope) in the limit \(t\to 1\). More importantly, using our notation convention, \[\mathbf{P}\in U_{n}(\mathbf{r},\mathbf{c})\text{ iff }\tilde{\mathbf{P}}\in\tilde{U}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}}).\]

## 3 Related work

From an ML standpoint, there are two key components to optimal transport (OT): the problem structure and its solving algorithms. While historically focused on the former (Monge, 1781; Kantorovich, 1958), the field then became substantially "algorithm-aware", indirectly first via linear programming (Dantzig, 1949) and then specifically because of its wide applicability in ML (Cuturi, 2013). The entropic-regularized OT (EOT) mixes metric and entropic terms in the cost function but can also be viewed as an approximation of OT in a Kullback-Leibler ball centered at the independence plan, which is, in fact, a metric (Cuturi, 2013). The resolution of the EOT problem can be obtained via Sinkhorn's algorithm (Sinkhorn and Knopp, 1967; Franklin and Lorenz, 1989; Knight, 2008) (see Algorithm 1), which corresponds to iterative Bregman projections onto the affine constraint sets (one for the rows and another for the columns). The algorithm requires matrix-vector multiplication and can be easily implemented in a few lines of code, making it ideal for a wide range of ML applications. However, alternative implementations of the algorithm via the dual formulation prove to be more numerically stable and better suited for high-dimensional settings (Peyre and Cuturi, 2019). The structure of the solution - the transportation plan - is also important, and some features have become prominent in ML, like the sparsity of the solution (Liu et al., 2023).
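Since the algorithms developed later in the paper reduce to it, a minimal sketch of Sinkhorn balancing for problem (1) may help fix ideas (an illustrative implementation under our own naming conventions, not the paper's Algorithm 1 verbatim):

```python
import numpy as np

def sinkhorn(M, r, c, lam, n_iter=1000, tol=1e-10):
    """Sinkhorn balancing for the entropic-regularized OT problem (1).

    Alternately rescales rows and columns of the seed K = exp(-lam*M)
    until the marginals r (rows) and c (columns) are matched.
    """
    K = np.exp(-lam * M)                  # positive seed matrix
    u = np.ones_like(r)
    for _ in range(n_iter):
        v = c / (K.T @ u)                 # column scaling
        u_new = r / (K @ v)               # row scaling
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return u[:, None] * K * v[None, :]    # P = diag(u) K diag(v)

rng = np.random.default_rng(0)
n = 8
M = rng.uniform(size=(n, n))
r, c = rng.dirichlet(np.ones(n)), rng.dirichlet(np.ones(n))
P = sinkhorn(M, r, c, lam=10.0)
print(np.allclose(P.sum(axis=1), r), np.allclose(P.sum(axis=0), c))
```

Each iteration costs two matrix-vector products, which is the source of the near-linear scalability mentioned above.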
Sinkhorn iteration can be fine-tuned to lead to near-optimal complexity (Altschuler et al., 2017), but entropic regularization suffers a substantial structural downside: the solution is maximally un-sparse, which contrasts with the sparsity of the unregularized solution (Peyre and Cuturi, 2019). Sparsity is a modern instantiation of ML's early constraint on a model's simplicity, otherwise known as Ockham's razor (Blumer et al., 1987)1. It has been known for a long time that "extreme" simplicity constraints lead to intractability for linear programming (Karp, 1972), so sparsity in OT is desirable - and not just for the sake of Ockham's razor (Blondel et al., 2018) - but it is non-trivial, and various notions of tractable sparsity can be sought, from a general objective (Blondel et al., 2018; Muzellec et al., 2017) down to _ex-ante_ node specifics like transport obstruction (Dessein et al., 2018) or limiting the transport degree (Liu et al., 2023). Sparsity makes it convenient to train from general optimizers (Liu et al., 2023). This comes, however, at the expense of losing an appealing probabilistic structure of the EOT solution (a discrete exponential family with very specific features), and eventually also loses the near-optimal algorithmic convenience that fine-tuning Sinkhorn offers for training (Altschuler et al., 2017).

Footnote 1: _Numquam ponenda est pluralitas sine necessitate_, "plurality should never be imposed without necessity", William of Ockham, XIV\({}^{th}\) century.

Taking EOT as a starting point, two different directions can be sought for generalization. The first consists in replacing the entropic term with a more general one, such as Tsallis entropy (still on the simplex), which was introduced in (Muzellec et al., 2017); the second consists in alleviating the condition of identical marginal masses, which is touched upon in (Janati et al., 2020) and was initially proposed without regularization by (Benamou, 2003). In work predating the focus on optimal transport (Helmbold and Warmuth, 2009), the same relative entropy regularized optimal transport problem was used to develop online algorithms for learning permutations that predict close to the best permutation chosen in hindsight. We expand on the connection to this earlier work in the conclusion section.

## 4 Beyond Sinkhorn Distances with TEMs

**OT costs with TEMs.** Since tempered exponential measures involve two distinct sets (the probability simplex and the co-simplex), we can naturally define two unregularized OT objectives given a cost matrix \(\mathbf{M}\in\mathbb{R}^{n\times n}\). The first is the classical OT cost; we denote it as the _expected cost_, \[d^{t}_{\mathbf{M}}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\doteq\min_{\tilde{\mathbf{P}}\in\tilde{U}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})}\langle\mathbf{P},\mathbf{M}\rangle \tag{4}\] (with our notations, note that the constraint is equivalent to \(\mathbf{P}\in U_{n}(\mathbf{r},\mathbf{c})\)). Instead of embedding the cost matrix on the probability simplex, we can put it directly on the co-simplex, which leads to the _measured cost_: \[\tilde{d}^{t}_{\mathbf{M}}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\doteq\min_{\tilde{\mathbf{P}}\in\tilde{U}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})}\langle\tilde{\mathbf{P}},\mathbf{M}\rangle\,. \tag{5}\] \(d^{t}_{\mathbf{M}}\) is a distance if \(\mathbf{M}\) is a metric matrix. \(\tilde{d}^{t}_{\mathbf{M}}\) is trivially non-negative, symmetric, and meets the identity of indiscernibles.
However, it seems to only satisfy a slightly different version of the triangle inequality, which converges to the triangle inequality as \(t\to 1\).

**Proposition 1**.: _If \(\mathbf{M}\) is a distance matrix and \(t\leq 1\),_ \[(\tilde{d}^{t}_{\mathbf{M}}(\tilde{\mathbf{x}},\tilde{\mathbf{z}}))^{2-t}\leq M^{1-t}\cdot\left(\tilde{d}^{t}_{\mathbf{M}}(\tilde{\mathbf{x}},\tilde{\mathbf{y}})+\tilde{d}^{t}_{\mathbf{M}}(\tilde{\mathbf{y}},\tilde{\mathbf{z}})\right),\forall\tilde{\mathbf{x}},\tilde{\mathbf{y}},\tilde{\mathbf{z}}\in\tilde{\Delta}_{n},\] _where \(M\doteq\sum_{ij}M_{ij}\)._

The factor \(M^{1-t}\) is needed to prevent vacuity of the inequality: scaling a cost matrix by a constant \(\kappa>0\) does not change the OT optimal plan, but scales the OT cost by \(\kappa\); in this case, the LHS scales by \(\kappa^{2-t}\) and the RHS scales by \(\kappa^{1-t}\cdot\kappa=\kappa^{2-t}\) as well. Note that it can be the case that \(M<1\), so the RHS can be smaller than the triangle inequality's counterpart - yet we would not necessarily get an inequality tighter than the triangle inequality because, in this case, it is easy to show that the LHS would also be smaller than the triangle inequality's counterpart.

**OT costs in a ball.** This problem is an intermediary that grounds a particular metric structure of EOT. This constrained problem seeks the optimal transport plan in an information ball - a KL ball - centered at the independence plan. Using our generalized tempered relative entropy, this set can be generalized as: \[\tilde{U}^{\varepsilon}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\doteq\left\{\tilde{\mathbf{P}}\in\tilde{U}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\ \middle|\ D_{t}\big{(}\tilde{\mathbf{P}}\|\tilde{\mathbf{r}}\tilde{\mathbf{c}}^{\top}\big{)}\leq\varepsilon\right\}, \tag{6}\] where \(\varepsilon\) is the radius of this ball. It turns out that when \(t=1\), minimizing the OT cost subject to being in this ball also yields a distance, called a Sinkhorn distance - if, of course, \(\mathbf{M}\) is a metric matrix (Cuturi, 2013). For a more general \(t\), we can first remark that \[D_{t}\big{(}\tilde{\mathbf{P}}\|\tilde{\mathbf{r}}\tilde{\mathbf{c}}^{\top}\big{)}\leq\frac{1}{1-t},\forall\tilde{\mathbf{P}}\in\tilde{\Delta}_{n\times n},\forall\tilde{\mathbf{r}},\tilde{\mathbf{c}}\in\tilde{\Delta}_{n},\] so that we can consider that \[\varepsilon < \frac{1}{1-t} \tag{7}\] for the ball constraint not to be vacuous. \(\tilde{\mathbf{r}}\tilde{\mathbf{c}}^{\top}\in\tilde{U}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\) is the _independence table_ with co-marginals \(\tilde{\mathbf{r}}\) and \(\tilde{\mathbf{c}}\). When \(\varepsilon\to\infty\), we have \(\tilde{U}^{\varepsilon}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\to\tilde{U}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\). When \(t\to 1\)
**Proposition 2**.: _For any \(\tilde{\mathbf{P}},\tilde{\mathbf{Q}}\in\tilde{U}_{n}^{\varepsilon}(\tilde{ \mathbf{r}},\tilde{\mathbf{c}})\) and any \(t\in\mathbb{R}\),_ \[(\beta\,\tilde{\mathbf{P}}^{1/t^{*}}+(1-\beta)\,\tilde{\mathbf{Q}}^{1/t^{*}})^ {t^{*}}\in\tilde{U}_{n}^{\varepsilon}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\,,\forall \beta\in[0,1].\] Regularized OT costsin the case of entropic regularization, the OT cost is replaced by \(\langle\mathbf{P},\mathbf{M}\rangle+(1/\lambda)\cdot D_{1}\big{(}\mathbf{P} \|\mathbf{r}\mathbf{c}^{\top}\big{)}\). In the case of TEMs, we can formulate two types of regularized OT costs generalizing this expression, the _regularized expected cost_ \[d_{\mathbf{M}}^{t,\lambda}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\doteq\min_{\tilde{ \mathbf{P}}\in\tilde{U}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})}\langle\mathbf{P},\mathbf{M}\rangle+\frac{1}{\lambda}\cdot D_{t}(\tilde{\mathbf{P}}\|\tilde{ \mathbf{r}}\tilde{\mathbf{c}}^{\top})\,, \tag{8}\] for \(\lambda>0\) and the _regularized measured cost_ \[\tilde{d}_{\mathbf{M}}^{t,\lambda}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\doteq\min_{ \tilde{\mathbf{P}}\in\tilde{U}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})}\langle \tilde{\mathbf{P}},\mathbf{M}\rangle+\frac{1}{\lambda}\cdot D_{t}(\tilde{ \mathbf{P}}\|\tilde{\mathbf{r}}\tilde{\mathbf{c}}^{\top})\,. \tag{9}\] The _raison d'etre_ of entropic regularization is the algorithmic efficiency of its approximation. As we shall see, this stands for TEMs as well. In the case of TEMs, the question remains: what is the structure of the regularized problem? In the case of EOT (\(t=1\)), the answer is simple, as the OT costs in a ball bring the metric foundation of regularized OT, since the regularized cost is just the Lagrangian of the OT in a ball problem. Of course, the downside is that parameter \(\lambda\) in the regularized cost comes from a Lagrange multiplier, which is unknown in general, but at least a connection does exist with the metric structure of the unregularized OT problem - assuming, again, that \(\mathbf{M}\) is a metric matrix. As we highlight in Proposition 1, the measured cost only meets an approximate version of the triangle inequality, so a metric connection holds only in a weaker sense for \(t\neq 1\). However, as we now show, when \(t\neq 1\), there happens to be a _direct_ connection with the unregularized OT costs themselves (expected and measured), the connection to which is blurred when \(t=1\) and sheds light on the algorithms we use. **Proposition 3**.: _For any TEM \(\tilde{\mathbf{P}}\in\tilde{\Delta}_{n\times n}\), any \(t\in[0,1)\) and \(0\leqslant\varepsilon\leqslant 1/(1-t)\), letting \(\mathbf{M}_{t}\doteq(\tilde{\mathbf{r}}\tilde{\mathbf{c}}^{\top})^{1-t}\), we have_ \[D_{t}\big{(}\tilde{\mathbf{P}}\|\tilde{\mathbf{r}}\tilde{\mathbf{c}}^{\top}\big{)} \leqslant\varepsilon\ \ \Leftrightarrow\ \ \langle\tilde{\mathbf{P}},\mathbf{M}_{t}\rangle\geqslant\exp_{t}^{1-t}( \varepsilon). \tag{10}\] The proof is immediate once we remark that on the co-simplex, the generalized tempered relative entropy simplifies for \(t\neq 1\) as: \[D_{t}\big{(}\tilde{\mathbf{P}}\|\tilde{\mathbf{r}}\tilde{\mathbf{c}}^{ \top}\big{)} = \frac{1}{1-t}\cdot\Big{(}1-\langle\tilde{\mathbf{P}},\mathbf{M}_{t }\rangle\Big{)}\,. \tag{11}\] Interestingly, this simplification does not happen for \(t=1\), a case for which we keep Shannon's entropy in the equation and thus get an expression not as "clean" as (11). 
Though \(\mathbf{M}_{t}\) does not define a metric, it is useful to think of (10) as giving an equivalence between being in the generalized tempered relative entropy ball around the independence plan (a fact relevant to information theory) and having a large OT cost with respect to a cost matrix defined from the independence plan (a fact relevant to OT). For \(t\neq 1\), the constrained OT problem becomes solving one OT problem subject to a constraint on another one.

**Regularized OT costs with TEMs imply sparsity.** The regularized problem becomes even "cleaner" for the regularized measured cost (9) as it becomes an unregularized measured cost2 (5) over a _fixed_ cost matrix.

Footnote 2: A similar discussion, albeit more involved, holds for the expected cost. We omit it due to the lack of space.

**Proposition 4**.: _For any \(t\in[0,1)\), the regularized measured cost (9) can be written as_ \[\lambda\cdot\tilde{d}_{\mathbf{M}}^{t,\lambda}(\tilde{\boldsymbol{r}},\tilde{\boldsymbol{c}}) = \frac{1}{1-t}+\min_{\tilde{\mathbf{P}}\in\tilde{U}_{n}(\tilde{\boldsymbol{r}},\tilde{\boldsymbol{c}})}\langle\tilde{\mathbf{P}},\mathbf{M}^{\prime}\rangle, \tag{12}\] \[\mathbf{M}^{\prime} \doteq \lambda\cdot\mathbf{M}-\frac{1}{1-t}\cdot\mathbf{M}_{t}, \tag{13}\] _with \(\mathbf{M}_{t}\) defined in Proposition 3._

(The proof is straightforward.) This formulation shows that regularized OT with TEMs for \(t\neq 1\) can achieve something that classical EOT (\(t=1\)) cannot: getting sparse OT plans. Indeed, as the next theorem shows, specific sparsity patterns happen in any configuration of two sources \(i\neq k\) and two destinations \(l\neq j\) containing two distinct paths of negative costs and at least one of positive cost.

**Theorem 1**.: _Let \(\tilde{\mathbf{P}}\) be the optimal solution to the regularized measured cost (9). Let \(\mathbf{S}\) be the support indicator matrix of \(\tilde{\mathbf{P}}\), defined by the general term \(S_{ij}=1\) if \(\tilde{P}_{ij}>0\) (and 0 otherwise). The following properties hold:_

_1. For any coordinates \((i,j)\), if \(M^{\prime}_{ij}<0\) then \(S_{ij}=1\);_

_2. For any coordinates \(i\neq k\) and \(j\neq l\), suppose we have the following configuration (Figure 2): \(M^{\prime}_{ij}>0,M^{\prime}_{il}<0,M^{\prime}_{kj}<0\). Then we have the following:_

* _if_ \(M^{\prime}_{kl}>0\)_, then_ \(S_{ij}=0\) _or (non exclusive)_ \(S_{kl}=0\)_;_
* _if_ \(M^{\prime}_{kl}<0\) _and_ \(S_{ij}=1\)_, then necessarily_ \[\tilde{P}^{1-t}_{kl}\leq\frac{|M^{\prime}_{kl}|}{|M^{\prime}_{ij}|+|M^{\prime}_{il}|+|M^{\prime}_{kj}|}\cdot\max\{\tilde{P}_{ij},\tilde{P}_{il},\tilde{P}_{kj}\}^{1-t}.\]

Figure 2: Illustration of Theorem 1. Negative costs of matrix \(\mathbf{M}^{\prime}\) in the regularized measured cost (13) are in blue, and those positive are in red. The sparsity results of the theorem are shown, where no arrow means no transport and a dashed arrow means a transport necessarily "small" (see text).

Theorem 1 is illustrated in Figure 2. Interpretations of the result in terms of transport follow: if we have a negative cost between \(i,l\) and \(j,k\), then "some" transport necessarily happens in both directions; furthermore, if the cost between \(i,j\) is positive, then sparsity patterns happen:

* if the other cost \(k,l\) is also positive, then we do not transport between \(i,j\) or (non exclusive) \(k,l\);
* if the other cost \(k,l\) is negative, then either we do not transport between \(i,j\), or the transport between \(k,l\) is "small".
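The sign structure driving Theorem 1 can be inspected directly (a small illustrative sketch, with variable names of our own choosing): building \(\mathbf{M}^{\prime}\) from (13) immediately shows how many coordinates are guaranteed non-zero transport by part 1 of the theorem as \(\lambda\) varies.

```python
import numpy as np

def M_prime(M, r_tilde, c_tilde, lam, t):
    """Effective cost of Eq. (13): lam*M - (r~ c~^T)^(1-t) / (1-t)."""
    Mt = np.outer(r_tilde, c_tilde) ** (1.0 - t)
    return lam * M - Mt / (1.0 - t)

rng = np.random.default_rng(2)
n, t = 5, 0.5
ts = 1.0 / (2.0 - t)
M = rng.uniform(size=(n, n))
r_tilde = rng.dirichlet(np.ones(n)) ** ts          # co-marginals in the co-simplex
c_tilde = rng.dirichlet(np.ones(n)) ** ts

for lam in (0.5, 5.0, 50.0):
    forced = np.sum(M_prime(M, r_tilde, c_tilde, lam, t) < 0)
    # Theorem 1, part 1: M'_ij < 0 forces S_ij = 1 (non-zero transport there)
    print(f"lam={lam:5.1f}: {forced} of {n * n} coordinates force transport")
```

As \(\lambda\) grows, fewer entries of \(\mathbf{M}^{\prime}\) stay negative, so fewer coordinates are forced into the support, consistent with the sparser plans reported in the experiments.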
What is most interesting is that negative coordinates in \(\mathbf{M}^{\prime}\) are under tight control by the user, and they enforce non-zero transport, so the flexibility of the design of \(\mathbf{M}^{\prime}\), via tuning the strength of the regularization \(\lambda\) _and_ the TEMs family via \(t\), allows transport patterns to be designed tightly.

**Interpretation of matrix \(\mathbf{M}^{\prime}\).** Coordinate \((i,j)\) is \(M^{\prime}_{ij}=\lambda M_{ij}-(\tilde{r}_{i}\tilde{c}_{j})^{1-t}/(1-t)\). Quite remarkably, the term \((\tilde{r}_{i}\tilde{c}_{j})^{1-t}/(1-t)\) happens to be equivalent, up to the exponent itself, to an interaction in the gravity model if distances are constant (Haynes and Fotheringham, 1984). If the original cost \(\mathbf{M}\) factors the distance, then we can get the full interaction term with the distance via its factorization. Hence, we can abstract any coordinate in \(\mathbf{M}^{\prime}\) as: \[M^{\prime}_{ij} \propto \text{original cost}(i,j)-\text{ interaction}(i,j). \tag{14}\] One would then expect that OT with TEMs reflects the mixture of both terms: having a large cost wrt interaction should not encourage transport, while having a large interaction wrt cost might just encourage transport. This is, in essence, the basis of Theorem 1.

## 5 Algorithms for Regularized Optimal Transport with TEMs

We show the analytic form of the solutions to (9) and (8) and explain how to solve the corresponding problems using an iterative procedure based on alternating Bregman projections. We then show the reduction of the iterative procedure to the standard Sinkhorn algorithm via a simple reparameterization.

**Regularized measured cost.** The following theorem characterizes the form of the solution, i.e., the transportation plan.

**Theorem 2**.: _A solution of (9) can be written in the form_ \[\tilde{P}_{ij}=\exp_{t}\left((\log_{t}(\tilde{r}_{i}\tilde{c}_{j})-\lambda\,M_{ij})\,\ominus_{t}\left(\nu_{i}+\gamma_{j}\right)\right)\,, \tag{15}\] _where \(a\,\ominus_{t}b\doteq(a-b)/(1+(1-t)b)\) and \(\nu_{i}\) and \(\gamma_{j}\) are chosen s.t. \(\tilde{\mathbf{P}}\in\tilde{U}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\)._

**Regularized expected cost.** The solution of the second form of regularized OT with TEMs is characterized as follows.

**Theorem 3**.: _A solution of (8) can be written in the form_ \[\tilde{P}_{ij}=\frac{\tilde{r}_{i}\tilde{c}_{j}}{\exp_{t}\left(\nu_{i}+\gamma_{j}+\lambda M_{ij}\right)}, \tag{16}\] _where \(\nu_{i}\) and \(\gamma_{j}\) are chosen s.t. \(\tilde{\mathbf{P}}\in\tilde{U}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\)._

### Approximation via Alternating Projections

The Lagrange multipliers \(\nu_{i}\) and \(\gamma_{j}\) in the solutions (15) and (16) no longer act as separate scaling factors for the rows and the columns because \(\exp_{t}(a+b)\neq\exp_{t}(a)\cdot\exp_{t}(b)\) (yet, an efficient approximation is possible, _cf._ below). Consequently, the solutions are not _diagonally equivalent_ to their corresponding seed matrices (20) and (21). However, keeping just one marginal constraint leads to a solution that bears the analytical shape of Sinkhorn balancing.
Letting \(\tilde{\mathbf{P}}_{\circ}\in\mathbb{R}_{+}^{n\times n}\), the _row projection_ and _column projection_ to \(\tilde{U}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\) correspond to \[\min_{\tilde{\mathbf{P}}:\,\mathbf{P}\mathbf{1}_{n}=\mathbf{r}}D_{t}(\tilde{\mathbf{P}}\|\tilde{\mathbf{P}}_{\circ})\,,\] (row projection) \[\min_{\tilde{\mathbf{P}}:\,\mathbf{P}^{\top}\mathbf{1}_{n}=\mathbf{c}}D_{t}(\tilde{\mathbf{P}}\|\tilde{\mathbf{P}}_{\circ})\,,\] (column projection) in which we use the shorthand notation \(\mathbf{P}=\tilde{\mathbf{P}}^{1/t^{*}}\) (similarly for \(\mathbf{r}\) and \(\mathbf{c}\)).

**Theorem 4**.: _Given \(\tilde{\mathbf{P}}_{\circ}\in\mathbb{R}_{+}^{n\times n}\), the row and column projections to \(\tilde{U}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\) can be performed via_ \[\tilde{\mathbf{P}}=\mathrm{diag}\big{(}\tilde{\mathbf{r}}/\tilde{\boldsymbol{\mu}}\big{)}\,\tilde{\mathbf{P}}_{\circ}\,,\;\;\text{where}\;\;\tilde{\boldsymbol{\mu}}=\big{(}\tilde{\mathbf{P}}_{\circ}^{1/t^{*}}\mathbf{1}_{n}\big{)}^{t^{*}}, \tag{17}\] \[\tilde{\mathbf{P}}=\tilde{\mathbf{P}}_{\circ}\,\mathrm{diag}\big{(}\tilde{\mathbf{c}}/\tilde{\boldsymbol{\xi}}\big{)}\,,\;\;\text{where}\;\;\tilde{\boldsymbol{\xi}}=\big{(}\tilde{\mathbf{P}}_{\circ}^{1/t^{*}\top}\mathbf{1}_{n}\big{)}^{t^{*}}. \tag{18}\]

It is imperative to understand that the solutions of the alternating Bregman projections are of the form \(\tilde{\mathbf{P}}=\mathrm{diag}(\boldsymbol{\nu})\tilde{\mathbf{P}}_{\circ}\,\mathrm{diag}(\boldsymbol{\gamma})\), whose analytical shape is different from that required by Theorems 2 and 3. The primary reason for the solution being an approximation is the fact that the solution set is, by definition, non-convex for \(t\neq 1\). We empirically evaluate the quality of this approximation in our experiments.

Figure 4: Number of iterations to converge for (a) measured and (b) expected cost OT for different values of \(\lambda\). Relative (to the OT solution) expected cost for the two cases, respectively, shown in (c) and (d).

Figure 3: The expected \(t\)-Sinkhorn distance relative to the Sinkhorn distance for different values of \(t\). As \(t\to 1\), the approximation error due to solving the unconstrained problem via alternating Bregman projections becomes smaller, and the expected \(t\)-Sinkhorn distance converges to the Sinkhorn distance when \(\lambda\to\infty\).

### Sinkhorn balancing for approximating (15) & (16)

**Regularized measured cost.** In general, we can simplify (15) by \[\tilde{P}_{ij}=\frac{\exp_{t}\big{(}\log_{t}(\tilde{r}_{i}\tilde{c}_{j})-\lambda M_{ij}\big{)}}{\exp_{t}(\nu_{i})\otimes_{t}\exp_{t}(\gamma_{j})} \tag{19}\] (see the appendix), where \(a\otimes_{t}b\doteq[a^{1-t}+b^{1-t}-1]^{\frac{1}{1-t}}_{+}\) (and \(\lim_{t\to 1}a\otimes_{t}b=a\cdot b\)). Conditions for these simplifications to be valid include \(t\) close enough to \(1\). The regularizer in (9) is the Bregman divergence to the independence table, rather than the tempered entropy function \(\varphi_{t}(\tilde{\mathbf{P}})\): the reason is that cross terms in the numerator of the solution (19) can no longer be combined with the denominator, primarily because \(\exp_{t}(\log_{t}(a)-b)\neq a\,\exp_{t}b\) for \(t\neq 1\).
Furthermore, due to the normalization in (19), the solution is not a diagonally equivalent scaling of the _measured seed matrix_ \[\tilde{\mathbf{K}}_{m}\doteq\exp_{t}\big{(}\log_{t}(\tilde{\mathbf{r}}\tilde{\mathbf{c}}^{\top})-\lambda\,\mathbf{M}\big{)}\,, \tag{20}\] but it can be approximated by a diagonally equivalent matrix of \(\tilde{\mathbf{K}}_{m}\) (see the preceding section with \(\tilde{\mathbf{P}}_{\circ}\doteq\tilde{\mathbf{K}}_{m}\)). In terms of sparsity patterns, the simplification (19) has the direct effect of constraining the transportation plan to the coordinates \(\log_{t}(\tilde{r}_{i}\tilde{c}_{j})-\lambda M_{ij}>-1/(1-t)\) (otherwise, \((\tilde{K}_{m})_{ij}=0\)). Remark the link with \(\mathbf{M}^{\prime}\) in (13): \(\tilde{\mathbf{K}}_{m}=[-\mathbf{M}^{\prime}]_{+}\,.\) So, the simplification (19) prevents coordinates with \(\log_{t}(\tilde{r}_{i}\tilde{c}_{j})-\lambda M_{ij}<-1/(1-t)\) from yielding non-zero transport, as Theorem 1 would otherwise authorize. This is not an issue because, as the theorem shows, only a subset of such coordinates would yield non-zero transport anyway, and the approximation brings the benefit of being in a position to design the desired sparsity patterns more tightly. One needs to make sure that the support of \(\tilde{\mathbf{K}}_{m}\) is big enough to allow for feasible solutions -- which is also not a real issue, granted that any optimal solution to the unregularized expected cost is so sparse that it has at most \(2n-1\) non-zero values (Peyre and Cuturi, 2019).

**Regularized expected cost.** Again, we have in general the possible simplification \[\tilde{P}_{ij}=\frac{\tilde{r}_{i}\tilde{c}_{j}}{\exp_{t}(\nu_{i})\otimes_{t}\exp_{t}\big{(}\tfrac{1}{t^{*}}\,\lambda M_{ij}\big{)}\otimes_{t}\exp_{t}(\gamma_{j})}\,.\] Pretty much like we did for the measured seed matrix, we can define an _expected seed matrix_ for the problem (8) as \[\tilde{\mathbf{K}}_{e}\doteq\frac{1}{\exp_{t}\big{(}\tfrac{1}{t^{*}}\,\lambda\,\mathbf{M}\big{)}}=\exp_{t}\big{(}\,\bigodot_{t}\tfrac{1}{t^{*}}\,\lambda\,\mathbf{M}\big{)}\,, \tag{21}\] where \(\bigodot_{t}a\doteq\frac{-a}{1+(1-t)a}\) (and \(\lim_{t\to 1}\bigodot_{t}a=-a\)), and the solution (16) can this time be approximated through a diagonally equivalent matrix of \(\tilde{\mathbf{K}}_{e}\) (see the preceding section with \(\tilde{\mathbf{P}}_{\circ}\doteq\tilde{\mathbf{K}}_{e}\)). Both cases (20) and (21) reduce to the entropic regularized seed \(\mathbf{K}=\exp(-\lambda\mathbf{M})\) (up to a diagonal scaling) when \(t\to 1\). We then get the general approach to approximating (15) and (16), which consists in a _reduction_ to Sinkhorn balancing with specific initialization matrices.

**Solution by reduction to Sinkhorn balancing.** It can be simply verified that the projection steps in (17) and (18) can be written in terms of the transport polytope of \(\mathbf{r}\) and \(\mathbf{c}\), when working directly with the transport plan \(\mathbf{P}=\tilde{\mathbf{P}}^{1/t^{*}}\in U_{n}(\mathbf{r},\mathbf{c})\). Notably, the steps are identical to the standard Sinkhorn iterations (i.e., scaling of the rows and columns), which can be computed efficiently via Algorithm 1. The main alterations to carry out the iterations are: 1) form the seed matrix \(\tilde{\mathbf{K}}\) via (20) or (21), 2) apply Sinkhorn's iterations to \(\tilde{\mathbf{K}}^{1/t^{*}}\), 3) map the solution back to the co-polyhedral set by computing its \(t^{*}\)-th power.
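Putting the three steps just listed into code (a sketch of the reduction under our own naming, reusing a generic Sinkhorn routine; it assumes the seed is balanceable, per the feasibility caveats below):

```python
import numpy as np

def exp_t(z, t):
    """Tempered exponential [1 + (1-t) z]_+^{1/(1-t)} (t < 1 here)."""
    return np.maximum(1.0 + (1.0 - t) * z, 0.0) ** (1.0 / (1.0 - t))

def log_t(z, t):
    """Tempered logarithm (z^{1-t} - 1)/(1-t), inverse of exp_t."""
    return (z ** (1.0 - t) - 1.0) / (1.0 - t)

def sinkhorn(K, r, c, n_iter=5000):
    """Plain Sinkhorn balancing of a seed K to marginals (r, c)."""
    u = np.ones_like(r)
    for _ in range(n_iter):
        v = c / (K.T @ u)
        u = r / (K @ v)
    return u[:, None] * K * v[None, :]

def t_sinkhorn_measured(M, r_tilde, c_tilde, lam, t):
    """Approximate the regularized measured cost plan (9) via seed (20)."""
    t_star = 1.0 / (2.0 - t)
    K_seed = exp_t(log_t(np.outer(r_tilde, c_tilde), t) - lam * M, t)  # Eq. (20)
    # Caution: for large lam the seed may be too sparse to be balanceable.
    r, c = r_tilde ** (1.0 / t_star), c_tilde ** (1.0 / t_star)
    P = sinkhorn(K_seed ** (1.0 / t_star), r, c)   # step 2, on U_n(r, c)
    return P ** t_star                             # step 3, back to the co-polyhedral set
```

Using the expected seed (21) instead only changes the construction of `K_seed`; the balancing and the final \(t^{*}\)-th power are identical.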
The steps are summarized in Algorithm 2.

### Sparsity of approximate solutions

Although the sparsity result of Theorem 1 is for the closed-form solution of the regularized OT plan, the approximate solutions via Sinkhorn may result in a sparse solution for an appropriate choice of \(t\) and for sufficiently large \(\lambda\).

**Proposition 5**.: _For \(\mathbf{M}\in\mathbb{R}_{+}^{n\times n}\), the measured cost seed matrix (20) includes zero elements for \(t<1\) when \(\lambda\) is large enough. Similarly, the expected cost seed matrix (21) contains zero elements for \(1<t<2\) and sufficiently large \(\lambda\). Both matrices are positive otherwise for any \(\lambda>0\). Additionally, in both cases, for \(\lambda_{1}<\lambda_{2}\), the zero elements of the seed matrix induced by \(\lambda_{1}\) are a subset of the zero elements induced by \(\lambda_{2}\)._

Remark that the level of sparsity of the solution monotonically increases with \(\lambda\). Nonetheless, care must be taken when the sparsity level is too high (_e.g._, for \(\lambda\to\infty\), \(\tilde{\mathbf{K}}\to\mathbf{0}_{n\times n}\)), as the resulting seed matrix may no longer induce a feasible solution that is diagonally equivalent to a transport plan \(\tilde{\mathbf{P}}\in\tilde{U}_{n}(\tilde{\mathbf{r}},\tilde{\mathbf{c}})\), as stated next.

### Convergence and Remarks on Feasibility

Franklin and Lorenz (1989) show the linear convergence of both scaling factors \(\boldsymbol{\mu}\) and \(\boldsymbol{\xi}\) of Sinkhorn's algorithm for _positive matrices_. Specifically, the convergence rate is proportional to the square of the _contraction coefficient_ \(\kappa(\mathbf{K})\doteq\tanh(\delta(\mathbf{K})/4)\), where \(\delta(\mathbf{K})\doteq\log\max_{i,j,k,\ell}\frac{K_{i\ell}K_{jk}}{K_{j\ell}K_{ik}}\) is called the _projective diameter_ of the linear map \(\mathbf{K}\). We summarize the convergence results of Algorithm 2 for positive \(\tilde{\mathbf{K}}\) in the following.

**Remark 3**.: _When the seed matrix \(\tilde{\mathbf{K}}\) in Algorithm 2 is positive, the linear convergence is then an immediate consequence of the convergence of Sinkhorn's iteration. The range of \(t\) for which \(\tilde{\mathbf{K}}\) is a positive matrix is characterized by Proposition 5, and the convergence rate is thus proportional to \(\kappa(\tilde{\mathbf{K}}^{1/t^{*}})^{2}\). Note that for \(t=1\), \(\kappa(\operatorname{diag}(\boldsymbol{r})\exp(-\lambda\,\mathbf{M})\operatorname{diag}(\boldsymbol{c}))\!=\!\kappa(\exp(-\lambda\,\mathbf{M}))\) and both seeds (20) and (21) recover the convergence rate of the EOT._

Figure 5: Transport plans induced by OT and the expected cost formulation for different values of \(t\). The non-zero values are marked by a square. The EOT (\(t=1\)) induces a fully-dense plan. The sparsity of the solution increases by increasing \(1<t<2\).

Although the convergence of Algorithm 2 is guaranteed for positive \(\tilde{\mathbf{K}}\), we still need to specify when a solution exists for non-negative \(\tilde{\mathbf{K}}\). Further discussion on the feasibility of a solution is deferred to the appendix. Nonetheless, _if a solution exists_, we have the following result in terms of the seed and transport plan.
**Remark 4**.: _The non-negative matrix \(\tilde{\mathbf{K}}\) is diagonally equivalent to \(\tilde{\mathbf{P}}\in\tilde{U}_{n}(\tilde{\boldsymbol{r}},\tilde{\boldsymbol{c}})\) if and only if \(\tilde{\mathbf{K}}^{1/t^{*}}\) with \(t^{*}>0\) is diagonally equivalent to a matrix \(\mathbf{P}\in U_{n}(\boldsymbol{r},\boldsymbol{c})\)._

## 6 Experiments

We provide experimental evidence to validate the results in the paper. For each case, we sample the entries of \(\mathbf{M}\) uniformly in \([0,1]\) and also sample \(\boldsymbol{r}\) and \(\boldsymbol{c}\) randomly. Due to limited space, we defer some of the results to the appendix.

### \(t\)-Sinkhorn Distances

We plot the relative cost of the tempered entropic regularized OT to the value of the unregularized (measured or expected) cost. For the experiment, we set \(n=64\) and average over 20 trials. Figure 3 shows the relative expected cost for different values of \(t\) and \(\lambda\). The relative cost decreases with larger \(\lambda\), and the asymptotic value is closer to zero when \(t\) is closer to one, which is the case for the EOT. Further results are shown in the appendix.

### Convergence of \(t\)-OT

We measure the number of steps to converge for the tempered entropic regularized OT problem using Sinkhorn's iterations for different values of \(t\) and \(\lambda\). We stop Sinkhorn's iterations when the maximum absolute change in each coordinate of \(\boldsymbol{\xi}\) is less than 1e\(-\)10. For the experiment, we set \(n=64\) and average over 100 trials. Figure 4 shows the number of iterations to converge along with the relative _expected_ cost to the solution of the unregularized OT problem. The number of iterations to converge follows a similar pattern to the contraction ratios of the seed matrices, shown in Figure 7 in the appendix, while the relative expected cost is inversely proportional to the number of iterations. This result highlights the trade-off between the convergence and the (expected) transport cost.

### Sparse Solutions

We analyze the sparsity of the solution of the (unregularized) OT problem as well as the solutions of the regularized expected cost problem (8) for \(t\in[1,2)\). Note that \(t=1\) is equal to the EOT problem (1). We set \(n=32\) and, for each case, set \(\lambda=6.0/t^{*}\) to offset the scaling factor in (21). In Figure 5 we show the non-zero values of the transport plans (more precisely, values larger than 1e\(-\)25). OT induces a sparse solution with \(2n-1=63\) non-zero components. On the other hand, the EOT (\(t=1\)) solution is fully-dense, with 1024 non-zero components. The sparsity increases with increasing \(t\in[1,2)\). In this case, the transport plan with \(t=1.9\) has only 83 non-zero values. More results for the regularized measured cost problem are given in the appendix.

## 7 Conclusion

In this paper, we investigated a regularized version of the optimal transport problem. The regularizations are tempered Bregman divergences induced by the negative Tsallis entropy. We studied how the regularization affects the sparsity pattern of the solution and adapted Sinkhorn balancing to quickly find the regularized solution. However, in Machine Learning we mainly care about generalization. A key question is whether the work of Helmbold and Warmuth (2009) can be extended, where the optimal transport value function acts as a loss and the relative entropy as an inertia term for developing online algorithms for learning permutations with optimal regret bounds3.
The open question is how the alternate tempered regularization affects the generalization bounds of such algorithms.

Footnote 3: Also, very general recent work on Sinkhorn balancing (for the case of \(t=1\)), including regret bounds, appeared in Ballu and Berthet (2023).

## 8 Acknowledgments

The authors warmly thank Mathieu Blondel for remarks and discussions around the material presented.
2302.00101
AstroPix: CMOS pixels in space
Space-based gamma-ray telescopes such as the Fermi Large Area Telescope have used single sided silicon strip detectors to measure the position of charged particles produced by incident gamma rays with high resolution. At energies in the Compton regime and below, two dimensional position information within a single detector is required. Double sided silicon strip detectors are one option; however, this technology is difficult to fabricate and large arrays are susceptible to noise. This work outlines the development and implementation of monolithic CMOS active pixel silicon sensors, AstroPix, for use in future gamma-ray telescopes. Based upon detectors designed using the HVCMOS process at the Karlsruhe Institute of Technology, AstroPix has the potential to maintain the high energy and angular resolution required of a medium-energy gamma-ray telescope while reducing noise with the dual detection-and-readout capabilities of a CMOS chip. The status of AstroPix development and testing as well as outlook for application in future telescopes is presented.
Amanda L. Steinhebel, Regina Caputo, Henrike Fleischhack, Nicolas Striebig, Manoj Jadhav, Yusuke Suda, Ricardo Luz, Daniel Violette, Carolyn Kierans, Hiroyasu Tajima, Yasushi Fukazawa, Richard Leys, Ivan Peric, Jessica Metcalfe, Michela Negro, Jeremy S. Perkins
2023-01-31T21:07:44Z
http://arxiv.org/abs/2302.00101v1
# AstroPix: CMOS pixels in space

###### Abstract:

Space-based gamma-ray telescopes such as the Fermi Large Area Telescope have used single sided silicon strip detectors to measure the position of charged particles produced by incident gamma rays with high resolution. At energies in the Compton regime and below, two dimensional position information within a single detector is required. Double sided silicon strip detectors are one option; however, this technology is difficult to fabricate and large arrays are susceptible to noise. This work outlines the development and implementation of monolithic CMOS active pixel silicon sensors, AstroPix, for use in future gamma-ray telescopes. Based upon detectors designed using the HVCMOS process at the Karlsruhe Institute of Technology, AstroPix has the potential to maintain the high energy and angular resolution required of a medium-energy gamma-ray telescope while reducing noise with the dual detection-and-readout capabilities of a CMOS chip. The status of AstroPix development and testing as well as outlook for application in future telescopes is presented.

## 1 Motivation and Previous Work

The AstroPix project aims to develop and test pixelated silicon sensors for use in space-based gamma-ray instruments. This novel space-based technology is based upon work done with similar detectors at the Large Hadron Collider [8], and could contribute to a host of future instruments such as a next-generation wide-field gamma-ray explorer whose time domain capabilities are prioritized in the Astro2020 Decadal Survey [6]. This work will overview the design, testing, and development of AstroPix version 2. This version is a step toward a flight prototype which will be realized with version 3 (Section 3). The long-term goal is to continue this development and testing in order to determine AstroPix performance and its suitability for use in space (Fig. 1). Heritage technology used for tracking instruments on previous gamma-ray, hard X-ray, and cosmic ray instruments includes single-sided silicon strip detectors, double-sided silicon strip detectors (Fig. 2), and other pixelated detectors. Each design carries unique strengths and weaknesses regarding event timing, position resolution, readout efficiency, noise, energy resolution, and power consumption. The evolution of silicon tracker technology used in the study of astrophysics in space has been made possible in part through its long history of implementing technologies developed by ground-based particle physics experiments. The next step in the development of a silicon detector for space is AstroPix, a monolithic high voltage complementary metal-oxide semiconductor (HVCMOS) sensor. CMOS pixels perform charge collection, signal amplification, and readout with electronics all co-integrated into the pixel matrix (Fig. 3). The addition of a high voltage bias to every pixel enhances the charge collection efficiency over previous diffusion-based methods (Fig. 3a). At an individual pixel level, once the charge is collected, it is converted to a voltage signal by a charge sensitive amplifier, which goes into a comparator to generate a trigger above the threshold level. This signal is routed to the digital periphery of the chip (Fig. 3b) where the outputs from all pixels are digitized and read out [7].
Figure 1: AstroPix project overview - collider-based particle physics silicon detectors have been redesigned for use in space and ongoing lab development and optimization will ready AstroPix for future space-based applications in gamma-ray tracker subsystems.

In this way, two signals can be tested - the analog signal from individual pixels being the output of the charge-sensitive amplifier, and the fully digitized full-chip digital signal as readout from the digital periphery. This analog data is used in testing and characterization, but final AstroPix designs will exclusively utilize digital data readout. An HVCMOS design such as AstroPix carries huge potential benefits over legacy technology in currently flying instruments. The CMOS fabrication process is common in commercial industry and well understood, making chip production affordable. Silicon is abundant, affordable, and operates at room temperature, which further drives down costs. The CMOS design requires no readout ASIC board since the readout is done on-chip, easing integration and minimizing noise, especially when compared to arrays of silicon strip detectors. The on-pixel circuitry is also customizable and low-power, creating sensors with less power per channel count relative to other pixelated or strip sensors.

Figure 2: The varied use of heritage silicon detector technology in space and balloon astrophysical instruments, modified from [3].

Figure 3: HVCMOS design, where circuitry in each pixel allows for charge collection, amplification, and readout to the digital periphery of the chip [7, 11].

An HVCMOS chip designed for use at the Large Hadron Collider's ATLAS detector was first tested as a proof-of-concept study for AstroPix. This chip, called ATLASPix, utilized \(150\times 50\)\(\mu\)m\({}^{2}\) pixels in four arrays of \(25\times 100\) pixels. Local testing detailed in [2] showed that ATLASPix was a feasible starting point for AstroPix development. The space environment and type of incident particle that AstroPix is intended for differ greatly from those in the hadron collider that ATLASPix was designed for, so basic changes to the chip design had to be made. In stepping from ATLASPix to the first _bona fide_ AstroPix chip, AstroPix_v1 or 'version 1', the digital bit allocation for the time over threshold measurement (Section 2) was modified so that the precise nanosecond timing resolution was relaxed in favor of energy resolution. The pixels also increased in size. AstroPix_v1 was fabricated on 500 \(\mu\)m thick silicon wafers with \(175\times 175\)\(\mu\)m\({}^{2}\) pixels in an \(18\times 18\) array. Insufficient pixel shielding caused oscillations in the digital readout, so only analog data could be read out by AstroPix_v1. Studies from [10] detail charge injection studies, threshold studies, and energy calibration performed with AstroPix_v1. From one probed pixel, energy calibration from analog data was found to match known X-ray and gamma-ray sources within 6%. A maximum energy resolution (at FWHM, where \(E=(2.355\cdot\sigma)/\mu\cdot 100\%\) and \(\mu\) and \(\sigma\) are Gaussian fit parameters from a fit to the photopeak) of 25% (at 14 keV) was measured (Fig. 3(a)).

## 2 AstroPix_v2 Testing

AstroPix_v2 aimed to incrementally move toward a flight prototype by fixing the shielding flaw and redesigning on-pixel circuitry to reduce power consumption. Pixels measure \(250\times 250\)\(\mu\)m\({}^{2}\) in a \(35\times 35\) array covering an area of \(1\times 1\) cm\({}^{2}\) (Fig. 4(a)).
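As a side note before the v2 performance numbers: the resolution figure of merit defined above, \(E=2.355\,\sigma/\mu\times 100\%\), is the FWHM of a Gaussian photopeak fit relative to its mean. A minimal sketch of how such a number is extracted (an illustration with synthetic data and our own naming, not the collaboration's analysis code):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Synthetic 14 keV photopeak with ~25% FWHM resolution (AstroPix_v1-like)
rng = np.random.default_rng(0)
hits = rng.normal(loc=14.0, scale=14.0 * 0.25 / 2.355, size=20000)
counts, edges = np.histogram(hits, bins=120)
centers = 0.5 * (edges[:-1] + edges[1:])

(a, mu, sigma), _ = curve_fit(gaussian, centers, counts,
                              p0=[counts.max(), 14.0, 1.0])
resolution = 2.355 * abs(sigma) / mu * 100.0   # FWHM / mean, in percent
print(f"fitted photopeak at {mu:.1f} keV, resolution {resolution:.1f}% FWHM")
```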
## 2 AstroPix_v2 Testing

AstroPix_v2 aimed to incrementally move toward a flight prototype by fixing the shielding flaw and redesigning on-pixel circuitry to reduce power consumption. Pixels measure \(250\times 250\) \(\mu\)m\({}^{2}\) in a \(35\times 35\) array covering an area of \(1\times 1\) cm\({}^{2}\) (Fig. 5). The guard ring design around each pixel was updated, allowing for higher bias voltages and deeper depletion of the 500 \(\mu\)m wafer [10].

Figure 5: AstroPix_v2 on the laboratory bench at NASA Goddard Space Flight Center.

The performance of AstroPix_v2 was studied analogously to AstroPix_v1: charge injection studies, threshold studies, and energy calibration were measured from the analog outputs of individual pixels (Fig. 5). Energy calibration for AstroPix_v2 (Fig. 4b) is more precise than that of AstroPix_v1, with calibrated photopeak values matching the expected values within 3%. AstroPix_v2 also measures better energy resolution for each calibrated point, with a maximum energy resolution at 14 keV of 16% (25%) for AstroPix_v2 (AstroPix_v1).

In order to test radiation hardness and performance in a relevant environment, AstroPix_v2 saw four test-beam campaigns - two at the Fermilab Test Beam Facility [1] with a 120 GeV proton beam and two at the Berkeley 88-Inch Cyclotron [5] with a cocktail of ions up to a linear energy transfer value of 65 MeV cm\({}^{2}\)/mg. Measurements made during both campaigns confirmed that no catastrophic latchup occurred during running - a failure state in which an incident particle triggers a parasitic thyristor, resulting in a short circuit, which can cause runaway current draw and the subsequent destruction of the device. Closer analysis is still underway from the radiation testing with ion beams to determine whether AstroPix_v2 experienced single event functional interrupt events, where single bits would have been flipped or corrupted by an ion interaction [10]. The testing in this extreme flux environment also resulted in improved data collection software. Further details of the design changes from AstroPix_v1 to AstroPix_v2, analog performance and tests, and radiation testing can be found in [10].

A major correction from AstroPix_v1 to AstroPix_v2 was the ability to access digital data. While analog data can only be collected from a handful of select pixels, digital data is available from the whole array and is digitized on-chip for readout. In order to save on power and bandwidth, AstroPix_v2 reads out only row and column information rather than individual pixels, where individual pixel outputs are OR'ed together. In this way, only two channels (row and column) are sent from the array to the digital periphery. The digitized data is returned as an encoded bit stream containing information regarding the time of each hit, whether it is a row or column hit, the location of the row or column, and a Time over Threshold (ToT) measurement. Rather than associating the height of a voltage pulse with the deposited charge, as is done with analog analysis, the digitization requires the input of some voltage threshold and correlates the deposited charge to the amount of time that the resulting voltage pulse was over the threshold. A larger ToT is associated with a larger charge deposit. In this way, the activation of one pixel should return two digitized hits - one for the measurement of charge in the row, and a separate hit for the column. If corresponding to the same event, these hits must also match in time and measured ToT.
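To make the ToT digitization concrete, here is a minimal sketch (our own toy model, not the on-chip comparator logic): a shaped voltage pulse whose amplitude scales with the deposited charge is compared against a fixed threshold, and the time spent above it is the ToT. The pulse shape and all numbers are illustrative assumptions.

```python
import numpy as np

def time_over_threshold(t, v, threshold):
    """Return the total time the sampled pulse v(t) spends above threshold."""
    dt = t[1] - t[0]                 # uniform sampling assumed
    return np.count_nonzero(v > threshold) * dt

# Toy CR-RC-like pulse peaking at t = 5 us; amplitude scales with charge.
t = np.linspace(0.0, 50.0, 5001)     # microseconds
for charge in (1.0, 2.0, 4.0):       # arbitrary units
    v = charge * (t / 5.0) * np.exp(1.0 - t / 5.0)
    print(f"charge {charge:.0f} -> ToT {time_over_threshold(t, v, 0.5):.1f} us")
# Larger deposits stay above threshold longer, so ToT grows with charge.
```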
This level of correlation was tested with an injected charge administered individually to a sample of pixels around the full array (Fig. 6). Each pixel was probed 140 times with an injected charge and the fraction of events where two paired row and column hits were recorded is plotted. 99% of probed pixels read out data where more than 80% of injections contain matching row and column hits, showing a very high correlation between row and column hits as expected. There is also no noted trend around the array of non-correlating pixels, indicating no large issues with chip fabrication or bonding to its custom printed circuit board. Pixels with poor coincidence may have large rates of noise, confusing the coincidence matching or overwhelming the trigger. Pixels such as these will be masked for data collection campaigns. Future iterations of AstroPix (Section 3) will record every pixel individually without OR'ing rows and columns, thus simplifying this problem and eliminating the need for postprocessing coincidence matching.

Figure 6: Most of the AstroPix_v2 array measures both row and column digital hits (as expected) when a signal is injected into the pixel.

One way to verify the digital circuitry and configuration settings is to measure the analog signal at the same time. The analog signal is read after passing through a lowpass filter while the full digital signal is passed through a highpass filter, but an incident particle should generate both an analog and a digital signal with nearly identical ToT measurements (or a ToT-proxy measurement as calculated from the shape of the analog voltage pulse). The corner pixel, row 0 column 0, alone was activated on the array and exposed to a 0.01 \(\mu\)Ci Barium-133 source for 30 minutes. Post-processing software was designed to consider both the output analog and digital data sets, match the data in time in order to identify events that triggered both analog and digital readout, and finally to plot this coincident data. Though more analog hits were recorded than digital hits (with a data collection rate of 0.217 Hz compared to 0.162 Hz), 90.6% of digital data had a corresponding analog hit within a timing window of 0.07 s. The digital ToT measurements of these hits match very closely the corresponding analog ToT-proxy measurements (Fig. 7), indicating that chip configuration settings are properly optimized. The timing window of 0.07 s was derived as an optimal value from the data and reflects the large latency associated with the analog data collection method - it is not indicative of the inherent timing resolution of the chip.

Figure 7: Measurements of Barium-133 with a single AstroPix_v2 pixel confirm that analog and digital data measure identical ToT values.
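A stripped-down version of this post-processing coincidence step might look as follows (our own sketch; the real pipeline and its data format are more involved, and the ToT window below is a hypothetical parameter). Hits that agree in time and ToT within configurable windows are paired:

```python
def match_hits(row_hits, col_hits, t_window, tot_window):
    """Pair row/column hits whose timestamps and ToT values agree.

    Each hit is a (timestamp, tot) tuple; returns a list of matched pairs.
    """
    pairs, used = [], set()
    for rt, rtot in row_hits:
        for j, (ct, ctot) in enumerate(col_hits):
            if j in used:
                continue
            if abs(rt - ct) <= t_window and abs(rtot - ctot) <= tot_window:
                pairs.append(((rt, rtot), (ct, ctot)))
                used.add(j)
                break
    return pairs

rows = [(0.10, 12.0), (0.55, 7.0)]
cols = [(0.11, 12.5), (1.90, 3.0)]
print(match_hits(rows, cols, t_window=0.07, tot_window=1.0))
# -> only the first row/column pair matches within both windows
```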
This digital output was also used to make a first measurement of depletion depth with AstroPix_v2. The AstroPix design utilizes a thick 500 \(\mu\)m wafer and an HV bias voltage in order to facilitate a large dynamic range of 25-700 keV. In order to achieve this full depletion, high resistivity wafers of 5 k\(\Omega\)-cm will be utilized, but the chips currently under test utilize 300 \(\Omega\)-cm silicon, so a smaller depletion depth and therefore dynamic range is expected. In order to measure the depletion depth of this lower resistivity array, individual pixels were probed with an Americium-241 source (photopeak at 59.5 keV) and -160 V bias voltage. The source is assumed to be point-like, and it is assumed that there is no absorption. A depletion depth \(d\) is calculated from the detection rate, \[r_{d}=Ap\omega\left(1-e^{-\rho_{N}\,\sigma d}\right)\,\] where \(A\) is the nuclear decay rate (1 MBq), \(p\) is the emission probability of the 59.5 keV line, \(\rho_{N}\) is the number density of silicon, and \(\sigma\) is the photoelectric cross section of 59.5 keV photons in silicon. The geometric factor \(\omega\) relates to the pixel size and source distance from the array. The detection rate \(r_{d}\) is found by integrating the measured spectrum of the Americium source. These direct measurements from every individual pixel of this lower resistivity array show that the depletion achieved with a -160 V bias voltage is, on average, 119 \(\mu\)m with a 9% variation at 1\(\sigma\) (Fig. 8). The -160 V bias value was chosen due to the event rate maximizing and saturating at this bias. Final AstroPix designs will fully deplete 500 \(\mu\)m in high-resistivity wafers, and this measured level of depletion on low-resistivity chips offers a promising start.

Figure 8: Histogram of depletion depth as measured individually in each AstroPix_v2 pixel. The dotted orange band is a Gaussian fit with the indicated parameters, and the black bar indicates the expected error on the mean value from the depletion depth model assuming an n-substrate sensor (\(\mu\)=electron mobility) with resistivity between 200-400 \(\Omega\)-cm.

This value, and the depletion curve over a range of bias values, follows the shape of the model of a p-substrate sensor where \[d=\sqrt{2\epsilon\mu\rho(V_{bias}+V_{i})}\,\] where \(\epsilon\) is the permittivity of silicon (\(1.04\times 10^{-12}\) F/cm), \(\mu\) is the hole mobility (500 cm\({}^{2}\)/V/s), \(\rho\) is the sensor resistivity of 300 \(\Omega\)-cm, and \(V_{i}\) is a built-in potential of 0.6 V. At the time of writing, a systematic offset is found between the model prediction and the data such that the data agrees more closely with the model of an n-substrate sensor where \(\mu\) is the electron mobility of 1500 cm\({}^{2}\)/V/s. Work is underway to further understand this effect.
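The two relations above lend themselves to a quick numerical cross-check. The sketch below is our own (it uses only the constants quoted in the text) and evaluates the depletion model at -160 V for the 300 Ω-cm wafers with both mobility choices, reproducing the sense of the systematic offset discussed above:

```python
import numpy as np

EPS_SI = 1.04e-12     # permittivity of silicon [F/cm]
V_BUILTIN = 0.6       # built-in potential V_i [V]

def depletion_depth(v_bias, resistivity, mobility):
    """d = sqrt(2 * eps * mu * rho * (V_bias + V_i)), returned in cm."""
    return np.sqrt(2.0 * EPS_SI * mobility * resistivity * (v_bias + V_BUILTIN))

def depth_from_rate(rate, activity, emission_prob, geom, rho_n_sigma):
    """Invert r_d = A * p * omega * (1 - exp(-rho_N * sigma * d)) for d.

    Not exercised here; it simply mirrors the rate formula in the text."""
    return -np.log(1.0 - rate / (activity * emission_prob * geom)) / rho_n_sigma

for mobility, label in ((500.0, "p-substrate (hole mobility)"),
                        (1500.0, "n-substrate (electron mobility)")):
    d = depletion_depth(v_bias=160.0, resistivity=300.0, mobility=mobility)
    print(f"{label}: {d * 1e4:.0f} um")
# -> roughly 71 um vs 123 um; the measured ~119 um lies much closer to the
#    electron-mobility curve, which is the systematic offset noted above.
```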
## 3 Ongoing work and Next Steps

Testing of AstroPix_v2 is still underway, with a current emphasis on digital data read out from the full array. There is also active testing of AstroPix_v2 fabricated on silicon wafers with high resistivity. This higher resistivity of 5 k\(\Omega\cdot\)cm, as compared to the 300 \(\Omega\cdot\)cm wafers utilized for the data thus far presented, should allow for larger depletion depth and therefore enhanced charge collection and dynamic range. AstroPix_v2 will see another Fermilab Test Beam Facility campaign in early 2023 with emphasis on multiple-layer readout utilizing multiple AstroPix_v2 chips in the beamline and testing preliminary particle tracking software.

Concurrently, the AstroPix project continues to look ahead with the design of AstroPix_v3. This is a flight-prototype version with a \(35\times 35\) array of \(500\times 500\) \(\mu\)m\({}^{2}\) pixels with a \(300\times 300\) \(\mu\)m\({}^{2}\) active area. The continued increase in pixel size does not impact the energy resolution of future missions and allows AstroPix to consume less power and bandwidth with a smaller number of channels overall. However, the larger pixels create engineering challenges - for example, a large active area creates high capacitance levels which increase noise. The AstroPix_v3 strategy of reducing the active pixel area with respect to the pixel pitch avoids these complications. AstroPix_v3 will also be diced from the wafer as a \(4\times 4\) cm\({}^{2}\) quad chip, where four AstroPix_v3 arrays will be connected through common bus bars (Fig. 9a). AstroPix_v3 was delivered from the foundry in January 2023 and testing began shortly thereafter.

Figure 9: AstroPix_v3 is currently being fabricated and will be diced as a quad-chip. This version will be utilized in a sounding rocket payload, A-STEP.

The AstroPix_v3 quad chip will be used on the Astropix Sounding rocket Technology dEmonstration Payload (A-STEP), currently scheduled for launch in late 2024. The 2U payload will consist of three layers of AstroPix_v3 quad chips with a thin aluminum housing (Fig. 9b), along with supporting electronics. A sounding rocket flight of roughly 10 minutes will take A-STEP 500 km above ground and provide the opportunity to measure cosmic rays and gamma rays. The project's intention is to demonstrate functionality of the AstroPix sensors in a relevant space environment by reconstructing these charged particle tracks. The A-STEP project kicked off in October 2022, led through Goddard Space Flight Center with engineering support from Wallops Flight Facility and with coordination and planning from the Sounding Rocket Program Office.

AstroPix is a mission enabling technology for future large-scale gamma-ray instruments. It is implemented in the 2021 MIDEX concept AMEGO-X as the main detector of the tracking subsystem [4]. The AMEGO-X tracker design features four identical towers of 40 layers of AstroPix with 95 quad chips per layer (Fig. 10a). The use of AstroPix at this scale provides AMEGO-X a significant improvement of effective area at low energies (50-500 keV) over that with the use of double-sided silicon strip detectors (Fig. 10b).

Figure 10: Future implementation of AstroPix in the MIDEX-scale AMEGO-X tracker will increase effective area at low energies [4].

In order to build to full implementation in AMEGO-X, an AMEGO-X prototype is planned. This prototype is a combination of many efforts, including: AstroPix (as a novel tracker), a cesium-iodide calorimeter, the ComPair balloon instrument [9], and Compton event reconstruction improvements. The AMEGO-X prototype (Fig. 11), a next-generation ComPair instrument, will be one tower of the AMEGO-X instrument (though with fewer AstroPix tracker layers). The aim is to demonstrate operation of the full system in a relevant environment. These implementations will also utilize future versions of AstroPix. AstroPix_v4 is currently in development with planned upgrades to the ToT readout system, individual pixel readout (without row and column OR'ing), and threshold tuning.

## 4 Summary and Outlook

Future large-scale gamma-ray instruments, as prioritized by the Astro2020 Decadal Survey, will benefit from new technologies that allow for measurements with low noise and precise position and energy resolution. As an HVCMOS sensor, AstroPix serves as mission-enabling technology for these next-generation instruments. With development rooted in work done by the collider-based particle physics community, AstroPix has now realized two design iterations with a third recently delivered at the time of writing. The sensors are capable of analog and digital data readout and measure energy resolutions from analog data of 16% at 14 keV or better in the most recent design iteration, AstroPix_v2.
Digital readout is being tested with AstroPix_v2, where individual pixels are returning triggered data uniformly around the array as expected. The next design iteration, AstroPix_v3, will be diced as a quad chip sharing common readout of four individual arrays connected via a common bus bar, and will be utilized in future technology demonstrations including the sounding rocket payload A-STEP and the AMEGO-X prototype, a next-generation ComPair balloon instrument. The international AstroPix team spans five institutions across three countries, with more than half of the contributors being early career scientists or engineers (including students). The team looks forward to continued testing of AstroPix, future development, and further implementation in next-generation space-based instruments.
2309.15118
Classification of symmetry-enriched topological quantum spin liquids
We present a systematic framework to classify symmetry-enriched topological quantum spin liquids in two spatial dimensions. This framework can deal with all topological quantum spin liquids, which may be either Abelian or non-Abelian, chiral or non-chiral. It can systematically treat a general symmetry, which may include both lattice symmetry and internal symmetry, may contain anti-unitary symmetry, and may permute anyons. The framework applies to all types of lattices, and can systematically distinguish different lattice systems with the same symmetry group using their Lieb-Schultz-Mattis anomalies. We apply this framework to classify $U(1)_{2N}$ chiral states and non-Abelian Ising$^{(\nu)}$ states enriched by a $p6\times SO(3)$ or $p4\times SO(3)$ symmetry, and $\mathbb{Z}_N$ topological orders and $U(1)_{2N}\times U(1)_{-2N}$ topological orders enriched by a $p6m\times SO(3)\times\mathbb{Z}_2^T$, $p4m\times SO(3)\times\mathbb{Z}_2^T$, $p6m\times\mathbb{Z}_2^T$ or $p4m\times\mathbb{Z}_2^T$ symmetry, where $p6$, $p4$, $p6m$ and $p4m$ are lattice symmetries, while $SO(3)$ and $\mathbb{Z}_2^T$ are spin rotation and time reversal symmetries, respectively. In particular, we identify symmetry-enriched topological quantum spin liquids that are not easily captured by the usual parton-mean-field approach, including examples with the familiar $\mathbb{Z}_2$ topological order.
Weicheng Ye, Liujun Zou
2023-09-26T17:59:58Z
http://arxiv.org/abs/2309.15118v3
# Classification of symmetry-enriched topological quantum spin liquids

###### Abstract

We present a systematic framework to classify symmetry-enriched topological quantum spin liquids in two spatial dimensions. This framework can deal with all topological quantum spin liquids, which may be either Abelian or non-Abelian, chiral or non-chiral. It can systematically treat a general symmetry, which may include both lattice symmetry and internal symmetry, may contain anti-unitary symmetry, and may permute anyons. The framework applies to all types of lattices, and can systematically distinguish different lattice systems with the same symmetry group using their Lieb-Schultz-Mattis anomalies. We apply this framework to classify \(U(1)_{2N}\) chiral states and non-Abelian Ising\({}^{(\nu)}\) states enriched by a \(p6\times SO(3)\) or \(p4\times SO(3)\) symmetry, and \(\mathbb{Z}_{N}\) topological orders and \(U(1)_{2N}\times U(1)_{-2N}\) topological orders enriched by a \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p6m\times\mathbb{Z}_{2}^{T}\) or \(p4m\times\mathbb{Z}_{2}^{T}\) symmetry, where \(p6\), \(p4\), \(p6m\) and \(p4m\) are lattice symmetries, while \(SO(3)\) and \(\mathbb{Z}_{2}^{T}\) are spin rotation and time reversal symmetries, respectively. In particular, we identify symmetry-enriched topological quantum spin liquids that are not easily captured by the usual parton-mean-field approach, including examples with the familiar \(\mathbb{Z}_{2}\) topological order.

###### Contents

* I Introduction
* II Outline and summary
* III Universal characterization of symmetry-enriched topological quantum spin liquids
  * III.1 Topological order
  * III.2 Global symmetry
  * III.3 Anomalies of symmetry-enriched topological orders
  * III.4 Crystalline equivalence principle
* IV Symmetry properties and quantum anomalies of lattice systems
* V Framework of classification
* VI \(\mathrm{U}(1)_{2N}\) topological orders: Generalized Abelian chiral spin liquids
  * VI.1 Example: \(\mathbb{Z}_{2}\times SO(3)\)
    * 1. No anyon permutation
    * 2. \(C_{2}\) acts as charge conjugation
  * VI.2 \(p6\times SO(3)\)
  * VI.3 \(p4\times SO(3)\)
* VII Ising\({}^{(\nu)}\) topological orders: Kitaev's non-Abelian chiral spin liquids
* VIII \(\mathbb{Z}_{N}\) topological orders: Generalized toric codes
* IX \(\mathrm{U}(1)_{2N}\times\mathrm{U}(1)_{-2N}\) topological orders: Generalizations of the double-semion state
* X Discussion
* A Translation between the characterization of reflection symmetry and time-reversal symmetry
* B Wallpaper group symmetries: group structure and \(\mathbb{Z}_{2}\) cohomology
  * 1. \(p6\)
  * 2. \(p6m\)
  * 3. \(p4\)
  * 4. \(p4m\)
* C Details of realizations: Anyon permutation patterns and symmetry fractionalization classes
  * 1. \(\mathrm{U}(1)_{2N}\)
  * 2. Ising\({}^{(\nu)}\)
  * 3. \(\mathbb{Z}_{2}\) topological order
  * 4. \(\mathbb{Z}_{N}\) topological order (\(N\geqslant 3\))
  * 5. \(\mathrm{U}(1)_{2}\times\mathrm{U}(1)_{-2}\) (Double Semion)
  * 6. \(\mathrm{U}(1)_{4}\times\mathrm{U}(1)_{-4}\)
* D Anomaly indicators
* E Symmetry fractionalization classes of the "beyond-parton" \(\mathbb{Z}_{2}\) topological quantum spin liquids

## I Introduction

Topological quantum spin liquids, which are more formally referred to as bosonic topological orders, are exotic gapped quantum phases of matter with long-range entanglement beyond the phenomenon of spontaneous symmetry breaking.1 In two space dimensions, they can host anyons, i.e., point-like excitations that are neither bosons nor fermions [1]. Besides being fundamentally interesting, they are also potential platforms for fault-tolerant quantum computation [2; 3].

Footnote 1: All symmetries discussed in this paper are 0-form invertible symmetries, unless otherwise stated.

Roughly speaking, in the absence of symmetries, the universal properties of a topological quantum spin liquid are encoded in the properties of its anyons, such as their fusion rules and statistics. We refer to these properties as the topological properties of a topological quantum spin liquid. In the presence of symmetries, there can be interesting interplay between these topological properties and the symmetries. In particular, even with fixed topological properties, there can be sharply distinct topological quantum spin liquids that cannot evolve into each other without breaking the symmetries or encountering a quantum phase transition. These are known as different symmetry-enriched topological quantum spin liquids.

The goal of this paper is to classify symmetry-enriched topological quantum spin liquids in two space dimensions. That is, given the topological and symmetry properties, we would like to understand which symmetry-enriched topological quantum spin liquids are possible. This problem is first of fundamental conceptual interest, as understanding different types of quantum matter is one of the central goals of condensed matter physics. Moreover, although topological quantum spin liquids have been identified in certain lattice models and small-sized quantum simulators, their realization and detection in macroscopic quantum materials remain elusive [4]. The knowledge of which symmetry-enriched topological quantum spin liquids are possible is helpful for identifying their correct observable signatures, thus paving the way to the realization and detection of these interesting quantum phases in the future.

In the previous condensed matter literature, there are two widely used approaches to classify symmetry-enriched topological quantum spin liquids, and most of the other approaches can be viewed as variations or generalizations of them. The first approach is based on parton mean fields and projective symmetry groups, which starts by representing the microscopic degrees of freedom (such as spins) via certain fractional degrees of freedom, and then studies the possible projective representations of the symmetries carried by these fractional degrees of freedom [5]. The advantages of this approach include its simplicity and its intimate connection to the microscopic degrees of freedom of the system. It has been applied to classify various symmetry-enriched quantum spin liquids in many different lattice systems, and it treats internal symmetries and lattice symmetries on equal footing. However, this approach is not perfect. One of the main disadvantages of this approach is that it is not easily applicable to general topological quantum spin liquids.
For example, it is inconvenient to apply this approach to classify symmetry-enriched \(\mathbb{Z}_{N}\) topological orders with \(N>2\). This is because such topological orders are most naturally described by partons coupled to a dynamical \(\mathbb{Z}_{N}\) gauge field. When \(N>2\), the partons themselves are interacting, so a non-interacting parton mean field description is inadequate. Also, it is often challenging to use this approach to study topological quantum spin liquids where anyons are permuted in a complicated manner by symmetries. Moreover, even for quantum spin liquids that are often assumed to be captured by parton mean fields, some of their symmetry enrichment patterns may be beyond parton mean fields. Ref. [6] presented such a phenomenon for the gapless Dirac spin liquid, and in Sec. VIII we find that this phenomenon can occur even for the familiar \(\mathbb{Z}_{2}\) topological quantum spin liquid. Another disadvantage of this approach is that the projective representations carried by the fractional degrees of freedom may not be the same as the symmetry properties of the physical anyons, which sometimes require one more nontrivial step to obtain. For examples, see Refs. [7; 8].

The second approach is based on the category-theoretic description of topological quantum spin liquids, and studies how the category theory corresponding to a topological quantum spin liquid can be consistently extended to include a symmetry group [9; 10]. The advantage of this approach is that it is fully general and can be applied to situations where the parton-based approach is inconvenient, since it is believed that the topological properties of all topological quantum spin liquids can be described by category theory. Also, it yields the symmetry properties of the anyons directly, without relying on some artificial fractional degrees of freedom as in the first approach. Overall, this approach is very systematic and powerful in studying topological quantum spin liquids with only internal symmetries. However, this approach also has its disadvantages: Unlike the first approach, it has a relatively weaker connection with the microscopic properties of the physical system, and it is particularly inconvenient when applied to systems with lattice symmetries, which are often physically important. Specifically, within this approach all symmetry properties of the physical system are assumed to be captured by its symmetry group. However, such a description of the symmetries is inadequate for many purposes. For example, spin-1/2 systems defined on a triangular lattice, kagome lattice and honeycomb lattice can all have the same symmetry group, say, the \(SO(3)\) spin rotational symmetry and the \(p6m\) lattice symmetry. But they are physically distinct, and any symmetry-enriched topological quantum spin liquid that can emerge in one of these three systems cannot emerge in the other two [6; 11]. A physically relevant classification of symmetry-enriched topological quantum spin liquids should take into account this distinction, which reflects some microscopic symmetry properties beyond the symmetry group. In the above example, these properties can be viewed as the locations of the spin-1/2's in those three lattices.

In the present paper, we develop a framework to classify symmetry-enriched topological quantum spin liquids, which combines the advantages of the above two approaches while avoiding their disadvantages.
Specifically, we will use the language of category theory to directly describe the topological properties of a topological quantum spin liquid and the symmetry properties of the anyons, and we will also keep track of the robust microscopic symmetry-related information of the physical system. Roughly speaking, this information includes:

* The symmetry group. For example, the \(SO(3)\) spin rotational symmetry and \(p6m\) lattice symmetry.
* The nature of the microscopic degrees of freedom, or more precisely, how the microscopic degrees of freedom transform under the symmetry group. For example, whether the microscopic degrees of freedom are spin-1/2's or spin-1's.
* The locations of the microscopic degrees of freedom. For example, whether the microscopic degrees of freedom reside on the sites of a triangular, kagome or honeycomb lattice.

All three pieces of information above are taken into account in the aforementioned first approach, but the second approach usually only considers the first piece (see exceptions in Refs. [12; 13; 14; 15; 16; 17; 18] studying certain specific symmetry-enriched quantum spin liquids, but the methods therein have not been generalized to the generic case before). The reason why the other two pieces of information are physically relevant is that they are also robust properties of a system as long as the symmetries are preserved. In fact, if two systems have the same symmetry group but their microscopic degrees of freedom are of different nature or at different locations, they may not be able to evolve into each other without breaking the symmetry (even if going across quantum phase transitions is allowed) [11]. Moreover, just like the symmetry group, these two pieces of information are often relatively easy to determine experimentally, and they are the basic aspects of any theoretical model. We will make these concepts more precise in Sec. IV.

Ultimately, our framework takes as input 1) the topological properties of a topological quantum spin liquid and 2) a collection of the above symmetry-related properties of a microscopic system, and outputs which symmetry-enriched topological quantum spin liquids are possible. Each of these symmetry-enriched topological quantum spin liquids is characterized by the symmetry properties of its anyons. This framework relies on the state-of-the-art in the studies of the quantum anomalies of lattice systems and topological quantum spin liquids. In particular, in Ref. [6] we and our coauthors obtained the precise characterizations of the full set of the quantum anomalies of a large class of lattice systems, which exactly encode the aforementioned robust microscopic symmetry-related information of the lattice system. These quantum anomalies are referred to as Lieb-Schultz-Mattis anomalies. Moreover, in Ref. [19] we developed a systematic framework to calculate the quantum anomaly of a generic symmetry-enriched topological quantum spin liquid. With these previous developments, in Sec. V we will present the general framework to classify any two dimensional symmetry-enriched topological quantum spin liquids in any lattice systems with any symmetries, by matching the anomaly of the lattice system and the anomaly of the symmetry-enriched topological quantum spin liquid. Here the topological quantum spin liquid can be described by a topological order that is either chiral or non-chiral, Abelian or non-Abelian.
The symmetry can include both internal and lattice symmetries, unitary and anti-unitary symmetries, and symmetries that permute anyons. In the rest of the paper, we will apply this framework to classify various symmetry-enriched topological quantum spin liquids. We will focus on examples with symmetry group \(G=G_{s}\times G_{int}\), where \(G_{s}\) is a certain lattice symmetry, and \(G_{int}\) is an on-site internal symmetry.

We remark that our philosophy of encoding the robust microscopic symmetry-related information of a physical system into its quantum anomaly and using anomaly-matching to classify quantum many-body states is very general. Besides classifying symmetry-enriched topological quantum spin liquids, it can also be used to classify other quantum states. In fact, we and our coauthors have applied it to classify certain symmetry-enriched gapless quantum spin liquids in Ref. [6]. Our anomaly-based philosophy can be viewed as a generalization of the theories of symmetry indicators or topological quantum chemistry, which classify band structures [20; 21]. In fact, the robust microscopic symmetry-related information of a physical system we take is identical to what those theories take, but those theories only apply to weakly correlated systems that can be described by band theory, while ours applies to generic strongly correlated systems.

## II Outline and summary

The outline of the rest of the paper and summary of the main results are as follows.

* In Sec. III, we briefly review how to describe a symmetry-enriched topological quantum spin liquid and its quantum anomaly. The description is based on category theory, but no previous knowledge of category theory is required.
* In Sec. IV, we discuss the symmetry properties of a lattice system and how to encode them into the quantum anomaly of a lattice system.
* In Sec. V, we present our general framework to classify symmetry-enriched topological quantum spin liquids, which may be Abelian or non-Abelian, and chiral or non-chiral. This framework applies to topological quantum spin liquids with any symmetry, which may contain both lattice symmetry and internal symmetry, may contain anti-unitary symmetry and may permute anyons.
* In Sec. VI, we apply the framework in Sec. V to classify the \(U(1)_{2N}\) topological quantum spin liquid enriched by a \(p6\times SO(3)\) or \(p4\times SO(3)\) symmetry, where \(p6\) and \(p4\) are lattice symmetries and \(SO(3)\) is the spin rotational symmetry. In this section we spell out the details of the analysis, including the symmetry fractionalization class of each symmetry-enriched state and the calculation of the quantum anomalies. The results are summarized in Table 7.
* In Sec. VII, we apply our framework in Sec. V to classify the non-Abelian Ising\({}^{(\nu)}\) topological quantum spin liquid enriched by a \(p6\times SO(3)\) or \(p4\times SO(3)\) symmetry.
* In Sec. VIII, we apply our framework in Sec. V to classify the \(\mathbb{Z}_{N}\) topological quantum spin liquids enriched by one of these four symmetries: \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p6m\times\mathbb{Z}_{2}^{T}\) and \(p4m\times\mathbb{Z}_{2}^{T}\), where \(p6m\) and \(p4m\) are lattice symmetries, and \(SO(3)\) and \(\mathbb{Z}_{2}^{T}\) are on-site spin rotational symmetry and time reversal symmetry, respectively. The results are summarized in Table 10.
  In particular, even for the simple case with \(N=2\), we find many states beyond the description based on the usual parton mean field.
* In Sec. IX, we apply our framework in Sec. V to classify the \(U(1)_{2N}\times U(1)_{-2N}\) topological quantum spin liquids enriched by one of these four symmetries: \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p6m\times\mathbb{Z}_{2}^{T}\) and \(p4m\times\mathbb{Z}_{2}^{T}\). The results are summarized in Table 10.
* We close our paper in Sec. X.
* Various appendices contain further details. Appendix A presents a formal treatment of the connection between the data characterizing a topological order enriched by reflection symmetry and the data characterizing a topological order enriched by time reversal symmetry. Appendix B reviews the properties of the lattice symmetries involved in this paper. Appendix C details the information of all symmetry fractionalization classes of all states studied in this paper. Appendix D presents the anomaly indicators for various symmetries, and their values in the physically relevant cases considered in this paper. Appendix E presents the details of the symmetry fractionalization classes of the symmetry-enriched \(\mathbb{Z}_{2}\) topological orders that are beyond the usual parton mean fields.

## III Universal characterization of symmetry-enriched topological quantum spin liquids

In this section, we review the universal characterization of symmetry-enriched topological quantum spin liquids. This characterization is divided into two parts. We first specify the topological order corresponding to the topological quantum spin liquid, which is reviewed in Sec. III.1. After this, assuming the symmetry is an internal symmetry, we will specify the global symmetry of the topological quantum spin liquid and how it acts on the topological order, which is reviewed in Sec. III.2. We review the anomaly of a symmetry-enriched topological order in Sec. III.3. Finally, we review the crystalline equivalence principle in Sec. III.4, which allows us to connect a topological order with lattice symmetry to one with internal symmetry.

### Topological order

The characteristic feature of a topological order is the presence of anyons, point-like excitations that can have self-statistics other than bosonic or fermionic statistics, and nontrivial mutual braiding statistics. When multiple anyons are present, there may also be a protected degenerate state space. It is believed that category theory can universally characterize all topological orders, or the anyons therein. In this subsection we briefly review the concepts relevant to this paper. Our review will be minimal and does not assume any knowledge of category theory itself. For a more comprehensive review, see e.g., Refs. [9; 22; 23] for a more physics oriented introduction, or Refs. [24; 25; 26; 27] for a more mathematical treatment.

Anyons in a topological order are denoted by \(a,b,c,\cdots\). A single anyon cannot be converted into a different single anyon via any local process. There is always a trivial anyon in all topological orders, obtained by performing some local operation on the ground state. Roughly speaking, in a many-body system where a topological order emerges at low energies, the quantum state is specified by two pieces of data: the global data characterizing the anyons, which cannot be changed by any local process, and the local data independent of the anyons, which can change by local operations.
For the purpose of this paper, two important properties of an anyon \(a\) will be used: its quantum dimension \(d_{a}\) and its topological spin \(\theta_{a}\). Suppose \(n\) anyons \(a\) are created; the dimension of the degenerate state space scales as \(d_{a}^{n}\) when \(n\) is large. So the quantum dimension \(d_{a}\) effectively measures how rich an internal state space the anyon \(a\) carries. If \(d_{a}=1\), then \(a\) is said to be an Abelian anyon. Otherwise it is a non-Abelian anyon. The topological spin \(\theta_{a}\) measures the self-statistics of \(a\). For bosons and fermions, the topological spin is \(\pm 1\). For a generic anyon, the topological spin can take other values.

Two anyons \(a\) and \(b\) can fuse into other anyons, expressed using the equation \(a\times b\cong\sum_{c}N_{ab}^{c}c\), where \(N_{ab}^{c}\) are positive integers and \(c\) is said to be a fusion outcome of \(a\) and \(b\). Generically, \(a\) and \(b\) may have multiple fusion outcomes, and there may also be multiple different ways to get each fusion outcome, which is why the right hand side of the fusion equation involves a summation and \(N_{ab}^{c}\) can be larger than \(1\). Physically, fusion means that if we only perform measurements far away from the anyons \(a\) and \(b\), we will think they look like the fusion outcome \(c\). Diagrammatically, we can use a trivalent vertex diagram to represent a splitting process, which can be viewed as the reversed process of fusion. In such a diagram, \(\mu\in\{1,2,\cdots,N_{ab}^{c}\}\) labels the vertex, and the arrows can be viewed as the world lines of the anyons. If the trivial anyon is in the fusion product of two anyons, we say that these two anyons are anti-particles of each other, and we denote the anti-particle of \(a\) by \(\bar{a}\).

When multiple anyons are present, we may imagine fusing them in different orders. Similarly, when a given anyon splits into multiple other anyons, it may also do so in different orders. For example, an anyon \(d\) can split into three anyons, \(a\), \(b\) and \(c\), in two different orders: via an intermediate anyon \(e\) (first \(d\to e\times c\), then \(e\to a\times b\)) or via an intermediate anyon \(f\) (first \(d\to a\times f\), then \(f\to b\times c\)). Generically, the states obtained after these two processes are not identical, but they can be related by the so-called \(F\)-symbol, which is a unitary matrix acting on the degenerate state space. Here \(e\) is a fusion outcome of \(a\) and \(b\), with vertex label \(\alpha\in\{1,\cdots,N_{ab}^{e}\}\), \(d\) is a fusion outcome of \(e\) and \(c\), with \(\beta\in\{1,\cdots,N_{ec}^{d}\}\), \(f\) is a fusion outcome of \(b\) and \(c\), with \(\mu\in\{1,\cdots,N_{bc}^{f}\}\), and \(f\) and \(a\) also fuse into \(d\), with \(\nu\in\{1,\cdots,N_{af}^{d}\}\).

When anyons move around each other, the state may acquire some nontrivial braiding phase factor, and it may even be acted on by a unitary matrix in general. The statistics and braiding properties of the anyons are encoded in the \(R\)-symbol, which is also a unitary matrix acting in the degenerate state space, defined via the diagram in which the world lines of two anyons are exchanged before they fuse. The topological spin can be expressed using the \(R\)-symbol via \(\theta_{a}=\frac{1}{d_{a}}\sum_{c}d_{c}\,\mathrm{Tr}\left(R_{c}^{aa}\right)\). The \(F\)- and \(R\)-symbols satisfy strong constraints and have some "gauge" freedom, i.e., two sets of \(\{F,R\}\) data related by certain gauge transformations are physically equivalent. We remark that the \(F\)- and \(R\)-symbols can all be defined microscopically [28].
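As a concrete instance of this data, consider the \(\mathbb{Z}_{2}\) toric code (our own minimal illustration, independent of any specific model discussed later): it has four Abelian anyons \(1,e,m,f\), all with \(d_{a}=1\), \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) fusion rules, and topological spins and mutual braiding phases that follow directly from the anyon labels:

```python
from itertools import product

# Label each toric-code anyon by (n_e, n_m): 1=(0,0), e=(1,0), m=(0,1), f=(1,1).
NAMES = {(0, 0): "1", (1, 0): "e", (0, 1): "m", (1, 1): "f"}

def fuse(a, b):
    """Fusion is addition of labels mod 2 (all anyons Abelian, d_a = 1)."""
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

def spin(a):
    """Topological spin theta_a = (-1)^(n_e * n_m): f is a fermion, e and m bosons."""
    return (-1) ** (a[0] * a[1])

def braid(a, b):
    """Full mutual braiding M_{a,b} = theta_{a x b} / (theta_a * theta_b).

    Spins are +-1 here, so dividing by a spin equals multiplying by it."""
    return spin(fuse(a, b)) * spin(a) * spin(b)

for a, b in product(NAMES, NAMES):
    print(NAMES[a], "x", NAMES[b], "->", NAMES[fuse(a, b)], " M =", braid(a, b))
# e and m braid with phase -1; this global data cannot be changed
# by any local operation.
```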
### Global symmetry

In the presence of global symmetry, anyons can display interesting phenomena including (1) anyon permutation and (2) symmetry fractionalization. We will review the concepts relevant to this paper below, and more comprehensive reviews can be found in, e.g., Refs. [9; 25; 29]. In this subsection and the next, we will assume that all symmetries are internal symmetries, i.e., they do not move the locations of the degrees of freedom. We will comment on how to deal with the case with lattice symmetry in Sec. III.4.

Before specifying any microscopic symmetry of interest, it is useful to first discuss the abstract topological symmetry group of a topological order, whose elements can be viewed as invertible maps that take one anyon into another, such that the fusion properties of the topological order are invariant. For a unitary (anti-unitary) topological symmetry action, the braiding properties, encoded in the \(F\)- and \(R\)-symbols, are preserved (conjugated). Later in the paper we will see many examples of topological symmetries of various topological orders.

Now we consider a microscopic symmetry described by a group \(G\). Suppose \(R_{\mathbf{g}}\) is a symmetry action, where \(\mathbf{g}\) labels an element of \(G\). This action can change anyons into other types; for example, it changes an anyon \(a\) into another anyon \({}^{\mathbf{g}}a\). We use \[\rho_{\mathbf{g}}:a\rightarrow\ ^{\mathbf{g}}a \tag{1}\] to represent the anyon permutation pattern. Mathematically, we may say that \(\rho_{\mathbf{g}}\) describes a group homomorphism from \(G\) to the topological symmetry group of a topological order. From here we see why the notion of topological symmetry is important: It encodes all possible anyon permutation patterns.

However, \(\rho_{\mathbf{g}}\) by itself is insufficient to fully characterize a symmetry-enriched topological order. Consider creating three anyons \(a_{1}\), \(a_{2}\) and \(a_{3}\) from the ground states, such that \(a_{1}\times a_{2}\rightarrow\bar{a}_{3}\), i.e., these three anyons can fuse into the trivial anyon. After separating these anyons far away from each other, there are generically \(N^{\bar{a}_{3}}_{a_{1}a_{2}}\) degenerate such states. What effect does \(R_{\bf g}\) have when it acts on a state in this degenerate space, denoted by \(|\Psi_{a_{1},a_{2},a_{3}}\rangle\)? Since the state of a topological order is specified by the two pieces of information, the global one and the local one, the symmetry localization ansatz states that [30] \[R_{\bf g}|\Psi_{a_{1},a_{2},a_{3}}\rangle=V^{(1)}_{\bf g}V^{(2)}_{\bf g}V^{(3)}_{\bf g}U_{\bf g}({}^{\bf g}a_{1},{}^{\bf g}a_{2};{}^{\bf g}\bar{a}_{3})|\Psi_{{}^{\bf g}a_{1},{}^{\bf g}a_{2},{}^{\bf g}a_{3}}\rangle \tag{2}\] where \(V^{(i)}_{\bf g}\) is a local unitary operation supported only around the anyon \(a_{i}\), for \(i=1,2,3\), and \(U_{\bf g}({}^{\bf g}a_{1},{}^{\bf g}a_{2};{}^{\bf g}\bar{a}_{3})\) is a unitary matrix with rank \(N^{\bar{a}_{3}}_{a_{1}a_{2}}\), which acts on the degenerate state space and describes the symmetry action on the global part of the information contained in the state. Notice that the state appearing on the right hand side is \(|\Psi_{{}^{\bf g}a_{1},{}^{\bf g}a_{2},{}^{\bf g}a_{3}}\rangle\), i.e., generically the anyons are permuted by the symmetry. It can be shown that the local operations \(V\) satisfy \[\eta_{a_{i}}({\bf g},{\bf h})V^{(i)}_{\bf gh}|\Psi_{a_{1},a_{2},a_{3}}\rangle=R_{\bf g}V^{(i)}_{\bf h}R^{-1}_{\bf g}V^{(i)}_{\bf g}|\Psi_{a_{1},a_{2},a_{3}}\rangle \tag{3}\] for a pair of group elements \({\bf g}\) and \({\bf h}\).
Here \(\eta_{a_{i}}({\bf g},{\bf h})\) are generically nontrivial phase factors, implying that the anyons may carry fractional charge or projective quantum numbers under the symmetry, i.e., there can be symmetry fractionalization.2

Footnote 2: In the context of symmetry fractionalization, it is well known that the notion of fractional charge or projective quantum number is generically not the same as the projective representations more commonly discussed in mathematics, which are classified by \(H^{2}(G,U(1))\).

For a given topological order and a symmetry group \(G\), it turns out that the data \(\{\rho_{\bf g};U_{\bf g}(a,b;c),\eta_{a}({\bf g},{\bf h})\}\) completely characterizes how this topological order is enriched by the symmetry \(G\). We will often call \(U_{\bf g}(a,b;c)\) the \(U\)-symbol, and \(\eta_{a}({\bf g},{\bf h})\) the \(\eta\)-symbol. This data \(\{\rho_{\bf g};U_{\bf g}(a,b;c),\eta_{a}({\bf g},{\bf h})\}\) also satisfies strong constraints and has some "gauge" freedom, i.e., two sets of \(\{\rho_{\bf g};U_{\bf g}(a,b;c),\eta_{a}({\bf g},{\bf h})\}\) data related by certain gauge transformations are physically equivalent. These gauge transformations will not be explicitly used in this paper, but they are summarized in, e.g., Ref. [19]. Moreover, even if two sets of data \(\{\rho_{\bf g};U_{\bf g}(a,b;c),\eta_{a}({\bf g},{\bf h})\}\) are not related by a gauge transformation, they still correspond to the same physical state if they are related to each other by an anyon relabeling that preserves the fusion and braiding properties [9; 30], i.e., such a relabeling is precisely a unitary element in the topological symmetry group.3

Footnote 3: Namely, for any unitary element \({\bf k}_{0}\) in the topological symmetry group (which does not have to be a microscopic symmetry), we can relabel each anyon \(a\) by \({}^{{\bf k}_{0}}a\) (under this relabeling the data \(\{F^{abc}_{d},R^{ab}_{c}\}\) is the same as \(\{F^{{}^{{\bf k}_{0}}a\,{}^{{\bf k}_{0}}b\,{}^{{\bf k}_{0}}c}_{{}^{{\bf k}_{0}}d},R^{{}^{{\bf k}_{0}}a\,{}^{{\bf k}_{0}}b}_{{}^{{\bf k}_{0}}c}\}\) up to gauge transformations). Then the symmetry-enriched topological order with data \(\{\rho_{\bf g}:a\rightarrow{}^{\bf g}a;U_{\bf g}(a,b;c),\eta_{a}({\bf g},{\bf h})\}\) is the same as the one with data \(\{\rho^{\prime}_{\bf g}:a\rightarrow{}^{{\bf k}_{0}}\left[{}^{\bf g}\left({}^{\bar{\bf k}_{0}}a\right)\right];U^{\prime}_{\bf g}(a,b;c)=U_{\bf g}({}^{\bar{\bf k}_{0}}a,{}^{\bar{\bf k}_{0}}b;{}^{\bar{\bf k}_{0}}c),\eta^{\prime}_{a}({\bf g},{\bf h})=\eta_{{}^{\bar{\bf k}_{0}}a}({\bf g},{\bf h})\}\), where \(\bar{\bf k}_{0}\) denotes the inverse of \({\bf k}_{0}\).

For a given anyon permutation pattern, one can show that distinct symmetry fractionalization classes form a torsor over \(H^{2}_{\rho}(G,\mathcal{A})\). Namely, different possible symmetry fractionalization classes can be related to each other by elements of \(H^{2}_{\rho}(G,\mathcal{A})\), where \(\mathcal{A}\) is an Abelian group whose group elements correspond to the Abelian anyons in this topological order, and the group multiplication corresponds to the fusion of these Abelian anyons. The subscript \(\rho\) represents the permutation action of \(G\) on these Abelian anyons.
In particular, given an element \([t]\in H^{2}_{\rho}(G,\mathcal{A})\), we can go from one symmetry fractionalization class with data \(\eta_{a}({\bf g},{\bf h})\) to another with data \(\tilde{\eta}_{a}({\bf g},{\bf h})\) given by \[\tilde{\eta}_{a}({\bf g},{\bf h})=\eta_{a}({\bf g},{\bf h})M_{a,t({\bf g},{\bf h})} \tag{4}\] where \(t({\bf g},{\bf h})\in\mathcal{A}\) is a representative 2-cocycle for the cohomology class \([t]\) and \(M_{a,t({\bf g},{\bf h})}=\frac{\theta_{a\times t({\bf g},{\bf h})}}{\theta_{a}\,\theta_{t({\bf g},{\bf h})}}\) is the mutual braiding statistics between \(a\) and \(t({\bf g},{\bf h})\) [31]. In the case where no anyon is permuted by any symmetry, there is always a canonical notion of a trivial symmetry fractionalization class, where \(\eta_{a}({\bf g},{\bf h})=1\) for all anyons \(a\) and all \({\bf g},{\bf h}\in G\). In this case, an element of \(H^{2}(G,\mathcal{A})\) is sufficient to completely characterize the symmetry fractionalization class.

Later in the paper, for a symmetry-enriched topological order we will often just specify \(\rho_{\bf g}\), without explicitly specifying the \(U\)- and \(\eta\)-symbols. However, we will specify the \(U\)- and \(\eta\)-symbols of the topological symmetry group of this topological order, which allows us to determine the \(U\)- and \(\eta\)-symbols of the microscopic symmetry as follows. Denote the microscopic symmetry group by \(G\) and the topological symmetry group by \(G_{0}\); then \(\rho_{\bf g}\) defines a group homomorphism \(\varphi:G\to G_{0}\). Denote the \(U\)- and a set of \(\eta\)-symbols of \(G_{0}\) by \(U^{(0)}_{{\bf g}_{0}}(a,b;c)\) and \(\eta^{(0)}_{a}({\bf g}_{0},{\bf h}_{0})\), for any \({\bf g}_{0},{\bf h}_{0}\in G_{0}\). These \(U\)- and \(\eta\)-symbols are some data intrinsic to the topological order, independent of the microscopic symmetry \(G\), just like the \(F\)- and \(R\)-symbols. The \(U\)- and a set of \(\eta\)-symbols of the microscopic symmetry \(G\) can then be written as \[U_{\mathbf{g}}(a,b;c)=U^{(0)}_{\varphi(\mathbf{g})}(a,b;c),\quad\eta_{a}(\mathbf{g},\mathbf{h})=\eta^{(0)}_{a}(\varphi(\mathbf{g}),\varphi(\mathbf{h})) \tag{5}\] for any \(\mathbf{g},\mathbf{h}\in G\). Other symmetry fractionalization classes corresponding to other sets of \(\eta\)-symbols can be related to this one via Eq. (4).
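Eq. (4) can be made very explicit in the toric-code example above. A minimal sketch (our own illustration): take \(G=\mathbb{Z}_{2}=\{1,\mathbf{g}\}\) acting without permuting anyons, start from the trivial class \(\eta_{a}=1\), and twist by the 2-cocycle with \(t(\mathbf{g},\mathbf{g})=e\) and all other entries trivial; then \(\tilde{\eta}_{a}(\mathbf{g},\mathbf{g})=M_{a,e}\), so \(m\) and \(f\) acquire a projective \(\mathbb{Z}_{2}\) action:

```python
# Toric-code anyons labeled (n_e, n_m): 1=(0,0), e=(1,0), m=(0,1), f=(1,1).
NAMES = {(0, 0): "1", (1, 0): "e", (0, 1): "m", (1, 1): "f"}
TRIVIAL, E = (0, 0), (1, 0)

def spin(a):
    """Topological spin theta_a = (-1)^(n_e * n_m)."""
    return (-1) ** (a[0] * a[1])

def braid(a, b):
    """M_{a,b} = theta_{a x b} / (theta_a theta_b); spins are +-1, so 1/theta = theta."""
    ab = ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)
    return spin(ab) * spin(a) * spin(b)

# Z_2 = {0, 1}; the 2-cocycle t with t(g,g) = e and all other values trivial.
t = {(0, 0): TRIVIAL, (0, 1): TRIVIAL, (1, 0): TRIVIAL, (1, 1): E}

def eta_twisted(a, g, h):
    """Eq. (4) applied to the trivial class: eta~_a(g,h) = M_{a, t(g,h)}."""
    return braid(a, t[(g, h)])

for a, name in NAMES.items():
    print(name, "eta(g,g) =", eta_twisted(a, 1, 1))
# -> eta(g,g) = -1 for m and f (projective Z_2 action), +1 for 1 and e.
```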
### Anomalies of symmetry-enriched topological orders

A symmetry-enriched topological order can have a quantum anomaly. Roughly speaking, the anomaly characterizes the interplay between locality and the symmetry of the system. There are a few different definitions of quantum anomaly that are believed to be equivalent. In general, the anomaly can be viewed as an obstruction to gauging the symmetry, an obstruction to having a symmetric short-range entangled ground state, an obstruction to describing the system using a Hilbert space with a tensor product structure and on-site symmetry actions, or as the boundary manifestation of a higher dimensional bulk.

For a symmetry-enriched topological order, we can characterize its anomaly via anomaly indicators, a set of quantities expressed in terms of the data \(\{F^{abc}_{d},R^{ab}_{c},U_{\mathbf{g}}(a,b;c),\eta_{a}(\mathbf{g},\mathbf{h})\}\). For example, consider a \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry. The anomalies of \((2+1)\)-dimensional bosonic systems with a \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry are classified by \((\mathbb{Z}_{2})^{2}\). Suppose the two generators of \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) are \(C_{1}\) and \(C_{2}\). The two anomaly indicators can be given by \(\mathcal{I}_{3}(C_{1},C_{2})\) and \(\mathcal{I}_{3}(C_{2},C_{1})\), where \[\begin{split}\mathcal{I}_{3}\left(C_{1},C_{2}\right)=\frac{1}{D^{2}}\sum_{\begin{subarray}{c}a,b,x,u,\\ \mu,\nu,\hat{\mu},\hat{\nu},\rho,\sigma,\alpha\\ {}^{C_{1}}a=a,\ {}^{C_{2}}a\in a\times b\times{}^{C_{1}}b\end{subarray}}&d_{b}\,\frac{\theta_{x}}{\theta_{a}}\left(R_{u}^{b,{}^{C_{1}}b}\right)_{\rho\sigma}\left(F_{{}^{C_{2}}a}^{a,b,{}^{C_{1}}b}\right)_{(x,\hat{\mu},\hat{\nu})(u,\sigma,\alpha)}^{\ast}\left(F_{{}^{C_{2}}a}^{a,{}^{C_{1}}b,b}\right)_{({}^{C_{1}}x,\mu,\nu)(u,\rho,\alpha)}\\ &\times U_{C_{1}}^{-1}(a,b;x)_{\hat{\mu}\mu}\,U_{C_{1}}^{-1}(x,{}^{C_{1}}b;{}^{C_{2}}a)_{\hat{\nu}\nu}\times\frac{1}{\eta_{b}(C_{1},C_{1})}\frac{\eta_{a}(C_{2},C_{1})}{\eta_{a}(C_{1},C_{2})}\end{split} \tag{6}\] and \(\mathcal{I}_{3}(C_{2},C_{1})\) is obtained from the above equation by swapping \(C_{1}\leftrightarrow C_{2}\) [19]. The reason for the subscript of \(\mathcal{I}_{3}\) can be seen in Appendix D. The summation is over all anyon types \(a\) and \(b\) satisfying \({}^{C_{1}}a=a\) and that \({}^{C_{2}}a\) is a fusion outcome of \(a\times b\times{}^{C_{1}}b\), \(x\) and \(u\) also denote anyon types, and the Greek letters index the different ways of obtaining a particular fusion outcome at the corresponding vertices. These two anomaly indicators take values in \(\pm 1\), and each set of values of these anomaly indicators specifies an element in the \((\mathbb{Z}_{2})^{2}\) group, which classifies these anomalies.

### Crystalline equivalence principle

In the above two subsections our focus was on topological orders with purely internal symmetries. But as mentioned in the Introduction, one of the main goals of this paper is to classify topological quantum spin liquids enriched by a general symmetry, which may contain both lattice symmetry and internal symmetry. The crystalline equivalence principle [32; 33] provides a convenient way to describe a topological order with lattice symmetry (and possibly also internal symmetry) using a topological order with a purely internal symmetry. More concretely, the crystalline equivalence principle asserts that for each topological phase with a symmetry group \(G\), where \(G\) may contain both lattice symmetry and internal symmetry, there is a corresponding topological phase with only internal symmetries, where the symmetry group is still \(G\), and all orientation reversing symmetries in the original topological phase should be viewed as anti-unitary symmetries in the corresponding topological phase. For example, Appendix A explains how to translate the data characterizing a symmetry-enriched topological order with reflection symmetry into the data for a time reversal symmetry-enriched topological order. Strictly speaking, the above statement only applies to bosonic systems, which is the focus of the present paper. The fermionic version of this statement is still under development (see Refs. [34; 35; 36; 37] for recent progress).

## IV Symmetry properties and quantum anomalies of lattice systems

In the Introduction we mentioned that the three pieces of robust symmetry-related information of a lattice system can be encoded in its quantum anomaly. In this section, we make this notion more precise.
Although this idea is general, for concreteness, we focus on lattice spin systems in two spatial dimensions with one of these six symmetry groups: \(p6\times SO(3)\), \(p4\times SO(3)\), \(p6m\times\mathbb{Z}_{2}^{T}\), \(p4m\times\mathbb{Z}_{2}^{T}\), \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\), and \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\). Here \(p6\), \(p4\), \(p6m\) and \(p4m\) are lattice symmetry groups, whose definitions are explained in Figs. 1 and 2. These lattice symmetries are assumed to only move the locations of the microscopic degrees of freedom, without acting on their internal states, i.e., there is no spin-orbit coupling. The \(SO(3)\) and \(\mathbb{Z}_{2}^{T}\) are on-site spin rotational symmetry and time reversal symmetry, respectively. These symmetry settings are relevant to many theoretical, experimental and numerical studies, and the examples we consider in the later part of the paper will also be based on these symmetry settings.

Figure 1: Panel (a) shows the generators of the \(p6m\) group, including translations \(T_{1}\) and \(T_{2}\), a 6-fold rotation \(C_{6}\) and a mirror reflection \(M\). The two translation vectors have the same length, and their angle is \(2\pi/3\). The reflection axis of \(M\) bisects these two translation vectors. The \(p6\) symmetry is generated by \(T_{1}\), \(T_{2}\) and \(C_{6}\). Namely, \(p6\) has no \(M\) compared to \(p6m\). In panel (b), the hexagon is a translation unit cell of either \(p6m\) or \(p6\) lattice symmetry. There are three types of high symmetry points, labelled by \(a\), \(b\) and \(c\), and they form the sites of the triangular, honeycomb and kagome lattices, respectively. The \(C_{6}\) rotation center in panel (a) is at a type-\(a\) point.

Figure 2: Panel (a) shows the generators of the \(p4m\) group, including translations \(T_{1}\) and \(T_{2}\), a 4-fold rotation \(C_{4}\) and a mirror reflection \(M\). The two translation vectors have the same length, and their angle is \(\pi/2\). The reflection axis of \(M\) is parallel to the translation vector of \(T_{2}\). The \(p4\) symmetry is generated by \(T_{1}\), \(T_{2}\) and \(C_{4}\). Namely, \(p4\) has no \(M\) compared with \(p4m\). In panel (b), the square is a translation unit cell of either \(p4m\) or \(p4\) lattice symmetry. There are three high symmetry points, labelled by \(a\), \(b\) and \(c\). Both type-\(a\) and type-\(b\) points form a square lattice, and type-\(c\) points form a checkerboard lattice. The \(C_{4}\) rotation center in panel (a) is taken to be at a type-\(a\) point.

Given such a symmetry group, different lattice systems can be organized into the so-called lattice homotopy classes [11; 38]. Two lattice systems are in the same lattice homotopy class if and only if they can be deformed into each other by these operations: 1) moving the microscopic degrees of freedom while preserving the lattice symmetry, 2) at each location, identifying degrees of freedom with the same type of projective representation under the on-site symmetry, and 3) adding or removing degrees of freedom with linear representation (i.e., trivial projective representation) under the on-site symmetry, in a way preserving the lattice symmetry. Lattice systems within the same class share the same robust symmetry-related properties, while those in different classes have distinct symmetry properties and cannot be smoothly connected without breaking the symmetry. So the robust symmetry-related information of a lattice system is the lattice homotopy class it belongs to, which colloquially consists of the three pieces of information mentioned in the Introduction.

To make the above discussion less abstract, consider systems with \(p6\times SO(3)\) symmetry. From Fig. 1, there are three types of high symmetry points, forming a triangular, honeycomb and kagome lattice, respectively. The on-site symmetry \(SO(3)\) has two types of projective representations: half-odd-integer spins and integer spins. According to the above discussion, although spin-1/2 systems defined on triangular, honeycomb and kagome lattices have the same symmetry group, they are in different lattice homotopy classes and have sharply distinct symmetry properties, because they cannot be deformed into each other via the above operations.

Below we enumerate all lattice homotopy classes in our symmetry settings. To this end, we first specify the types of projective representations under the internal symmetries we consider. As mentioned above, there are two types of projective representations for the \(SO(3)\) symmetry. For time reversal symmetry \(\mathbb{Z}_{2}^{T}\), there are also two types of projective representations: Kramers singlet and Kramers doublet. For the symmetry \(SO(3)\times\mathbb{Z}_{2}^{T}\), there are actually four types of projective representations: integer spin under \(SO(3)\) while Kramers singlet under \(\mathbb{Z}_{2}^{T}\), half-odd-integer spin under \(SO(3)\) while Kramers doublet under \(\mathbb{Z}_{2}^{T}\), half-odd-integer spin under \(SO(3)\) while Kramers singlet under \(\mathbb{Z}_{2}^{T}\), and integer spin under \(SO(3)\) while Kramers doublet under \(\mathbb{Z}_{2}^{T}\).
The first two types of projective representations are more common in physical systems and theoretical models than the last two, so below we will only consider the first two. Therefore, for all three types of internal symmetries we consider, i.e., \(SO(3)\), \(\mathbb{Z}_{2}^{T}\) and \(SO(3)\times\mathbb{Z}_{2}^{T}\), there is a trivial projective representation and a nontrivial one under consideration.

Then for lattice systems with symmetry group being either \(p6\times SO(3)\), \(p6m\times\mathbb{Z}_{2}^{T}\) or \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\), there are 4 different lattice homotopy classes [11]:

1. Class "0". A representative configuration: a system with degrees of freedom only carrying the trivial projective representation under the internal symmetry.
2. Class "a". A representative configuration: a system with degrees of freedom carrying the nontrivial projective representation under the internal symmetry, which are located at the triangular lattice sites (type-\(a\) high symmetry points in Fig. 1).
3. Class "c". A representative configuration: a system with degrees of freedom carrying the nontrivial projective representation under the internal symmetry, which are located at the kagome lattice sites (type-\(c\) high symmetry points in Fig. 1).
4. Class "a+c". A representative configuration: a system with degrees of freedom carrying the nontrivial projective representation under the internal symmetry, which are located at both the triangular and kagome lattice sites (both type-\(a\) and type-\(c\) high symmetry points in Fig. 1).

Note that a system with degrees of freedom carrying the nontrivial projective representation that are located at the honeycomb lattice sites (type-\(b\) high symmetry points) is in class "0" [11]. For lattice systems with symmetry group being either \(p4\times SO(3)\), \(p4m\times\mathbb{Z}_{2}^{T}\) or \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\), there are 8 different lattice homotopy classes.
Using labels similar to the above, these are classes "0", "a", "b", "c", "a+b", "a+c", "b+c" and "a+b+c", respectively, where each label represents the type of the high symmetry points at which the degrees of freedom carrying the nontrivial projective representation are located ("0" means all degrees of freedom carry a linear representation, i.e., the trivial projective representation). Note that type-\(a\) and type-\(c\) high symmetry points are physically distinct once the \(C_{4}\) rotation center is specified, although they may look identical at first glance. To turn the above picture into useful mathematical formulations, Ref. [6] shows how to characterize each lattice homotopy class using its quantum anomaly, i.e., its Lieb-Schultz-Mattis anomaly. Different lattice homotopy classes have different anomalies, and the lattice homotopy class 0 has a trivial anomaly. In the context of topological orders, the anomalies can be expressed via the anomaly indicators. For topological quantum spin liquids with \(p6\times SO(3)\) symmetry, the anomaly indicators, derived in Appendix D, are \[\begin{split}\mathsf{l}_{1}&=\mathcal{I}_{3}(C_{2}U_{\pi},C_{2}U_{\pi}^{\prime})\,,\\ \mathsf{l}_{2}&=\mathcal{I}_{3}(T_{1}C_{2}U_{\pi},T_{1}C_{2}U_{\pi}^{\prime})\,,\end{split} \tag{7}\] where the expression of \(\mathcal{I}_{3}\) is given by Eq. (6), \(C_{2}\) is a 2-fold rotation symmetry (i.e., \(C_{2}\equiv C_{6}^{3}\)), while \(U_{\pi}\) and \(U_{\pi}^{\prime}\) are \(\pi\) spin rotations around two orthogonal axes. We can think of \(\mathsf{l}_{1}\) and \(\mathsf{l}_{2}\) as respectively detecting half-odd-integer spins at type-\(a\) and type-\(c\) high symmetry points, which are respectively the 2-fold rotation centers of the \(C_{2}\) and \(T_{1}C_{2}\) symmetries. More generally, the values of these anomaly indicators for the 4 lattice homotopy classes enumerated above are shown in Table 1. For topological quantum spin liquids with \(p4\times SO(3)\) symmetry, the anomaly indicators are \[\begin{split}\mathsf{l}_{1}&=\mathcal{I}_{3}(C_{2}U_{\pi},C_{2}U_{\pi}^{\prime})\,,\\ \mathsf{l}_{2}&=\mathcal{I}_{3}(T_{1}T_{2}C_{2}U_{\pi},T_{1}T_{2}C_{2}U_{\pi}^{\prime})\,,\\ \mathsf{l}_{3}&=\mathcal{I}_{3}(T_{1}C_{2}U_{\pi},T_{1}C_{2}U_{\pi}^{\prime})\,,\end{split} \tag{8}\] where \(C_{2}\) is still a 2-fold rotational symmetry (but \(C_{2}=C_{4}^{2}\) in this case), while \(U_{\pi}\) and \(U_{\pi}^{\prime}\) are still \(\pi\) spin rotations around two orthogonal axes. We can think of \(\mathsf{l}_{1}\), \(\mathsf{l}_{2}\) and \(\mathsf{l}_{3}\) as respectively detecting half-odd-integer spins at type-\(a\), type-\(b\) and type-\(c\) high symmetry points, which are respectively the 2-fold rotation centers of the \(C_{2}\), \(T_{1}T_{2}C_{2}\) and \(T_{1}C_{2}\) symmetries. More generally, the values of these anomaly indicators for the 8 lattice homotopy classes enumerated above are shown in Table 2. The anomaly indicators for the other symmetry groups we consider (i.e., \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p6m\times\mathbb{Z}_{2}^{T}\) and \(p4m\times\mathbb{Z}_{2}^{T}\)) and their values in each lattice homotopy class are more complicated, and they are presented in Appendix D. We remark that, strictly speaking, Eqs. (7) and (8) are anomaly indicators of topological quantum spin liquids with purely internal \(p6\times SO(3)\) or \(p4\times SO(3)\) symmetry, but the crystalline equivalence principle discussed in Sec.
V still allows us to use them to classify symmetry-enriched topological quantum spin liquids. \begin{table} \begin{tabular}{c|c|c|c|c} \hline & 0 & a & c & a+c \\ \hline \(\mathsf{l}_{1}\) & \(1\) & \(-1\) & \(1\) & \(-1\) \\ \(\mathsf{l}_{2}\) & \(1\) & \(1\) & \(-1\) & \(-1\) \\ \hline \end{tabular} \end{table} Table 1: Values of the anomaly indicators for the 4 lattice homotopy classes with symmetry group \(p6\times SO(3)\). \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline & 0 & a & b & c & a+b & a+c & b+c & a+b+c \\ \hline \(\mathsf{l}_{1}\) & \(1\) & \(-1\) & \(1\) & \(1\) & \(-1\) & \(-1\) & \(1\) & \(-1\) \\ \(\mathsf{l}_{2}\) & \(1\) & \(1\) & \(-1\) & \(1\) & \(-1\) & \(1\) & \(-1\) & \(-1\) \\ \(\mathsf{l}_{3}\) & \(1\) & \(1\) & \(1\) & \(-1\) & \(1\) & \(-1\) & \(-1\) & \(-1\) \\ \hline \end{tabular} \end{table} Table 2: Values of the anomaly indicators for the 8 lattice homotopy classes with symmetry group \(p4\times SO(3)\). ## V Framework of classification Now we are ready to present our framework to classify symmetry-enriched topological quantum spin liquids. Our framework is based on the hypothesis of emergibility [6; 39]. Namely, suppose the anomaly of the lattice system is \(\omega\); then, by tuning the parameters of this lattice system, a quantum many-body state (or its low-energy effective theory) with anomaly \(\Omega\) can emerge at low energies if and only if the anomaly-matching condition holds: \(\omega=\Omega\). The "only if" part of this statement is established and well known [40]. The "if" part is hypothetical, but there is no known counterexample to it and it is supported by multiple nontrivial examples [41; 42; 43; 44]. So we will assume this hypothesis to be true and use it as our basis of analysis. With this hypothesis, the framework to classify symmetry-enriched topological quantum spin liquids, or equivalently, to obtain the possible symmetry-enriched topological quantum spin liquids that can emerge in the lattice system of interest, is as follows. 1. Given the symmetry group, which may contain both lattice symmetry and internal symmetry, we first use the crystalline equivalence principle in Sec. III.4 to translate it into a purely internal symmetry. 2. Based on the above internal symmetry and the topological quantum spin liquid, we use the method in Ref. [9] to obtain the classification of the internal symmetry enriched topological quantum spin liquids. 3. For each of the internal symmetry enriched topological quantum spin liquids, we use the method in Ref. [19] to obtain its anomaly, \(\Omega\). 4. As discussed in Sec. IV, the original lattice system has its own quantum anomaly, \(\omega\). We check if the anomaly-matching condition, \(\omega=\Omega\), holds. If it does (doesn't), then the corresponding symmetry-enriched topological quantum spin liquid can (cannot) emerge in this lattice system, according to the hypothesis of emergibility. If there is only an internal symmetry but no lattice symmetry, then step 1 in the framework can be ignored. In this case, if one is only interested in anomaly-free states, \(\omega\) in step 4 should be taken to be the trivial anomaly. For steps 3 and 4, in our context \(\Omega\) (\(\omega\)) can be represented by the values of the anomaly indicators of the topological quantum spin liquid (lattice system), so checking whether \(\omega=\Omega\) becomes checking whether the values of these two sets of anomaly indicators match.
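To make step 4 concrete, the following minimal sketch (in Python; the function names are ours, not from the authors' uploaded code) encodes a \(p4\times SO(3)\) lattice homotopy class as the subset of high symmetry points hosting the nontrivial projective representation, reads off the lattice anomaly via the detection rule stated below Eq. (8) (\(\mathsf{l}_{k}=-1\) exactly when the corresponding point is occupied), and performs the anomaly-matching check. Its printout reproduces Table 2 column by column.

```python
# A lattice homotopy class for p4 x SO(3) is encoded as the subset of
# high symmetry points (a, b, c) hosting the nontrivial projective rep.

def lattice_indicators(cls):
    """Values (l1, l2, l3) of the anomaly indicators for a lattice
    homotopy class, per Table 2: l_k = -1 iff the point is occupied."""
    return tuple(-1 if p in cls else +1 for p in ("a", "b", "c"))

def can_emerge(state_indicators, cls):
    """Step 4: a symmetry-enriched state with indicator values Omega can
    emerge in a lattice system of class `cls` iff the two sets of
    indicator values coincide (hypothesis of emergibility)."""
    return tuple(state_indicators) == lattice_indicators(cls)

classes = ["", "a", "b", "c", "ab", "ac", "bc", "abc"]
for cls in classes:  # reproduces the columns of Table 2
    print(cls or "0", "->", lattice_indicators(cls))

# An anomaly-free state (all indicators equal to 1) matches only class 0:
print([cls or "0" for cls in classes if can_emerge((1, 1, 1), cls)])
```

The same pattern applies verbatim to the \(p6\times SO(3)\) case of Table 1, with the subset drawn from \(\{a,c\}\) only.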
We reiterate that the above framework can be straightforwardly generalized to classify quantum states other than symmetry-enriched topological quantum spin liquids. For example, it has been used to classify some gapless quantum spin liquids in Ref. [6]. In the following sections, we will apply the above framework to obtain the classification of some representative two dimensional symmetry-enriched topological quantum spin liquids on various lattice systems. ## VI \(\mathrm{U}(1)_{2N}\) topological orders: Generalized Abelian chiral spin liquids Our first class of examples are topological quantum spin liquids with \(U(1)_{2N}\) topological orders. These are Abelian chiral states, and the \(N=1\) case is the well-known Kalmeyer-Laughlin state [45; 46]. We will classify the \(U(1)_{2N}\) topological quantum spin liquid enriched by \(p6\times SO(3)\) or \(p4\times SO(3)\) symmetry. As discussed in Sec. III.4, these symmetries can be viewed as purely internal symmetries according to the crystalline equivalence principle. Our results are summarized in Table 7. The topological properties of the \(U(1)_{2N}\) topological order can be described by either the Laughlin-\((1/2N)\) wave function or a Chern-Simons theory with Lagrangian \(\mathcal{L}=-\frac{2N}{4\pi}\epsilon^{\mu\nu\lambda}A_{\mu}\partial_{\nu}A_{\lambda}\), with \(A\) a dynamical \(U(1)\) gauge field. These states also allow a description using parton mean field. Specifically, one can consider \(2N\) species of fermionic partons with an \(SU(2N)\) gauge structure. When all species are in a Chern band with a unit Chern number, the resulting state is the \(U(1)_{2N}\) topological order.4 Footnote 4: The special case with \(N=1\) allows another parton mean field description, described by fermionic partons with a \(U(1)\) gauge structure. When these fermionic partons are in a Chern band with Chern number 2, the resulting state is the \(U(1)_{2}\) topological order. The special case with \(N=2\) also allows another parton mean field description, described by fermionic partons with a \(\mathbb{Z}_{2}\) gauge structure. When these fermionic partons form a \(d+id\) superconductor, the resulting state is the \(U(1)_{4}\) topological order [22]. The above descriptions of these topological quantum spin liquids all suffer from some disadvantages. Concretely, the Laughlin wave function is a single specific state, and it cannot describe different symmetry-enriched states. To capture the symmetry actions in the Chern-Simons theory, one needs to invoke the concept of 2-group symmetries [47], which are not exact symmetries of the physical system. Also, in the \(SU(2N)\) parton mean field description, the projective quantum numbers of the fermionic partons are not exactly the same as the symmetry fractionalization classes of the anyons. Below we will discuss the topological properties of these states in the language of Sec. III, which does not suffer from the above disadvantages, since it can describe general symmetry-enriched \(U(1)_{2N}\) topological quantum spin liquids directly in terms of the symmetry properties of the anyons. We label anyons in \(\mathrm{U}(1)_{2N}\) by \((a)\), where \(a\) is an element in \(\{0,\dots,2N-1\}\). These are all Abelian anyons with \(d_{(a)}=1\). The fusion rule is given by addition modulo \(2N\), i.e., \[(a)\otimes(b)=([a+b]_{2N})\,. \tag{9}\] In this paper, we use the notation \([x]_{y}\) to denote \(x\) modulo \(y\) for any integer \(x\) and positive integer \(y\), and \([x]_{y}\) takes values in \(\{0,\ldots,y-1\}\).
The \(F\)-symbols can be written as \[F^{(a)(b)(c)}=e^{\frac{i\pi}{2N}a(b+c-[b+c]_{2N})}\,, \tag{10}\] the \(R\)-symbols are \[R^{(a)(b)}=e^{\frac{i\pi}{2N}ab}\,, \tag{11}\] which yield the topological spins: \[\theta_{(a)}=e^{\frac{i\pi}{2N}a^{2}}\,. \tag{12}\] The topological symmetry of \(\mathrm{U}(1)_{2N}\) is complicated for general \(N\) [48].5 For \(N=1\), there is no nontrivial topological symmetry. For \(N\geqslant 2\), there is always a \(\mathbb{Z}_{2}\) topological symmetry generated by the charge conjugation symmetry \(C\), such that anyon \((a)\rightarrow([-a]_{2N})\) under \(C\).6 For this topological symmetry, we can take the \(U\)-symbols as in Eq. (13). To diagnose the symmetry fractionalization classes below, we use the topological invariants \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) defined in Eqs. (14), (15) and (16); their values on the generators of the symmetry fractionalization classes are enough to specify all symmetry fractionalization classes for \(\mathrm{U}(1)_{2N}\). Footnote 5: However, as far as we understand, the statement in Ref. [48] about the topological symmetry of \(\mathrm{U}(1)_{2N}\) is not true. The authors there claim that the topological symmetry is isomorphic to the automorphism group \(\mathrm{Aut}(\mathbb{Z}_{2N})\) of \(\mathbb{Z}_{2N}\), i.e., that it contains all actions of the form \((a)\rightarrow([qa]_{2N})\) with \((q,2N)=1\). However, a general automorphism of the fusion rules need not preserve the topological spins in Eq. (12), so not every element of \(\mathrm{Aut}(\mathbb{Z}_{2N})\) is a topological symmetry. Footnote 6: For some \(N\) there are additional topological symmetries. For example, the action \((a)\rightarrow([5a]_{12})\) is a unitary \(\mathbb{Z}_{2}\) topological symmetry of the \(U(1)_{12}\) topological order; one can easily check that this map preserves the fusion and braiding properties of the \(U(1)_{12}\) state. ### Example: \(\mathbb{Z}_{2}\times SO(3)\) To illustrate our calculation of the anomaly of the \(U(1)_{2N}\) topological order with \(p6\times SO(3)\) or \(p4\times SO(3)\) symmetry, let us first discuss the example where the symmetry is \(\mathbb{Z}_{2}\times SO(3)\) in detail. It turns out that the calculation of the anomaly when the symmetry is \(p6\times SO(3)\) or \(p4\times SO(3)\) can be reduced to this example, by restricting \(p6\) or \(p4\) to its various \(\mathbb{Z}_{2}\) subgroups. The anomalies associated with the \(\mathbb{Z}_{2}\times SO(3)\) symmetry are classified by \[H^{4}(\mathbb{Z}_{2}\times SO(3),\mathrm{U}(1))\cong\mathbb{Z}_{2}\,. \tag{17}\] Hence there is only one type of nontrivial anomaly, which can be detected by the anomaly indicator \(\mathsf{I}=\mathcal{I}_{3}(C_{2}U_{\pi},C_{2}U^{\prime}_{\pi})\), where \(C_{2}\) is the generator of \(\mathbb{Z}_{2}\), while \(U_{\pi}\) and \(U^{\prime}_{\pi}\) are elements of \(SO(3)\), representing the \(\pi\)-rotations about two orthogonal axes. The \(SO(3)\) symmetry cannot permute anyons because all elements of \(SO(3)\) are continuously connected to the identity element.
Hence, only the generator of \(\mathbb{Z}_{2}\), denoted by \(C_{2}\) here, can permute anyons by charge conjugation, and there are two possibilities. #### VI.1.1 No anyon permutation The first possibility is that the action of \(C_{2}\) is trivial and there is no anyon permutation. For the case with \(N=1\), this is the only possibility to be considered. Then the symmetry fractionalization classes are classified by \[H^{2}_{(1)}(\mathbb{Z}_{2}\times SO(3),\mathbb{Z}_{2N})=H^{2}(\mathbb{Z}_{2},\mathbb{Z}_{2N})\oplus H^{2}(SO(3),\mathbb{Z}_{2N})=(\mathbb{Z}_{2})^{2}\,. \tag{18}\] Namely, there are 2 generators that generate 4 different symmetry fractionalization classes. To understand these symmetry fractionalization classes, we can directly write down representative cochains of them. A representative cochain of the first generator, which we denote by \(\widetilde{\beta}(x)\) and comes from \(H^{2}(\mathbb{Z}_{2},\mathbb{Z}_{2N})\), is: \[\widetilde{\beta}(x)(C^{i}_{2},C^{j}_{2})=\frac{i+j-[i+j]_{2}}{2}=ij\mod 2N\,, \tag{19}\] with \(i,j\in\{0,1\}\). The reason for the name of this generator is explained in Appendix C. Physically, this generator detects whether the anyon (1) carries a fractional charge under the \(\mathbb{Z}_{2}\) symmetry. The second generator, which comes from \(H^{2}(SO(3),\mathbb{Z}_{2N})\), detects whether the anyon (1) carries a half-odd-integer spin under the \(SO(3)\) symmetry, and we denote it by \(Nw_{2}\), for reasons explained in Appendix C. To have a representative cochain of \(Nw_{2}\), it is convenient to consider a \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) subgroup of \(SO(3)\) generated by \(U_{\pi}\) and \(U^{\prime}_{\pi}\), and an element in this subgroup can be written as \(U^{i}_{\pi}U^{\prime j}_{\pi}\), with \(i,j\in\{0,1\}\). Then, restricting \(SO(3)\) to this \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) subgroup, the representative cochain of \(Nw_{2}\) is \[(Nw_{2})\,(U^{i_{1}}_{\pi}U^{\prime i_{2}}_{\pi},U^{j_{1}}_{\pi}U^{\prime j_{2}}_{\pi})=N(i_{1}j_{1}+i_{2}j_{2}+i_{1}j_{2})\mod 2N\,. \tag{20}\] So the symmetry fractionalization classes can be written as \[w=n_{1}\cdot\widetilde{\beta}(x)+n_{2}\cdot Nw_{2}\,, \tag{21}\] and labeled as \(\{n_{1},n_{2}\}\) with \(n_{1,2}\in\{0,1\}\). When the \(SO(3)\) symmetry is restricted to its \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) subgroup generated by \(U_{\pi}\) and \(U^{\prime}_{\pi}\), a representative cochain can be taken as \[w(C^{i_{1}}_{2}U^{i_{2}}_{\pi}U^{\prime i_{3}}_{\pi},C^{j_{1}}_{2}U^{j_{2}}_{\pi}U^{\prime j_{3}}_{\pi})=n_{1}i_{1}j_{1}+n_{2}N(i_{2}j_{2}+i_{3}j_{3}+i_{2}j_{3})\mod 2N\,. \tag{22}\] Combining the above equation and Eq. (4), we get \[\begin{split}&\eta_{(a)}(C_{2}U_{\pi},C_{2}U^{\prime}_{\pi})=\eta_{(a)}(C_{2}U_{\pi},C_{2}U_{\pi})=\exp\left(\frac{i\pi}{N}a(n_{1}+Nn_{2})\right),\\ &\eta_{(a)}(C_{2}U^{\prime}_{\pi},C_{2}U_{\pi})=\exp\left(\frac{i\pi}{N}an_{1}\right).\end{split} \tag{23}\] Now we plug Eqs. (10), (11), (12) and (23) into Eq. (6) (the \(U\)-symbols therein can all be taken as 1 since there is no anyon permutation), and get the anomaly indicator of the state in symmetry fractionalization class \(\{n_{1},n_{2}\}\): \[\mathcal{I}_{3}(C_{2}U_{\pi},C_{2}U_{\pi}^{\prime})=(-1)^{n_{1}n_{2}}\,. \tag{24}\]
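The data entering this computation can be checked numerically. The sketch below (illustrative Python, not the authors' uploaded code) verifies the ribbon identity \(\theta_{([a+b]_{2N})}=\theta_{(a)}\theta_{(b)}R^{(a)(b)}R^{(b)(a)}\) for the data in Eqs. (11) and (12), and reproduces Eq. (23) under one assumption on our part: that Eq. (4), which is not reproduced here, sets \(\eta_{(a)}(\mathbf{g},\mathbf{h})\) equal to the mutual braiding \(e^{\frac{i\pi}{N}a\,w(\mathbf{g},\mathbf{h})}\) of \((a)\) with the anyon \((w(\mathbf{g},\mathbf{h}))\), the standard symmetry fractionalization relation.

```python
import numpy as np
from itertools import product

N = 3            # any N >= 1; U(1)_{2N} has 2N anyons
M = 2 * N

theta = lambda a: np.exp(1j * np.pi * a * a / M)       # Eq. (12)
R     = lambda a, b: np.exp(1j * np.pi * a * b / M)    # Eq. (11)

# Ribbon identity relating spins and braiding for Abelian anyons
for a, b in product(range(M), repeat=2):
    assert np.isclose(theta((a + b) % M),
                      theta(a) * theta(b) * R(a, b) * R(b, a))

# Representative cochain w of the class {n1, n2}, Eq. (22); group elements
# are encoded by their exponent triples (i1, i2, i3) on (C2, Upi, Upi')
def w(n1, n2, i, j):
    return (n1 * i[0] * j[0]
            + n2 * N * (i[1] * j[1] + i[2] * j[2] + i[1] * j[2])) % M

# eta_a(g, h) = mutual braiding of (a) with (w(g, h))  [assumed Eq. (4)]
def eta(a, n1, n2, i, j):
    return np.exp(1j * np.pi * a * w(n1, n2, i, j) / N)

# Check Eq. (23) for g = C2*Upi = (1,1,0) and h = C2*Upi' = (1,0,1)
for n1, n2, a in product((0, 1), (0, 1), range(M)):
    assert np.isclose(eta(a, n1, n2, (1, 1, 0), (1, 0, 1)),
                      np.exp(1j * np.pi * a * (n1 + N * n2) / N))
print("ribbon identity and Eq. (23) hold for 2N =", M)
```

Changing \(N\) at the top exercises the same checks for any \(\mathrm{U}(1)_{2N}\) state.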
#### VI.1.2 \(C_{2}\) acts as charge conjugation The second possibility is that \(C_{2}\) acts by charge conjugation. This possibility occurs only if \(N\geqslant 2\). Then the symmetry fractionalization classes are classified by \[H^{2}_{(2)}(\mathbb{Z}_{2}\times SO(3),\mathbb{Z}_{2N})=(\mathbb{Z}_{2})^{2}\,. \tag{25}\] There are also 2 generators that generate 4 different symmetry fractionalization classes, but these symmetry fractionalization classes are different from those in Sec. VI.1.1. Explicitly, a representative cochain of the first generator, which we denote by \(Nx^{2}\), is: \[(Nx^{2})(C_{2}^{i},C_{2}^{j})=Nij\mod 2N\,, \tag{26}\] with \(i,j\in\{0,1\}\). The second generator also detects whether the anyon (1) carries a half-odd-integer spin under the \(SO(3)\) symmetry, and we also denote it by \(Nw_{2}\). The representative cochain restricted to the \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) subgroup generated by \(U_{\pi}\) and \(U_{\pi}^{\prime}\) is still given by Eq. (20). So the symmetry fractionalization classes can be written as \[w=n_{1}\cdot Nx^{2}+n_{2}\cdot Nw_{2}\,, \tag{27}\] and also labeled as \(\{n_{1},n_{2}\}\) with \(n_{1,2}\in\{0,1\}\). When the \(SO(3)\) symmetry is restricted to its \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) subgroup generated by \(U_{\pi}\) and \(U_{\pi}^{\prime}\), a representative cochain can be taken as \[w(C_{2}^{i_{1}}U_{\pi}^{i_{2}}U_{\pi}^{\prime i_{3}},C_{2}^{j_{1}}U_{\pi}^{j_{2}}U_{\pi}^{\prime j_{3}})=n_{1}Ni_{1}j_{1}+n_{2}N(i_{2}j_{2}+i_{3}j_{3}+i_{2}j_{3})\mod 2N\,. \tag{28}\] Combining the above equation and Eq. (4), we get \[\begin{split}&\eta_{(a)}(C_{2}U_{\pi},C_{2}U_{\pi}^{\prime})=\eta_{(a)}(C_{2}U_{\pi},C_{2}U_{\pi})=\exp\left(i\pi a(n_{1}+n_{2})\right),\\ &\eta_{(a)}(C_{2}U_{\pi}^{\prime},C_{2}U_{\pi})=\exp\left(i\pi an_{1}\right).\end{split} \tag{29}\] Now we plug Eqs. (10), (11), (12), (13) and (29) into Eq. (6), and get the anomaly indicator of the state in symmetry fractionalization class \(\{n_{1},n_{2}\}\): \[\mathcal{I}_{3}(C_{2}U_{\pi},C_{2}U_{\pi}^{\prime})=(-1)^{(n_{1}+1)n_{2}N}\,. \tag{30}\] Hence when \(N\) is even the anomaly is always absent, and when \(N\) is odd, \(n_{1}=0,n_{2}=1\) gives a nonzero anomaly and otherwise the anomaly is absent. With this warm-up, we are ready to classify \(U(1)_{2N}\) topological quantum spin liquids enriched by \(p6\times SO(3)\) or \(p4\times SO(3)\) symmetry using the framework in Sec. V. The results are summarized in Table 7. ### \(p6\times SO(3)\) The generators \(T_{1,2}\) and \(SO(3)\) cannot permute anyons,7 and only the generator \(C_{6}\) can permute anyons by charge conjugation. Hence, there are two possibilities regarding how \(p6\times SO(3)\) can permute anyons: Footnote 7: The generators \(T_{1,2}\) cannot permute anyons because \(C_{6}T_{2}C_{6}^{-1}=T_{1}^{-1}\), implying that \(T_{1}\) and \(T_{2}\) should permute anyons in opposite ways, which, combined with \(C_{6}T_{1}C_{6}^{-1}=T_{1}T_{2}\), implies that neither \(T_{1}\) nor \(T_{2}\) can permute anyons. 1. Trivial \(C_{6}\) action: no anyon permutation. In this case, the possible symmetry fractionalization classes are classified by \[H_{(1)}^{2}(p6\times SO(3),\mathbb{Z}_{2N})=\mathbb{Z}_{2N}\oplus\mathbb{Z}_{(2N,6)}\oplus\mathbb{Z}_{2}\,, \tag{31}\] whose elements can be written as \[w=n_{1}\cdot\widetilde{\mathscr{B}}_{xy}^{(1)}+n_{2}\cdot\widetilde{\mathscr{B}}_{c^{2}}^{(1)}+n_{3}\cdot Nw_{2}\,, \tag{32}\] and labeled by \(\{n_{1},n_{2},n_{3}\}\), with \(n_{1}\in\{0,\ldots,2N-1\}\), \(n_{2}\in\{0,\ldots,(2N,6)-1\}\), \(n_{3}\in\{0,1\}\).
Here \(\widetilde{\mathscr{B}}_{xy}^{(1)}\), \(\widetilde{\mathscr{B}}_{c^{2}}^{(1)}\) and \(Nw_{2}\) are generators of \(\mathbb{Z}_{2N}\), \(\mathbb{Z}_{(2N,6)}\) and \(\mathbb{Z}_{2}\), respectively (the representative cochains and the reason for the names of these generators are given in Appendix C). We can identify these generators in terms of the following three topological invariants defined in Eqs. (14) and (15): \(\lambda_{1}(T_{1},T_{2})\), \(\lambda_{2}(C_{6})\) and \(\lambda_{1}(U_{\pi},U_{\pi}^{\prime})\), and the values of these topological invariants for the generators are listed in Table 3. Physically, we can think of \(\widetilde{\mathscr{B}}_{xy}^{(1)}\), \(\widetilde{\mathscr{B}}_{c^{2}}^{(1)}\) and \(Nw_{2}\) as detecting whether the anyon (1) carries projective representation under translation symmetries, \(C_{6}\) and \(SO(3)\), respectively.8 For each symmetry fractionalization class, the \(U\)- and \(\eta\)-symbols can be obtained via Eqs. (4), (5) and (13). Footnote 8: In fact, the generator \(A_{c}^{2}\) actually detects whether the anyon (1) simultaneously carries projective representation under four different symmetries, generated by \(C_{6}\), \(T_{1}C_{6}^{3}\), \(T_{2}C_{6}^{3}\) and \(T_{1}T_{2}C_{6}^{3}\), respectively. Without considering anomaly matching, \(p6\times SO(3)\) symmetric \(U(1)_{2N}\) topological quantum spin liquids are classified by \(\{n_{1},n_{2},n_{3}\}\), if no symmetry permutes anyons. Recall that two symmetry fractionalization classes related to each other by relabeling anyons are physically identical, so \(\{n_{1},n_{2},n_{3}\}\) and \(\{[-n_{1}]_{2N},[-n_{2}]_{(2N,6)},n_{3}\}\) are identified. As argued in the Introduction and Sec. IV, in systems with lattice symmetry it is important to consider anomaly matching for the classification of symmetry-enriched topological quantum spin liquids. The values of the anomaly indicators for different lattice homotopy classes with \(p6\times SO(3)\) \begin{table} \begin{tabular}{c|c|c|c} \hline \hline SF class & \(\lambda_{1}(T_{1},T_{2})\) & \(\lambda_{2}(C_{6})\) & \(\lambda_{1}(U_{\pi},U_{\pi}^{\prime})\) \\ \hline \(\widetilde{\mathscr{B}}_{xy}^{(1)}\) & \(e^{\frac{2\pi i}{2N}}\) & 1 & 1 \\ \(\widetilde{\mathscr{B}}_{c^{2}}^{(1)}\) & 1 & \(e^{\frac{2\pi i}{(2N,6)}}\) & 1 \\ \(Nw_{2}\) & 1 & 1 & \(-1\) \\ \hline \hline \end{tabular} \end{table} Table 3: Values of the topological invariants, given \(p6\times SO(3)\) symmetry with trivial action. symmetry are given in Table 1. The anomaly indicators for the \(U(1)_{2N}\) state in the symmetry fractionalization class \(\{n_{1},n_{2},n_{3}\}\) can be calculated in a way similar to Sec. VI.1.9 Footnote 9: Another way to calculate these anomaly indicators is as follows. We can restrict the \(p6\times SO(3)\) symmetry into two of its \(\mathbb{Z}_{2}\times SO(3)\) subgroups, where the \(\mathbb{Z}_{2}\) in the first subgroup is generated by \(C_{2}\equiv C_{6}^{3}\), and in the second it is generated by \(T_{1}C_{2}\). By comparing the representative cochains of \(B_{xy}\), \(A_{c}^{2}\) and \(Nw_{2}\) in Appendix C with Eq. (22), one can read off which \(\mathbb{Z}_{2}\times SO(3)\) symmetry fractionalization class of Sec. VI.1 the class \(\{n_{1},n_{2},n_{3}\}\) restricts to in each subgroup, from which the anomaly indicators follow. ### \(p4\times SO(3)\) \(SO(3)\) cannot permute anyons, and both the generators \(T_{1,2}\) and the generator \(C_{4}\) can permute anyons by charge conjugation.
Hence, there are four possibilities regarding how \(p4\times SO(3)\) can permute anyons: 1. Trivial \(T_{1,2}\) and \(C_{4}\) action. In this case, the possible symmetry fractionalization classes are given by \[H_{(1)}^{2}(p4\times SO(3),\mathbb{Z}_{2N})=\mathbb{Z}_{2N}\oplus\mathbb{Z}_{(2N,4)}\oplus\left(\mathbb{Z}_{2}\right)^{2}, \tag{37}\] whose elements can be labeled as \[w=n_{1}\cdot\widetilde{\mathscr{B}}_{xy}^{(1)}+n_{2}\cdot\widetilde{\mathscr{B}}_{c^{2}}^{(1)}+n_{3}\cdot\widetilde{\beta}(A_{x+y})+n_{4}\cdot Nw_{2}\,, \tag{38}\] with \(n_{1}\in\{0,\ldots,2N-1\}\), \(n_{2}\in\{0,\ldots,(2N,4)-1\}\), \(n_{3,4}\in\{0,1\}\). Here \(\widetilde{\mathscr{B}}_{xy}^{(1)}\), \(\widetilde{\mathscr{B}}_{c^{2}}^{(1)}\), \(\widetilde{\beta}(A_{x+y})\) and \(Nw_{2}\) are generators of \(\mathbb{Z}_{2N}\), \(\mathbb{Z}_{(2N,4)}\) and the two \(\mathbb{Z}_{2}\) pieces, respectively (the representative cochains and the reason for the names of these generators are given in Appendix C). We can identify them in terms of the following four topological invariants defined in Eqs. (14) and (15): \(\lambda_{1}(T_{1},T_{2})\), \(\lambda_{2}(C_{4})\), \(\lambda_{2}(T_{1}C_{2})\) and \(\lambda_{1}(U_{\pi},U_{\pi}^{\prime})\), and we list the values of the topological invariants for the generators of symmetry fractionalization classes in Table 8. For each symmetry fractionalization class, the \(U\)- and \(\eta\)-symbols can be obtained via Eqs. (4), (5) and (13). Again, because symmetry fractionalization classes related by relabeling anyons are physically identical, different symmetry realizations on \(U(1)_{2N}\) in this case are specified by \(\{n_{1},n_{2},n_{3},n_{4}\}\), where \(\{n_{1},n_{2},n_{3},n_{4}\}\) is identified with \(\{[-n_{1}]_{2N},[-n_{2}]_{(2N,4)},n_{3},n_{4}\}\). Calculating the anomaly indicators for the \(U(1)_{2N}\) state with symmetry fractionalization class labeled by \(\{n_{1},n_{2},n_{3},n_{4}\}\), we get \[\mathsf{l}_{1}=(-1)^{n_{2}n_{4}}\,,\quad\mathsf{l}_{2}=(-1)^{(n_{1}+n_{2})n_{4}}\,,\quad\mathsf{l}_{3}=(-1)^{(n_{2}+n_{3})n_{4}}\,. \tag{39}\] Therefore, by matching these anomaly indicators with Table 2, we arrive at the classification in Table 9.
\begin{table} \begin{tabular}{c|c|c|c|c} \hline Symmetry group & Lattice homotopy class & \(N=1\) & odd \(N>1\) & even \(N\) \\ \hline \multirow{4}{*}{\(p6\times SO(3)\)} & 0 & 5 & \(\left(\frac{5(N(N,3)+1)}{2}\right)+(5)\) & \(\left(\frac{5N(N,3)}{2}+3\right)+(8)\) \\ \cline{2-5} & a & 1 & \(\left(\frac{N(N,3)+1}{2}\right)+(1)\) & \(\left(\frac{N(N,3)}{2}\right)+(0)\) \\ \cline{2-5} & c & 1 & \(\left(\frac{N(N,3)+1}{2}\right)+(1)\) & \(\left(\frac{N(N,3)}{2}\right)+(0)\) \\ \cline{2-5} & a+c & 1 & \(\left(\frac{N(N,3)+1}{2}\right)+(1)\) & \(\left(\frac{N(N,3)}{2}+1\right)+(0)\) \\ \hline \multirow{8}{*}{\(p4\times SO(3)\)} & 0 & 9 & \(\left(\frac{9(N+1)}{2}\right)+(9)+(9)+(9)\) & \((9N+6)+(12)+(20)+(20)\) \\ \cline{2-5} & a & 1 & \(\left(\frac{N+1}{2}\right)+(1)+(1)+(1)\) & \((N)+(0)+(4)+(0)\) \\ \cline{2-5} & b & 1 & \(\left(\frac{N+1}{2}\right)+(1)+(1)+(1)\) & \((N)+(0)+(0)+(4)\) \\ \cline{2-5} & c & 1 & \(\left(\frac{N+1}{2}\right)+(1)+(1)+(1)\) & \((N+2)+(4)+(0)+(0)\) \\ \cline{2-5} & a+b & 1 & \(\left(\frac{N+1}{2}\right)+(1)+(1)+(1)\) & \((N)+(0)+(0)+(0)\) \\ \cline{2-5} & a+c & 1 & \(\left(\frac{N+1}{2}\right)+(1)+(1)+(1)\) & \((N)+(0)+(0)+(0)\) \\ \cline{2-5} & b+c & 1 & \(\left(\frac{N+1}{2}\right)+(1)+(1)+(1)\) & \((N)+(0)+(0)+(0)\) \\ \cline{2-5} & a+b+c & 1 & \(\left(\frac{N+1}{2}\right)+(1)+(1)+(1)\) & \((N)+(0)+(0)+(0)\) \\ \hline \end{tabular} \end{table} Table 7: Number of \(p6\times SO(3)\) and \(p4\times SO(3)\) symmetry-enriched \(U(1)_{2N}\) topological quantum spin liquids. For the case with a \(p6\times SO(3)\) symmetry, each of the last two columns is written as a sum of two terms, representing the number of states where no anyon is permuted by symmetries and where \(C_{6}\) acts as charge conjugation, respectively. For the case with \(p4\times SO(3)\) symmetry, each of the last two columns is written as a sum of four terms, representing the number of states where no anyon is permuted by symmetries, where \(C_{4}\) acts as charge conjugation while \(T_{1,2}\) do not, where \(T_{1,2}\) act as charge conjugation while \(C_{4}\) does not, and where \(C_{4}\) and \(T_{1,2}\) all act as charge conjugation, respectively. The details of the symmetry fractionalization class of each state can be found in Tables 9, 13 and 15, as well as in Appendix C. 2. Nontrivial \(C_{4}\) action, trivial \(T_{1,2}\) action. In this case, the possible symmetry fractionalization classes are given by \[H^{2}_{(2)}(p4\times SO(3),\mathbb{Z}_{2N})=(\mathbb{Z}_{2})^{4}\,. \tag{40}\] We can write these elements as \[w=n_{1}\cdot NB_{xy}+n_{2}\cdot NB_{c^{2}}+n_{3}\cdot\widetilde{\beta}(A_{x+y})+n_{4}\cdot Nw_{2}\,, \tag{41}\] with \(n_{1,2,3,4}\in\{0,1\}\). Here \(NB_{xy}\), \(NB_{c^{2}}\), \(\widetilde{\beta}(A_{x+y})\) and \(Nw_{2}\) are generators of the four \(\mathbb{Z}_{2}\) pieces, respectively (the representative cochains and the reason for the names of these generators are given in Appendix C). We can identify them in terms of the following four topological invariants defined in Eqs. (14), (15) and (16): \(\lambda_{1}(T_{1},T_{2})\), \(\lambda_{3}(C_{4})\), \(\lambda_{2}(T_{1}C_{2})\) and \(\lambda_{1}(U_{\pi},U_{\pi}^{\prime})\), and we list the values of the topological invariants for the generators of symmetry fractionalization classes in Table 10. For each symmetry fractionalization class, the \(U\)- and \(\eta\)-symbols can be obtained via Eqs. (4), (5) and (13). 3. Nontrivial \(T_{1,2}\) action, trivial \(C_{4}\) action.
In this case, the possible symmetry fractionalization classes are given by \[H^{2}_{(3)}(p4\times SO(3),\mathbb{Z}_{2N})=\mathbb{Z}_{(2N,4)}\oplus(\mathbb{Z}_{2})^{3}\,. \tag{43}\] We can write these elements as \[w=n_{1}\cdot\widetilde{\mathscr{B}}^{(3)}_{xy}+n_{2}\cdot NB_{c^{2}}+n_{3}\cdot NA_{x+y}^{2}+n_{4}\cdot Nw_{2}\,, \tag{44}\] with \(n_{1}\in\{0,\ldots,(2N,4)-1\}\), \(n_{2,3,4}\in\{0,1\}\). Here \(\widetilde{\mathscr{B}}^{(3)}_{xy}\), \(NB_{c^{2}}\), \(NA_{x+y}^{2}\) and \(Nw_{2}\) are generators of the \(\mathbb{Z}_{(2N,4)}\) and the three \(\mathbb{Z}_{2}\) pieces, respectively (the representative cochains and the reason for the names of these generators are given in Appendix C). We can identify them in terms of the following four topological invariants defined in Eqs. (14), (15) and (16): \(\lambda_{2}(C_{4})\), \(\lambda_{2}(T_{1}T_{2}C_{2})\), \(\lambda_{3}(T_{1}C_{2})\) and \(\lambda_{1}(U_{\pi},U_{\pi}^{\prime})\), and we list the values of the topological invariants for the generators of symmetry fractionalization classes in Table 11. For each symmetry fractionalization class, the \(U\)- and \(\eta\)-symbols can be obtained via Eqs. (4), (5) and (13). Again, because symmetry fractionalization classes related by relabeling anyons are physically identical, different symmetry realizations on \(U(1)_{2N}\) in this case are specified by \(\{n_{1},n_{2},n_{3},n_{4}\}\), where \(\{n_{1},n_{2},n_{3},n_{4}\}\) is identified with \(\{[-n_{1}]_{(2N,4)},n_{2},n_{3},n_{4}\}\). Calculating the anomaly indicators for \(U(1)_{2N}\) with symmetry fractionalization class labeled by \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline SF class & \(\lambda_{1}(T_{1},T_{2})\) & \(\lambda_{3}(C_{4})\) & \(\lambda_{2}(T_{1}C_{2})\) & \(\lambda_{1}(U_{\pi},U_{\pi}^{\prime})\) \\ \hline \(NB_{xy}\) & \(-1\) & \(1\) & \(1\) & \(1\) \\ \(NB_{c^{2}}\) & \(1\) & \(-1\) & \((-1)^{N}\) & \(1\) \\ \(\widetilde{\beta}(A_{x+y})\) & \(1\) & \(1\) & \(-1\) & \(1\) \\ \(Nw_{2}\) & \(1\) & \(1\) & \(1\) & \(-1\) \\ \hline \hline \end{tabular} \end{table} Table 10: Values of the topological invariants, given \(p4\times SO(3)\) symmetry with nontrivial \(C_{4}\) action. \begin{table} \begin{tabular}{|c|c|} \hline Lattice homotopy class & Symmetry fractionalization class \\ \hline 0 & \(\{n_{1},n_{2},n_{3},0\}\) or \(\{n_{1},n_{2},0,1\}\) for \(N\) even, \(\{n_{1},n_{2},n_{3},0\}\) or \(\{0,0,0,1\}\) for \(N\) odd \\ \hline a & \(\{1,1,1,1\}\) for \(N\) odd \\ \hline b & \(\{1,0,0,1\}\) for \(N\) odd \\ \hline c & \(\{n_{1},n_{2},1,1\}\) for \(N\) even, \(\{0,0,1,1\}\) for \(N\) odd \\ \hline a+b & \(\{0,1,1,1\}\) for \(N\) odd \\ \hline a+c & \(\{1,1,0,1\}\) for \(N\) odd \\ \hline b+c & \(\{1,0,1,1\}\) for \(N\) odd \\ \hline a+b+c & \(\{0,1,0,1\}\) for \(N\) odd \\ \hline \end{tabular} \end{table} Table 9: Classification of symmetry-enriched \(U(1)_{2N}\) topological quantum spin liquids in lattice systems with \(p4\times SO(3)\) symmetry, if no symmetry permutes anyons.
\((n_{1},n_{2},n_{3},n_{4})\) as before, we get \[\mathsf{l}_{1}=(-1)^{(n_{1}+n_{2}N)n_{4}}\,,\quad\mathsf{l}_{2}=(-1)^{n_{2}n_{4}N}\,,\quad\mathsf{l}_{3}=(-1)^{(n_{2}+n_{3}+1)n_{4}N}\,. \tag{45}\] Therefore, by matching these anomaly indicators with Table 2, we arrive at the classification in Table 13. 4. Nontrivial \(T_{1,2}\) and \(C_{4}\) action. In this case, the possible symmetry fractionalization classes are given by \[H_{(4)}^{2}(p4\times SO(3),\mathbb{Z}_{2N})=\mathbb{Z}_{(2N,4)}\oplus(\mathbb{Z}_{2})^{3}\,. \tag{46}\] We can label these elements as \[w=n_{1}\cdot\widetilde{\mathscr{B}}_{xy}^{(4)}+n_{2}\cdot NB_{c^{2}}+n_{3}\cdot NA_{x+y}^{2}+n_{4}\cdot Nw_{2}\,, \tag{47}\] with \(n_{1}\in\{0,\dots,(2N,4)-1\}\), \(n_{2,3,4}\in\{0,1\}\). Here \(\widetilde{\mathscr{B}}_{xy}^{(4)}\), \(NB_{c^{2}}\), \(NA_{x+y}^{2}\) and \(Nw_{2}\) are generators of the \(\mathbb{Z}_{(2N,4)}\) and the three \(\mathbb{Z}_{2}\) pieces, respectively (the representative cochains and the reason for the names of these generators are given in Appendix C). We can identify them in terms of the following four topological invariants defined in Eqs. (14), (15) and (16): \(\lambda_{3}(C_{4})\), \(\lambda_{2}(T_{1}C_{4})\), \(\lambda_{3}(T_{1}C_{2})\) and \(\lambda_{1}(U_{\pi},U_{\pi}^{\prime})\), and we list the values of the topological invariants for the generators of symmetry fractionalization classes in Table 14. For each symmetry fractionalization class, the \(U\)- and \(\eta\)-symbols can be obtained via Eqs. (4), (5) and (13). Again, because symmetry fractionalization classes related by relabeling anyons are physically identical, different symmetry realizations on \(U(1)_{2N}\) in this case are specified by \(\{n_{1},n_{2},n_{3},n_{4}\}\), where \(\{n_{1},n_{2},n_{3},n_{4}\}\) is identified with \(\{[-n_{1}]_{(2N,4)},n_{2},n_{3},n_{4}\}\). Calculating the anomaly indicators for \(U(1)_{2N}\) with symmetry fractionalization class labeled by \((n_{1},n_{2},n_{3},n_{4})\) as before, we get \[\mathsf{l}_{1}=(-1)^{n_{2}n_{4}N}\,,\quad\mathsf{l}_{2}=(-1)^{(n_{1}+n_{2}N)n_{4}}\,,\quad\mathsf{l}_{3}=(-1)^{(n_{2}+n_{3}+1)n_{4}N}\,. \tag{48}\] Therefore, by matching these anomaly indicators with Table 2, we arrive at the classification in Table 15. Summarizing all cases, the total number of different \(p4\times SO(3)\) symmetry-enriched \(U(1)_{2N}\) topological quantum spin liquids is summarized in Table 7. Note that this classification is complete for \(N\leqslant 5\), but incomplete for \(N\geqslant 6\), because we have assumed that the only way the symmetry can permute anyons is via charge conjugation, while for \(N\geqslant 6\) a symmetry can in principle permute anyons in other manners.
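The anomaly-matching step in this last case can also be made concrete. The sketch below (illustrative Python, with our own function names) enumerates the labels \(\{n_{1},n_{2},n_{3},n_{4}\}\), evaluates Eq. (48), and buckets each label by the lattice homotopy class whose Table 2 signature it matches; its output reproduces the rows of Table 15, up to the identification \(n_{1}\sim[-n_{1}]_{(2N,4)}\), which the code does not perform.

```python
from itertools import product

def classify(N):
    """Bucket case-4 symmetry fractionalization labels by the lattice
    homotopy class their Eq. (48) indicators match (Table 2)."""
    sgn = lambda e: -1 if e % 2 else 1
    g = 4 if N % 2 == 0 else 2            # (2N, 4)
    # Table 2 signatures: l_k = -1 iff the point a/b/c is in the class
    sig = {c: tuple(-1 if p in c else 1 for p in "abc")
           for c in ("", "a", "b", "c", "ab", "ac", "bc", "abc")}
    buckets = {c: [] for c in sig}
    for n1, n2, n3, n4 in product(range(g), range(2), range(2), range(2)):
        I = (sgn(n2 * n4 * N),            # Eq. (48)
             sgn((n1 + n2 * N) * n4),
             sgn((n2 + n3 + 1) * n4 * N))
        for c, s in sig.items():
            if I == s:
                buckets[c].append((n1, n2, n3, n4))
    return buckets

for c, labels in classify(3).items():     # an odd N
    print(c or "0", labels)
# e.g. class "a" -> [(1, 1, 0, 1)] and class "c" -> [(0, 0, 0, 1)],
# matching the N-odd column of Table 15; for even N only classes
# 0 and b are populated, also as in Table 15.
```

Replacing the indicator tuple by Eq. (39) or Eq. (45) yields the analogous check for Tables 9 and 13.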
## VII Ising\({}^{(\nu)}\) topological orders: Kitaev's non-Abelian chiral spin liquids Our next class of examples are non-Abelian chiral spin liquid states, which we dub the "Ising\({}^{(\nu)}\) states", with \(\nu\) an odd integer. Their topological properties are discussed in detail by Kitaev [22] and will be reviewed below. The exactly solvable model in Ref. [22] has triggered enormous interest in realizing the Ising\({}^{(1)}\) state in real materials [49; 50; 51]. We remark that usually these Kitaev quantum spin liquids are discussed in the context of spin-orbit coupled systems, but here we consider them in systems without spin-orbit coupling for simplicity. In particular, we will classify Ising\({}^{(\nu)}\) states in lattice systems with \(p6\times SO(3)\) or \(p4\times SO(3)\) symmetry. \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline SF class & \(\lambda_{3}(C_{4})\) & \(\lambda_{2}(T_{1}C_{4})\) & \(\lambda_{3}(T_{1}C_{2})\) & \(\lambda_{1}(U_{\pi},U_{\pi}^{\prime})\) \\ \hline \(\widetilde{\mathscr{B}}_{xy}^{(4)}\) & \(e^{\frac{2\pi i}{(2N,4)}}\) & \(1\) & \(1\) & \(1\) \\ \(NB_{c^{2}}\) & \((-1)^{N}\) & \(-1\) & \(1\) & \(1\) \\ \(NA_{x+y}^{2}\) & \(1\) & \(1\) & \(-1\) & \(1\) \\ \(Nw_{2}\) & \(1\) & \(1\) & \(1\) & \(-1\) \\ \hline \hline \end{tabular} \end{table} Table 14: Values of the topological invariants, given \(p4\times SO(3)\) symmetry with nontrivial \(T_{1,2}\) and \(C_{4}\) actions. \begin{table} \begin{tabular}{|c|c|} \hline Lattice homotopy class & Symmetry fractionalization class \\ \hline 0 & \(\{n_{1},n_{2},n_{3},0\}\), \(\{0,n_{2},n_{3},1\}\) or \(\{2,n_{2},n_{3},1\}\) for \(N\) even, \(\{n_{1},n_{2},n_{3},0\}\) or \(\{0,0,1,1\}\) for \(N\) odd \\ \hline a & \(\{1,n_{2},n_{3},1\}\) for \(N\) even, \(\{1,0,1,1\}\) for \(N\) odd \\ \hline b & \(\{1,1,0,1\}\) for \(N\) odd \\ \hline c & \(\{0,0,0,1\}\) for \(N\) odd \\ \hline a+b & \(\{0,1,0,1\}\) for \(N\) odd \\ \hline a+c & \(\{1,0,0,1\}\) for \(N\) odd \\ \hline b+c & \(\{1,1,1,1\}\) for \(N\) odd \\ \hline a+b+c & \(\{0,1,1,1\}\) for \(N\) odd \\ \hline \end{tabular} \end{table} Table 13: Classification of symmetry-enriched \(U(1)_{2N}\) topological quantum spin liquids in lattice systems with \(p4\times SO(3)\) symmetry, if translations act as charge conjugation. The Ising\({}^{(\nu)}\) state has three anyons \(\{I,\sigma,\psi\}\), where the trivial anyon here is denoted \(I\), and the nontrivial fusion rules are given by \[\psi\times\psi=I,\quad\sigma\times\psi=\sigma,\quad\sigma\times\sigma=I+\psi. \tag{49}\] The nontrivial \(F\)-symbols are \[\begin{split} F_{\sigma}^{\psi\sigma\psi}&=F_{\psi}^{\sigma\psi\sigma}=-1\,,\\ [F_{\sigma}^{\sigma\sigma\sigma}]_{ab}&=\frac{\varkappa_{\sigma}}{\sqrt{2}}\begin{bmatrix}1&1\\ 1&-1\end{bmatrix}_{ab}.\end{split} \tag{50}\] Here, the column and row labels of the matrix take values \(I\) and \(\psi\) (in this order). All other \(F\)-symbols are \(1\) if they are compatible with the fusion rules and \(0\) if they are not. \(\varkappa_{\sigma}=(-1)^{\frac{\nu^{2}-1}{8}}\) is the Frobenius-Schur indicator of \(\sigma\).
The nontrivial \(R\)-symbols are \[R^{\psi\psi}=-1,\quad R_{\sigma}^{\psi\sigma}=R_{\sigma}^{\sigma\psi}=(-i)^{\nu},\quad R_{I}^{\sigma\sigma}=\varkappa_{\sigma}e^{-i\frac{\pi}{8}\nu},\quad R_{\psi}^{\sigma\sigma}=\varkappa_{\sigma}e^{i\frac{\pi}{8}\nu}. \tag{51}\] The topological spins are \(\theta_{\psi}=-1\), \(\theta_{\sigma}=e^{i\frac{\pi}{8}\nu}\), and the chiral central charge \(c_{-}=\frac{\nu}{2}\). The topological symmetry of Ising\({}^{(\nu)}\) is trivial and no symmetry of Ising\({}^{(\nu)}\) can permute anyons. The \(U\)-symbols and a set of \(\eta\)-symbols can all be chosen to be \(1\). The symmetry fractionalization classes of \(p6\times SO(3)\) are classified by \[H^{2}(p6\times SO(3),\mathbb{Z}_{2})\cong(\mathbb{Z}_{2})^{3}\,. \tag{52}\] We can label these elements as \[w=n_{1}\cdot B_{xy}+n_{2}\cdot A_{c}^{2}+n_{3}\cdot w_{2}\,, \tag{53}\] with \(n_{1,2,3}\in\{0,1\}\). The symmetry fractionalization classes of \(p4\times SO(3)\) are given by \[H^{2}(p4\times SO(3),\mathbb{Z}_{2})\cong(\mathbb{Z}_{2})^{4}\,. \tag{54}\] We can label these elements as \[w=n_{1}\cdot B_{xy}+n_{2}\cdot B_{c^{2}}+n_{3}\cdot A_{x+y}^{2}+n_{4}\cdot w_{2}\,, \tag{55}\] with \(n_{1,2,3,4}\in\{0,1\}\). The representative cochains of these elements are presented in Appendix C. Physically, these generators can be viewed as detecting whether the non-Abelian anyon \(\sigma\) carries a projective quantum number under these global symmetries. For each symmetry fractionalization class, the \(U\)- and \(\eta\)-symbols can be obtained via Eqs. (4) and (5). The above discussion implies that, without considering anomaly matching, there are in total \(2^{3}=8\) different \(p6\times SO(3)\) symmetric Ising\({}^{(\nu)}\) states, and \(2^{4}=16\) different \(p4\times SO(3)\) symmetric Ising\({}^{(\nu)}\) states. Calculating the anomaly indicators for the Ising\({}^{(\nu)}\) state in a way similar to the calculation for the \(U(1)_{2N}\) state, we find that for any symmetry fractionalization class of either \(p6\times SO(3)\) or \(p4\times SO(3)\) symmetry, all anomaly indicators always evaluate to \(1\) and hence the anomaly is always absent. Therefore, all \(p6\times SO(3)\) or \(p4\times SO(3)\) symmetry-enriched Ising\({}^{(\nu)}\) topological quantum spin liquids can emerge in lattice systems within lattice homotopy class \(0\) (including, for example, honeycomb lattice spin-\(1/2\) systems or spin-\(1\) systems on any lattice), but not in any other lattice homotopy class (including, for example, spin-\(1/2\) systems on triangular, kagome, square and checkerboard lattices). We notice that in most previous discussions of the Ising\({}^{(1)}\) state in spin-orbit coupled systems, the underlying lattice systems indeed have a trivial anomaly, since they can be obtained from the lattice homotopy class \(0\) here by breaking certain symmetries.
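As a sanity check on the topological data of the chiral states discussed so far, one can verify the standard Gauss-Milgram constraint \(\frac{1}{D}\sum_{a}d_{a}^{2}\theta_{a}=e^{2\pi ic_{-}/8}\) relating the topological spins to the chiral central charge. The sketch below (illustrative Python) does so for Ising\({}^{(\nu)}\) with \(c_{-}=\nu/2\) as stated above, and also for the \(\mathrm{U}(1)_{2N}\) states of Sec. VI, for which we assume the standard value \(c_{-}=1\) (not stated earlier in the text); \(d_{\sigma}=\sqrt{2}\) follows from the fusion rule \(\sigma\times\sigma=I+\psi\).

```python
import numpy as np

def gauss_milgram_ok(ds, thetas, c_minus):
    """Check (1/D) * sum_a d_a^2 * theta_a == exp(2*pi*i*c_-/8)."""
    D = np.sqrt(sum(d * d for d in ds))
    lhs = sum(d * d * t for d, t in zip(ds, thetas)) / D
    return np.isclose(lhs, np.exp(2j * np.pi * c_minus / 8))

# Ising^(nu): anyons {I, psi, sigma}; theta_psi = -1,
# theta_sigma = e^{i pi nu/8}, c_- = nu/2
for nu in (1, 3, 5, 7, 9, 11, -1):
    assert gauss_milgram_ok(
        [1.0, 1.0, np.sqrt(2.0)],
        [1.0, -1.0, np.exp(1j * np.pi * nu / 8)],
        nu / 2)

# U(1)_{2N}: all d_a = 1, theta_a = e^{i pi a^2 / 2N}  (Eq. (12))
for N in (1, 2, 3, 4, 5):
    M = 2 * N
    assert gauss_milgram_ok(
        [1.0] * M,
        [np.exp(1j * np.pi * a * a / M) for a in range(M)],
        1.0)
print("Gauss-Milgram constraint holds for all cases tested")
```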
## VIII \(\mathbb{Z}_{N}\) topological orders: Generalized toric codes In this section, we consider the \(\mathbb{Z}_{N}\) topological order, which is the \(\mathbb{Z}_{N}\) generalization of the famous \(\mathbb{Z}_{2}\) topological order [52; 53; 54; 55; 2; 56]. The case with \(N=2\) has been studied extensively in many different types of lattice systems. However, as mentioned in the Introduction, when \(N>2\) these states do not allow a description in terms of a simple parton mean field (instead, the partons have to be strongly interacting), and they are much less explored (see examples in Refs. [57; 58; 59; 60; 61]). \begin{table} \begin{tabular}{|c|c|} \hline Lattice homotopy class & Symmetry fractionalization class \\ \hline 0 & \(\{n_{1},n_{2},n_{3},0\}\), \(\{0,n_{2},n_{3},1\}\) or \(\{2,n_{2},n_{3},1\}\) for \(N\) even, \(\{n_{1},n_{2},n_{3},0\}\) or \(\{0,0,1,1\}\) for \(N\) odd \\ \hline a & \(\{1,1,0,1\}\) for \(N\) odd \\ \hline b & \(\{1,n_{2},n_{3},1\}\) for \(N\) even, \(\{1,0,1,1\}\) for \(N\) odd \\ \hline c & \(\{0,0,0,1\}\) for \(N\) odd \\ \hline a+b & \(\{0,1,0,1\}\) for \(N\) odd \\ \hline a+c & \(\{1,1,1,1\}\) for \(N\) odd \\ \hline b+c & \(\{1,0,0,1\}\) for \(N\) odd \\ \hline a+b+c & \(\{0,1,1,1\}\) for \(N\) odd \\ \hline \end{tabular} \end{table} Table 15: Classification of symmetry-enriched \(U(1)_{2N}\) topological quantum spin liquids in lattice systems with \(p4\times SO(3)\) symmetry, if translations and \(C_{4}\) both act as charge conjugation. Our framework in Sec. V allows us to classify a general \(\mathbb{Z}_{N}\) topological quantum spin liquid enriched by a general symmetry. For concreteness, the symmetry we will consider below is one of these four: \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p6m\times\mathbb{Z}_{2}^{T}\) and \(p4m\times\mathbb{Z}_{2}^{T}\), where \(p6m\) and \(p4m\) are lattice symmetries, while \(SO(3)\) and \(\mathbb{Z}_{2}^{T}\) are on-site spin rotational symmetry and time reversal symmetry, respectively. In the \(\mathbb{Z}_{N}\) topological order, there are \(N^{2}\) anyons in total, which can be labeled by two integers as \(a=(a_{e},a_{m})\), with \(a_{e},a_{m}\in\{0,\ldots,N-1\}\). Following the convention in the \(\mathbb{Z}_{2}\) toric code, we will call the anyon labeled by \((1,0)\) as \(e\), and the anyon labeled by \((0,1)\) as \(m\). The fusion rules are element-wise addition modulo \(N\), i.e., \[(a_{e},a_{m})\times(b_{e},b_{m})=\left([a_{e}+b_{e}]_{N},[a_{m}+b_{m}]_{N}\right). \tag{56}\] In a choice of gauge, the \(F\)-symbols of this topological order are all \(1\) and the \(R\)-symbols are given by \[R^{ab}=e^{i\frac{2\pi}{N}a_{m}b_{e}}\,. \tag{57}\] The topological symmetry group is complicated to determine for general \(N\) [48; 62]. For \(N=2\), the topological symmetry is \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{T}\), generated by the unitary electric-magnetic duality symmetry \(S\) that exchanges \(e\) and \(m\), i.e., \((a_{e},a_{m})\to(a_{m},a_{e})\), and an anti-unitary symmetry \(T\) which exchanges \(e\) and \(m\) the same way as \(S\). For this \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{T}\) symmetry, we can choose the \(U\)-symbols as \[U_{\mathbf{g}}(a,b;c)=\begin{cases}(-1)^{a_{m}b_{e}},&\mathbf{g}\text{ permutes anyons},\\ 1,&\text{otherwise},\end{cases} \tag{58}\] and a set of \(\eta\)-symbols as \[\eta_{a}(\mathbf{g},\mathbf{h})=\begin{cases}(-1)^{a_{e}a_{m}},&\mathbf{g},\,\mathbf{h}\text{ permute anyons},\\ 1,&\text{otherwise},\end{cases} \tag{59}\] with \((a_{e},a_{m}),(b_{e},b_{m})\) the anyon labels of anyons \(a,b\). For \(N\geqslant 3\), there is always a \(\mathbb{Z}_{4}^{T}\rtimes\mathbb{Z}_{2}\) topological symmetry.
The anti-unitary \(\mathbb{Z}_{4}^{T}\) is generated by the action \(T:(a_{e},a_{m})\to(a_{m},[-a_{e}]_{N})\), and the unitary \(\mathbb{Z}_{2}\) is generated by the action \(S:(a_{e},a_{m})\to(a_{m},a_{e})\). The two generators satisfy the relations \[S^{2}=\mathbf{1}\,,\quad T^{4}=\mathbf{1}\,,\quad STS=T^{-1}\,. \tag{60}\] For this \(\mathbb{Z}_{4}^{T}\rtimes\mathbb{Z}_{2}\) symmetry, writing a group element as \(\mathbf{g}=T^{g_{1}}S^{g_{2}}\), with \(g_{1}\in\{0,1,2,3\}\) and \(g_{2}\in\{0,1\}\), we can choose the \(U\)-symbols as \[U_{\mathbf{g}}(a,b;c)=\begin{cases}e^{\frac{2\pi i}{N}a_{m}b_{e}},&g_{1}+g_{2}\text{ is odd},\\ 1,&\text{otherwise},\end{cases} \tag{61}\] and a set of \(\eta\)-symbols as \[\eta_{a}(\mathbf{g},\mathbf{h})=\begin{cases}e^{\frac{2\pi i}{N}a_{e}a_{m}},&g_{1}+g_{2},\,h_{1}+h_{2}\text{ are odd},\\ 1,&\text{otherwise}.\end{cases} \tag{62}\] For certain \(N\geqslant 3\) there can be other topological symmetries, in addition to the above \(\mathbb{Z}_{4}^{T}\rtimes\mathbb{Z}_{2}\) symmetry. For example, when \(N=5\), the action \((a_{e},a_{m})\to([3a_{e}]_{5},[3a_{m}]_{5})\) is an anti-unitary topological symmetry. For simplicity, below we will focus on the cases where \(N=2,3,4\). The analysis of the classification is similar to the previous cases. In the present case, we need to understand the anomaly indicators of the \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p6m\times\mathbb{Z}_{2}^{T}\) and \(p4m\times\mathbb{Z}_{2}^{T}\) symmetries. These anomaly indicators and their values for different lattice homotopy classes can be found in Appendix D. Carrying out the procedure listed in Sec. V, we can obtain the classification. In Table 16, we list the number of different symmetry-enriched \(\mathbb{Z}_{N}\) topological quantum spin liquids in different lattice homotopy classes under these symmetries. The precise symmetry fractionalization classes in each case can be found in Appendix C. We also upload codes using which one can 1) see all symmetry fractionalization classes of the symmetry-enriched states within each lattice homotopy class, and 2) check which lattice homotopy class a given symmetry-enriched state belongs to [63]. Below we comment on some of these results. For the case with \(N=2\) and the \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\) symmetry, the classification was carried out for spin-1/2 systems on the triangular, kagome and honeycomb lattices [13; 14; 64], which belong to the lattice homotopy classes \(a\), \(c\) and \(0\), respectively. For lattice homotopy classes \(a\) and \(c\), our results agree with those in Refs. [13; 14]. For the lattice homotopy class \(0\), using the parton-mean-field approach and assuming that one of \(e\) and \(m\) carries spin-1/2 under the \(SO(3)\) symmetry, Ref. [64] found 128 different states. We find 336 states in total, where 128 of them have one of \(e\) and \(m\) carrying half-odd-integer spin, and in the other 208 states both \(e\) and \(m\) carry integer spin, 9 of which also have symmetries permuting \(e\) and \(m\). For the case with \(N=2\) and the \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\) symmetry, Ref. [65] found 64 states on the square lattice spin-1/2 system, which belongs to our lattice homotopy class \(a\), agreeing with our results.
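The topological symmetry actions defined in this section can be checked directly on the anyon labels. The sketch below (illustrative Python) verifies the relations in Eq. (60), confirms that \(S\) and the anti-unitary \(T\) respectively preserve and conjugate the topological spins \(\theta_{a}=e^{2\pi ia_{e}a_{m}/N}\) (read off from Eq. (57)), and checks a braiding fact used later in this section: for odd \(N\), no anyon braids with \(e\) with a phase of \(-1\).

```python
import numpy as np
from itertools import product

def check_ZN(N):
    anyons = list(product(range(N), repeat=2))     # labels (a_e, a_m)
    T = lambda a: (a[1], (-a[0]) % N)              # anti-unitary generator
    S = lambda a: (a[1], a[0])                     # unitary e-m duality
    theta = lambda a: np.exp(2j * np.pi * a[0] * a[1] / N)

    for a in anyons:
        # Eq. (60): S^2 = 1, T^4 = 1, S T S = T^{-1}, i.e., S T S T = 1
        assert S(S(a)) == a
        assert T(T(T(T(a)))) == a
        assert S(T(S(T(a)))) == a
        # S is unitary and preserves theta; T is anti-unitary and
        # conjugates it
        assert np.isclose(theta(S(a)), theta(a))
        assert np.isclose(theta(T(a)), np.conj(theta(a)))

    # Mutual braiding of a = (a_e, a_m) with e = (1, 0) is
    # exp(2*pi*i*a_m/N); it equals -1 iff 2*a_m = N (mod 2N),
    # which is possible only for even N.
    return any((2 * am) % (2 * N) == N for _, am in anyons)

for N in (2, 3, 4, 5, 6, 7):
    print(N, "has an anyon braiding with e by -1:", check_ZN(N))
```

The last line of output is the elementary fact behind the argument below that odd-\(N\) \(\mathbb{Z}_{N}\) topological quantum spin liquids can only arise in lattice homotopy class 0.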
For the case with \(N=2\) and the \(p4m\times\mathbb{Z}_{2}^{T}\) symmetry, using the parton-mean-field approach, Ref. [66] found 64 states on the square lattice system with Kramers doublet spins, which can all be obtained from the \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\) symmetric \(\mathbb{Z}_{2}\) topological quantum spin liquids by breaking the \(SO(3)\) symmetry. Suppose in the \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\) symmetric version of these states, the anyon \(e\) carries half-odd-integer spin under \(SO(3)\); then the projective quantum numbers of \(m\) are fixed for all these 64 states [13]. In particular, \(m\) experiences no nontrivial symmetry fractionalization pattern that simultaneously involves the time reversal and lattice symmetries. The absence of such a symmetry fractionalization pattern still holds in the 64 "within parton" \(p4m\times\mathbb{Z}_{2}^{T}\) symmetric states obtained by breaking \(SO(3)\). However, in addition to these 64 states, we have found \(117-64=53\) other states, with their symmetry fractionalization classes presented in Appendix E (anyons are not permuted by symmetries in all these 117 states). A common property of these 53 states is the presence of nontrivial symmetry fractionalization involving both the lattice symmetry and time reversal symmetry for the anyon \(m\), e.g., translation and time reversal may not commute for \(m\). Furthermore, for all 117 states, the \(C_{2}\equiv C_{4}^{2}\) symmetry fractionalizes on the \(m\) anyon, i.e., effectively \(C_{2}^{2}=-1\) for \(m\). Usually, the interpretation of this phenomenon is that there is a background \(e\) anyon at each square lattice site (the \(C_{4}\) center), and the mutual braiding statistics between \(e\) and \(m\) yields \(C_{2}^{2}=-1\). However, for 16 of the 53 "beyond-parton" states, \((T_{1}C_{2})^{2}=(T_{2}C_{2})^{2}=-1\) for \(m\), which seems to suggest that there are also background \(e\) anyons at the 2-fold rotation centers of \(T_{1}C_{2}\) and \(T_{2}C_{2}\), although microscopically there is no spin at those positions. So the analysis based on anomaly matching suggests that the simple picture where the fractionalization of rotational symmetries purely comes from background anyons is actually incomplete. The above example shows that even for simple states like the \(\mathbb{Z}_{2}\) topological order, the parton-mean-field approach may miss some of their symmetry enrichment patterns, and our framework in Sec. V is more general. Note that here by "parton mean field", we are referring to the usual parton mean fields where the partons are non-interacting. If the partons are allowed to interact strongly, say, if they form nontrivial interacting symmetry-protected topological states under the projective symmetry group of the partons, symmetry-enriched states not captured by Ref. [66] may arise, but it is technically complicated to study them. Also, by using parton constructions other than the one in Ref. [66], one may obtain states beyond those in Ref. [66], but it is challenging to make this approach systematic. We also notice that the number of \(\mathbb{Z}_{3}\) topological quantum spin liquids is nonzero only in the lattice homotopy class 0. This phenomenon is actually true for general odd \(N\). To see it, first notice that all lattice homotopy classes except 0 have some mixed anomalies between the \(SO(3)\) symmetry and the lattice symmetry [6]. In order to match this anomaly, it is impossible for both \(e\) and \(m\) to carry integer spin. Suppose that \(e\) carries half-odd-integer spin, and consider threading an \(SO(3)\) monopole through the system.
The monopole will be viewed as a \(\pi\) flux from the perspective of \(e\). Then the local nature of the monopole implies that it must trap an anyon that has \(\pi\) braiding statistics with \(e\). For odd \(N\), no such anyon exists, which leads to a contradiction. So \(\mathbb{Z}_{N}\) topological quantum spin liquids with \(N\) odd cannot possibly arise in lattice homotopy classes other than 0. Note that the above argument does not rely on the time reversal symmetry, and it is valid no matter how the symmetries permute anyons. For \(\mathbb{Z}_{N}\) topological quantum spin liquids in systems belonging to a lattice homotopy class other than 0, which requires \(N\) to be even, anyons \((1,0)\) and \((0,N/2)\) cannot simultaneously carry half-odd-integer spin, otherwise there would be a mixed anomaly between the \(SO(3)\) and time reversal symmetries [67]. ## IX \(\mathrm{U}(1)_{2N}\times\mathrm{U}(1)_{-2N}\) topological orders: Generalizations of the double-semion state In this section, we consider the \(\mathrm{U}(1)_{2N}\times\mathrm{U}(1)_{-2N}\) topological order, which is the generalization of the double-semion state, i.e., the case with \(N=1\). Effectively, this state can be obtained by stacking a \(U(1)_{2N}\) state, which is discussed in Sec. VI, on its time reversal partner, the \(U(1)_{-2N}\) state. We would like to classify the \(\mathrm{U}(1)_{2N}\times\mathrm{U}(1)_{-2N}\) topological order enriched by one of these four symmetries: \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p6m\times\mathbb{Z}_{2}^{T}\) and \(p4m\times\mathbb{Z}_{2}^{T}\). In a \(U(1)_{2N}\times U(1)_{-2N}\) topological quantum spin liquid, there are \(4N^{2}\) anyons in total, which can be labeled by two integers as \(a=(a_{s},a_{\bar{s}})\), with \(a_{s},a_{\bar{s}}\in\{0,\ldots,2N-1\}\). Following the convention in the double-semion state, we will call the anyon labeled by \((1,0)\) as \(s\), and the anyon labeled by \((0,1)\) as \(\bar{s}\) (note in this convention \(s\) and \(\bar{s}\) are not anti-particles of each other). The fusion rules are element-wise addition modulo \(2N\), i.e., \[(a_{s},a_{\bar{s}})\times(b_{s},b_{\bar{s}})=\left([a_{s}+b_{s}]_{2N},[a_{\bar{s}}+b_{\bar{s}}]_{2N}\right). \tag{63}\] In a choice of gauge, the \(F\)-symbols of the theory are \[F^{abc}=\exp\left(\frac{i\pi}{2N}\left(a_{s}(b_{s}+c_{s}-[b_{s}+c_{s}]_{2N})-a_{\bar{s}}(b_{\bar{s}}+c_{\bar{s}}-[b_{\bar{s}}+c_{\bar{s}}]_{2N})\right)\right) \tag{64}\] and the \(R\)-symbols are \[R^{ab}=\exp\left(\frac{i\pi}{2N}(a_{s}b_{s}-a_{\bar{s}}b_{\bar{s}})\right)\,. \tag{65}\] The topological symmetry group is complicated to determine for general \(N\), just like for the \(U(1)_{2N}\) state [48; 62]. We list the topological symmetry groups for \(N=1,2\) here. For \(N=1\), the topological symmetry is \(\mathbb{Z}_{2}^{T}\), generated by \(\tilde{S}\) exchanging \(s\) and \(\bar{s}\), i.e., \[(a_{s},a_{\bar{s}})\to(a_{\bar{s}},a_{s})\,. \tag{66}\] We can choose the \(U\)-symbols and a set of \(\eta\)-symbols all equal to \(1\). For \(N=2\), the topological symmetry is \(\mathbb{Z}_{4}^{T}\rtimes\mathbb{Z}_{2}^{T}\), generated by an order 2 anti-unitary symmetry \(\tilde{S}\) which exchanges \(s\) and \(\bar{s}\), \(\tilde{S}\colon(a_{s},a_{\bar{s}})\to(a_{\bar{s}},a_{s})\), and another order 4 anti-unitary symmetry \(T\), which permutes anyons in the following way: \(T\colon(a_{s},a_{\bar{s}})\to(a_{\bar{s}},[-a_{s}]_{2N})\).
The two generators satisfy the relation

\[\tilde{S}^{2}=\mathbf{1}\,,\quad T^{4}=\mathbf{1}\,,\quad\tilde{S}T\tilde{S}=T^{-1}\,. \tag{67}\]

An element in \(\mathbb{Z}_{4}^{T}\rtimes\mathbb{Z}_{2}^{T}\) can be written as \(T^{g_{1}}\tilde{S}^{g_{2}}\), with \(g_{1}\in\{0,\dots,3\}\) and \(g_{2}\in\{0,1\}\). To define the \(U\)-symbols, we first define the following function:

\[\tilde{U}(a_{s},b_{s})=\begin{cases}(-1)^{a_{s}}&b_{s}\neq 0\\ 1&b_{s}=0\end{cases} \tag{68}\]

Given an element \(\mathbf{g}\in\mathbb{Z}_{4}^{T}\rtimes\mathbb{Z}_{2}^{T}\), the \(U\)-symbols can be chosen such that

\[U_{\mathbf{g}}(a,b;c)=\begin{cases}1&g_{1}=0\\ \tilde{U}(a_{\bar{s}},b_{\bar{s}})&g_{1}=1\\ \tilde{U}(a_{s},b_{s})\tilde{U}(a_{\bar{s}},b_{\bar{s}})&g_{1}=2\\ \tilde{U}(a_{s},b_{s})&g_{1}=3\end{cases} \tag{69}\]

and a set of \(\eta\)-symbols can be chosen to be all equal to \(1\).

Carrying out the procedure in Sec. V in a manner similar to the previous examples, we can obtain the classification of \(U(1)_{2}\times U(1)_{-2}\) and \(U(1)_{4}\times U(1)_{-4}\) topological quantum spin liquids enriched by \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p6m\times\mathbb{Z}_{2}^{T}\) or \(p4m\times\mathbb{Z}_{2}^{T}\) symmetry. The results are summarized in Table 15.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Symmetry group & Lattice homotopy class & \(\mathbb{Z}_{2}\) & \(\mathbb{Z}_{3}\) & \(\mathbb{Z}_{4}\) & \(\mathrm{U}(1)_{2}\times\mathrm{U}(1)_{-2}\) & \(\mathrm{U}(1)_{4}\times\mathrm{U}(1)_{-4}\) \\ \hline \multirow{4}{*}{\(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\)} & 0 & 336 & 8 & 16453 & 32 & 144 \\ \cline{2-7} & a & 8 & 0 & 70 & 0 & 0 \\ \cline{2-7} & c & 8 & 0 & 70 & 0 & 0 \\ \cline{2-7} & a+c & 4 & 0 & 82 & 0 & 0 \\ \hline \multirow{4}{*}{\(p6m\times\mathbb{Z}_{2}^{T}\)} & 0 & 208 & 8 & 4725 & 16 & 72 \\ \cline{2-7} & a & 13 & 0 & 61 & 0 & 0 \\ \cline{2-7} & c & 13 & 0 & 61 & 0 & 0 \\ \cline{2-7} & a+c & 12 & 0 & 167 & 0 & 0 \\ \hline \multirow{8}{*}{\(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\)} & 0 & 3653 & 9 & 886740 & 128 & 1344 \\ \cline{2-7} & a & 64 & 0 & 5008 & 0 & 0 \\ \cline{2-7} & b & 64 & 0 & 5008 & 0 & 0 \\ \cline{2-7} & c & 64 & 0 & 8872 & 0 & 0 \\ \cline{2-7} & a+b & 16 & 0 & 636 & 0 & 0 \\ \cline{2-7} & a+c & 16 & 0 & 656 & 0 & 0 \\ \cline{2-7} & b+c & 16 & 0 & 656 & 0 & 0 \\ \cline{2-7} & a+b+c & 8 & 0 & 318 & 0 & 0 \\ \hline \multirow{8}{*}{\(p4m\times\mathbb{Z}_{2}^{T}\)} & 0 & 2629 & 9 & 280852 & 64 & 672 \\ \cline{2-7} & a & 117 & 0 & 3491 & 0 & 0 \\ \cline{2-7} & b & 117 & 0 & 3491 & 0 & 0 \\ \cline{2-7} & c & 193 & 0 & 12449 & 0 & 0 \\ \cline{2-7} & a+b & 33 & 0 & 513 & 0 & 0 \\ \cline{2-7} & a+c & 34 & 0 & 610 & 0 & 0 \\ \cline{2-7} & b+c & 34 & 0 & 610 & 0 & 0 \\ \cline{2-7} & a+b+c & 21 & 0 & 309 & 0 & 0 \\ \hline \end{tabular} \end{table} Table 15: Number of various topological quantum spin liquids enriched by \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p6m\times\mathbb{Z}_{2}^{T}\), \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\) or \(p4m\times\mathbb{Z}_{2}^{T}\) symmetry, where the third, fourth and fifth columns represent \(\mathbb{Z}_{2}\), \(\mathbb{Z}_{3}\) and \(\mathbb{Z}_{4}\) topological orders, while the last two columns represent the \(U(1)_{2}\times U(1)_{-2}\) and \(U(1)_{4}\times U(1)_{-4}\) topological orders, respectively. The details of the symmetry fractionalization classes of each state can be found in Appendix C.
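To make the algebraic data above concrete, here is a minimal Python sketch (all function and variable names are ours, not taken from the uploaded codes) that encodes the fusion rules, \(F\)-symbols and \(R\)-symbols of Eqs. (63)-(65), and verifies the relations (67) for the \(N=2\) topological symmetries at the level of anyon-label permutations.

```python
import numpy as np
from itertools import product

def fuse(a, b, N):
    """Fusion rules, Eq. (63): element-wise addition modulo 2N."""
    K = 2 * N
    return ((a[0] + b[0]) % K, (a[1] + b[1]) % K)

def F(a, b, c, N):
    """F-symbols in the gauge of Eq. (64); the 'carry' terms are 0 or 2N."""
    K = 2 * N
    carry_s = b[0] + c[0] - (b[0] + c[0]) % K
    carry_sb = b[1] + c[1] - (b[1] + c[1]) % K
    return np.exp(1j * np.pi / K * (a[0] * carry_s - a[1] * carry_sb))

def R(a, b, N):
    """R-symbols, Eq. (65)."""
    return np.exp(1j * np.pi / (2 * N) * (a[0] * b[0] - a[1] * b[1]))

# N = 1 (double semion): s = (1,0) is a semion with spin R^{ss} = i,
# sbar = (0,1) has spin -i, and s x s = 1 with F^{sss} = -1.
s, sbar = (1, 0), (0, 1)
assert np.isclose(R(s, s, 1), 1j) and np.isclose(R(sbar, sbar, 1), -1j)
assert fuse(s, s, 1) == (0, 0) and np.isclose(F(s, s, s, 1), -1)

# N = 2: the topological symmetries of Eqs. (66)-(67) as label permutations.
K = 4
St = lambda a: (a[1], a[0])            # S-tilde: exchanges s and sbar
T = lambda a: (a[1], (-a[0]) % K)      # order-4 anti-unitary symmetry
T3 = lambda a: T(T(T(a)))              # T^{-1} = T^3
for a in product(range(K), repeat=2):
    assert St(St(a)) == a              # S-tilde squares to the identity
    assert T(T3(a)) == a               # T^4 = 1
    assert St(T(St(a))) == T3(a)       # S-tilde T S-tilde = T^{-1}
```

These checks only exercise the formulas displayed above; in particular, they confirm that for \(N=1\) the anyon \(s\) is indeed a semion, as the name double-semion state suggests.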
For \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{4}\) topological orders, we also upload codes containing the symmetry fractionalization class for each state in each lattice homotopy class [63]. For \(\mathbb{Z}_{3}\), \(U(1)_{2}\times U(1)_{-2}\), and \(U(1)_{4}\times U(1)_{-4}\) topological orders, all symmetry-enriched states are anomaly-free.

We notice that in all symmetry groups considered here, \(U(1)_{2}\times U(1)_{-2}\) and \(U(1)_{4}\times U(1)_{-4}\) can only arise in the lattice homotopy class \(0\). Ref. [68] presented a physical reason for this phenomenon. If we only consider the symmetry groups \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\) and \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\), the following simpler argument can explain it. To be concrete, suppose the symmetry group is \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\); a similar argument can be made if the symmetry group is \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\). Now suppose we break the symmetry down to \(p6\times SO(3)\). Then the system can be viewed as a \(p6\times SO(3)\) symmetric \(U(1)_{2N}\) state on top of a \(p6\times SO(3)\) symmetric \(U(1)_{-2N}\) state, and these two states must have opposite anomalies under the \(p6\times SO(3)\) symmetry, otherwise they cannot be connected by time reversal to form the original \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\) symmetric state. Namely, after breaking \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\) to \(p6\times SO(3)\), there is no remaining anomaly and the state is in lattice homotopy class \(0\). Now we ask which lattice homotopy class with a \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\) symmetry becomes the lattice homotopy class \(0\) with a \(p6\times SO(3)\) symmetry after this symmetry breaking. From the representative configurations of all lattice homotopy classes in Sec. IV, clearly only the lattice homotopy class \(0\) does.

### Discussion

In this paper, we have presented a general framework in Sec. V to classify symmetry-enriched topological quantum spin liquids in two spatial dimensions. This framework applies to all topological quantum spin liquids, which may be Abelian or non-Abelian, and chiral or non-chiral. The symmetry we consider may include both lattice symmetry and internal symmetry, may contain anti-unitary symmetry, and may permute anyons. We then apply this framework to various examples in Secs. VI, VII, VIII and IX. As argued in the Introduction, our framework combines the advantages of the previous approaches in the literature, while avoiding their disadvantages. Indeed, we are able to identify symmetry-enriched topological quantum spin liquids that are not easily captured by the usual parton-mean-field approach (see examples in Sec. VIII), and we can systematically distinguish different lattice systems with the same symmetry group using their quantum anomalies. We finish this paper by discussing some open questions.

* In this paper, we characterize a topological quantum spin liquid with a lattice symmetry by one with an internal symmetry via the crystalline equivalence principle in Sec. III.4. However, it is more ideal to have a theory that directly describes topological quantum spin liquids with lattice symmetries. Such a theory should be able to tell how an arbitrary symmetry acts on a state obtained by creating some anyons from the ground state and putting them at arbitrary positions. The symmetry action should be some analog of Eq.
(2), but it is subtle to understand what constraints the analogs of \(U_{\mathbf{g}}(a,b;c)\) and \(\eta_{a}(\mathbf{g},\mathbf{h})\) should satisfy. So far this question has been answered if the lattice symmetry only contains translation symmetry [12], but for cases with point group symmetries it has been answered only in a very specific case, where the lattice symmetry is reflection, and the state only contains two anyons that are 1) anti-particles of each other, 2) transformed into each other under the reflection symmetry, and 3) located at two reflection-related positions [69]. It is useful to have a complete theory that can answer this question in full generality. Such a theory is also helpful for the purpose of identifying observable signatures of different symmetry-enriched topological quantum spin liquids.

* Strictly speaking, our classification is a classification of different patterns of how symmetries permute anyons and the symmetry fractionalization patterns. In principle, one should further consider how the classification is modified upon stacking an invertible state on the topological quantum spin liquid with the same symmetry. This question is subtle because some nontrivial invertible states can be trivialized in the presence of a long-range entangled state [70; 67; 9]. We leave this problem for future study.

* In this paper, we focus on how symmetry permutes anyons and the symmetry fractionalization classes, which can be viewed as the bulk properties of different symmetry-enriched topological quantum spin liquids. It is also interesting to explore their boundary properties in the future. In particular, sometimes the symmetry enrichment pattern may enforce the boundary of the topological quantum spin liquid to be gapless, even if it is non-chiral [71; 72]. Similarly, it is intriguing to study the properties of defects in different symmetry-enriched topological quantum spin liquids, and examine their potential to perform quantum computation [73; 9].

* It is useful to find numerical algorithms to identify the symmetry enrichment pattern of a topological quantum spin liquid that emerges in a lattice model that is not fine-tuned, and to find experimental methods to detect the symmetry enrichment pattern in experiments. Some previous proposals for various specific cases include Refs. [74; 75; 76; 77; 78], but it is useful to find algorithms and methods applicable to the general setting.

* After classifying different symmetry-enriched topological quantum spin liquids and finding methods to detect them numerically and experimentally, it is important to construct explicit models that realize these topological orders. For many topological orders enriched by internal symmetries, Refs. [79; 80; 81; 82] construct their exactly solvable models with explicit Hamiltonians and ground-state wavefunctions. Moreover, there are many proposals for realizing symmetry-protected and symmetry-enriched topological states with lattice symmetries in the literature, including Refs. [83; 84; 85; 36; 86; 37; 38; 39]. We anticipate that we can combine the above constructions to obtain exactly solvable models with concrete Hamiltonians that realize the symmetry-enriched topological quantum spin liquids discussed in this paper.
It will also be interesting to find quantum materials and develop quantum simulators to realize these different phases, and to explore interesting continuous quantum phase transitions out of them, which are beyond the conventional Landau-Ginzburg-Wilson-Fisher paradigm.

* In this paper, our focus is topological quantum spin liquids in two spatial dimensions. It is interesting to generalize our work to other systems, such as fermionic systems, systems in higher dimensions, gapless systems and fractonic systems. In particular, there are many experimental candidates of \((3+1)\)-dimensional symmetry-enriched gapless \(U(1)\) quantum spin liquids in pyrochlores [89], and their classification has been discussed within the framework of projective symmetry groups [90; 91; 92; 93; 94; 95]. As discussed in the present paper, the classification based on projective symmetry groups may be incomplete. Using more general approaches, \(U(1)\) quantum spin liquids with only internal symmetries have been classified [15; 67; 70; 96], and some examples of their lattice symmetry enriched versions have been constructed [97]. However, a systematic classification of \((3+1)\)-dimensional \(U(1)\) quantum spin liquids enriched by both lattice and internal symmetries is lacking, and it is interesting to apply the idea in the present paper to those settings in the future.

_Note:_ Codes for checking anomaly matching and details of realizations for \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{4}\) topological order are available at [https://github.com/Weicheng-Ye/Classification-of-QSL.git](https://github.com/Weicheng-Ye/Classification-of-QSL.git).

###### Acknowledgements.

We thank Maissam Barkeshli, Dominic Else, Meng Guo, Yin-Chen He, T. Senthil, Chong Wang, Fa Wang and Qing-Rui Wang for helpful discussions. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Industry Canada and by the Province of Ontario through the Ministry of Colleges and Universities.

### Translation between the characterization of reflection symmetry and time-reversal symmetry

It is common folklore that "reflection symmetry = time-reversal symmetry \(\times\) charge-conjugation". However, one precise formulation of this statement is based on the CPT theorem, which is formulated in relativistic quantum field theory and requires Lorentz symmetry as a premise [98]. In the context of topological order, even though Lorentz symmetry is not explicitly present, it is widely believed that the statement still holds true. However, the precise correspondence between reflection symmetry and time-reversal symmetry, especially the matching between the data \(\{\rho_{\mathbf{g}};U_{\mathbf{g}}(a,b;c),\eta_{a}(\mathbf{g},\mathbf{h})\}\) for these two symmetries, has received little attention in the literature. We summarize this correspondence in this appendix. For this purpose, we need more formal treatments of the topological order in terms of a unitary modular tensor category (UMTC), which go beyond what is reviewed in Sec. III, and we refer the interested readers to Sec. II of Ref. [19] for the basics. Following the convention of Refs. [9; 99; 69], we model the time-reversal symmetry action on the UMTC as a \(\mathbb{C}\)-anti-linear functor and the (unitary) reflection symmetry action as an _anti-monoidal_ functor (also see Ref. [23] for slightly different treatments).
Therefore, mathematically speaking, in this appendix we establish a precise correspondence between the data of a \(\mathbb{C}\)-anti-linear functor and the data of an anti-monoidal functor. We believe that such a correspondence will imply the correspondence of the data \(\{\rho_{\mathbf{g}};U_{\mathbf{g}}(a,b;c),\eta_{a}(\mathbf{g},\mathbf{h})\}\) for these two symmetries on the explicit wavefunctions of the topological order, and we defer it to future study. Throughout the appendix, we assume that the reflection symmetry is unitary. We can also consider an anti-unitary reflection symmetry, which under the crystalline equivalence principle should correspond to a unitary symmetry that does not reflect spacetime. Following the treatment in this appendix, we can similarly establish a correspondence between a \(\mathbb{C}\)-linear functor for the unitary symmetry and a \(\mathbb{C}\)-anti-linear anti-monoidal functor for the anti-unitary reflection symmetry. The details can be worked out by following closely the treatment in this appendix, and we omit them.

Recall that anyon lines may be "bent" using the \(A\) and \(B\) symbols, which are defined diagrammatically. [The diagrammatic definitions of the \(A\) and \(B\) symbols and the intermediate steps matching the \(U\)-symbols of the two functors are not reproduced here.] Finally, we should write down how the \(\eta\)-symbols match. This can be done by considering the consistency equation between \(U\)-symbols and \(\eta\)-symbols. For example, suppose \(\mathbf{g}\) is some unitary symmetry that does not reverse orientation; then we have

\[\kappa_{\mathcal{R},\mathbf{g}}(a,b;c)\equiv U_{\mathcal{R}}(a,b;c)^{-1}U_{\mathbf{g}}(\,\overline{\mathcal{R}}\,b,\,\overline{\mathcal{R}}\,a;\,\overline{\mathcal{R}}\,c)^{-1}U_{\mathcal{R}\mathbf{g}}(a,b;c)=\kappa_{\mathcal{T},\mathbf{g}}\left(\overline{b},\overline{a};\overline{c}\right)^{*}=\frac{\eta_{\overline{a}}\left(\mathcal{T},\mathbf{g}\right)^{*}\eta_{\overline{b}}\left(\mathcal{T},\mathbf{g}\right)^{*}}{\eta_{\overline{c}}\left(\mathcal{T},\mathbf{g}\right)^{*}} \tag{112}\]

Hence, the correspondence between \(\eta\)-symbols should be given by the following equation,

\[\eta_{a}(\mathcal{R},\mathbf{g})=\eta_{\bar{a}}(\mathcal{T},\mathbf{g})^{*}\,. \tag{113}\]

Following a similar derivation, we have

\[\eta_{a}(\mathbf{g},\mathcal{R}) =\frac{\eta_{\bar{a}}(\mathbf{g},\mathcal{T})^{*}}{U_{\mathbf{g}}(\bar{a},a;1)}\,, \tag{114}\]
\[\eta_{a}(\mathcal{R}_{\mathbf{1}},\mathcal{R}_{\mathbf{2}}) =\eta_{a}(\mathcal{T}_{\mathbf{1}},\mathcal{T}_{\mathbf{2}})U_{\mathcal{T}_{\mathbf{1}}}(a,\bar{a};1)\,. \tag{115}\]

It is straightforward to check that the desired consistency conditions for the \(\eta\)-symbols of reflection symmetries are also satisfied.

### Wallpaper group symmetries: group structure and \(\mathbb{Z}_{2}\) cohomology

In this appendix, for the readers' convenience, we collect the necessary information about the wallpaper group symmetries appearing in this paper, \(p6\), \(p6m\), \(p4\) and \(p4m\), together with their \(\mathbb{Z}_{2}\) cohomology. This will be important in the identification of symmetry fractionalization classes and the calculation of anomaly matching. A complete list of the \(\mathbb{Z}_{2}\) cohomology for all \(17\) wallpaper group symmetries is collected in Ref. [6]. The \(\mathbb{Z}_{2}\) cohomology of a wallpaper group symmetry is presented in terms of its \(\mathbb{Z}_{2}\) cohomology ring. The product in the \(\mathbb{Z}_{2}\) cohomology ring is understood as the cup product.
Namely, given \(\omega\in H^{k}(G,\mathbb{Z}_{2})\) and \(\eta\in H^{n}(G,\mathbb{Z}_{2})\), we can define \(\omega\cup\eta\in H^{k+n}(G,\mathbb{Z}_{2})\), which is abbreviated to \(\omega\eta\) in this paper, such that

\[(\omega\cup\eta)(g_{1},\ldots,g_{k+n})=\omega(g_{1},\ldots,g_{k})\cdot\eta(g_{k+1},\ldots,g_{k+n})\,. \tag{116}\]

Here \(\cdot\) is simply the multiplication in \(\mathbb{Z}_{2}\). By identifying a set of generators \(A_{\bullet},B_{\bullet}\), etc., we can identify all elements in the \(\mathbb{Z}_{2}\) cohomology with the help of addition and the cup product. We define a set of functions that take integers as their arguments:

\[\begin{split}& P(x)=\left\{\begin{array}{ll}1,&x\text{ is odd}\\ 0,&x\text{ is even}\end{array}\right.,\ P_{c}(x)=1-P(x),\\ &[x]_{a}=\{y=x\ (\text{mod}\ a)|0\leqslant y<a\},\ P_{ab}(x)=\left\{\begin{array}{ll}1,&x=b\ (\text{mod}\ a)\\ 0,&\text{otherwise}\end{array}\right.\end{split} \tag{117}\]

When writing down the cohomology corresponding to the LSM constraints, we also need the cohomology of \(SO(3)\) and \(SO(3)\times\mathbb{Z}_{2}^{T}\). We use \(w_{2}\in H^{2}(SO(3),\mathbb{Z}_{2})\) to denote the second Stiefel-Whitney class of \(SO(3)\), and \(t\in H^{1}(\mathbb{Z}_{2},\mathbb{Z}_{2})\) to denote the generator for the \(\mathbb{Z}_{2}\) cohomology of the time-reversal \(\mathbb{Z}_{2}^{T}\) symmetry.

### \(p6\)

This group is generated by \(T_{1}\), \(T_{2}\) and \(C_{6}\), two translations whose translation vectors have the same length and make an angle of \(2\pi/3\), and a \(6\)-fold rotational symmetry, such that

\[C_{6}^{6}=1,\qquad C_{6}T_{1}C_{6}^{-1}=T_{1}T_{2},\qquad C_{6}T_{2}C_{6}^{-1}=T_{1}^{-1},\qquad T_{1}T_{2}=T_{2}T_{1}. \tag{118}\]

An arbitrary element in \(p6\) can be written as \(g=T_{1}^{x}T_{2}^{y}C_{6}^{c}\), with \(x,y\in\mathbb{Z}\) and \(c\in\{0,1,2,3,4,5\}\). For \(g_{1}=T_{1}^{x_{1}}T_{2}^{y_{1}}C_{6}^{c_{1}}\) and \(g_{2}=T_{1}^{x_{2}}T_{2}^{y_{2}}C_{6}^{c_{2}}\), the group multiplication rule gives

\[g_{1}g_{2}=T_{1}^{x_{1}+\Delta x(x_{2},y_{2},c_{1})}T_{2}^{y_{1}+\Delta y(x_{2},y_{2},c_{1})}C_{6}^{[c_{1}+c_{2}]_{6}} \tag{119}\]

where

\[\Delta x(x,y,c)=\left\{\begin{array}{ll}x,&c=0\\ x-y,&c=1\\ -y,&c=2\\ -x,&c=3\\ -x+y,&c=4\\ y,&c=5\end{array}\right.,\quad\Delta y(x,y,c)=\left\{\begin{array}{ll}y,&c=0\\ x,&c=1\\ x-y,&c=2\\ -y,&c=3\\ -x,&c=4\\ -x+y,&c=5\end{array}\right. \tag{100}\]

The \(\mathbb{Z}_{2}\) cohomology ring of \(p6\) is

\[\mathbb{Z}_{2}[A_{c},B_{xy}]/(B_{xy}^{2}=A_{c}^{2}B_{xy}) \tag{101}\]

Here \(H^{1}(p6,\mathbb{Z}_{2})=\mathbb{Z}_{2}\), with generator \(\xi_{1}=A_{c}\), which has a representative cochain,

\[\xi_{1}(g)=c. \tag{102}\]

\(H^{2}(p6,\mathbb{Z}_{2})=\mathbb{Z}_{2}^{2}\), with generators \(\lambda_{1}=B_{xy},\quad\lambda_{2}=A_{c}^{2}\), and we can choose the representative cochains to be

\[B_{xy}(g_{1},g_{2}) =P_{60}(c_{1})y_{1}x_{2}+P_{61}(c_{1})\left(\frac{x_{2}(x_{2}-1)}{2}+y_{1}x_{2}-y_{2}(x_{2}+y_{1})\right)\]
\[+P_{62}(c_{1})\left(\frac{y_{2}(y_{2}+1)}{2}-x_{2}-y_{2}(x_{2}+y_{1})\right)+P_{63}(c_{1})(-x_{2}+y_{2}-y_{1}x_{2})\]
\[+P_{64}(c_{1})\left(\frac{x_{2}(x_{2}-1)}{2}+y_{2}-y_{1}x_{2}-y_{2}(x_{2}-y_{1})\right)+P_{65}(c_{1})\left(\frac{y_{2}(y_{2}+1)}{2}-y_{2}(x_{2}-y_{1})\right) \tag{103}\]
\[A_{c}^{2}(g_{1},g_{2}) =c_{1}c_{2} \tag{104}\]
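The group law (119) and the cochains above are straightforward to check numerically. Below is a short Python sketch (the function names are ours) that implements the multiplication rule, verifies the defining relations (118) and associativity, and spot-checks the \(\mathbb{Z}_{2}\) 2-cocycle condition for \(B_{xy}\) and \(A_{c}^{2}\) on random elements.

```python
import random

def dxy(x, y, c):
    """Delta x and Delta y of Eq. (100): the C_6 action on translations."""
    return [(x, y), (x - y, x), (-y, x - y),
            (-x, -y), (-x + y, -x), (y, -x + y)][c % 6]

def mul(g1, g2):
    """Group law of p6, Eq. (119); elements are triples (x, y, c)."""
    (x1, y1, c1), (x2, y2, c2) = g1, g2
    dx, dy = dxy(x2, y2, c1)
    return (x1 + dx, y1 + dy, (c1 + c2) % 6)

def B_xy(g1, g2):
    """Mod-2 reduction of the representative cochain of B_xy, Eq. (103)."""
    (x1, y1, c1), (x2, y2, c2) = g1, g2
    val = [y1 * x2,
           x2 * (x2 - 1) // 2 + y1 * x2 - y2 * (x2 + y1),
           y2 * (y2 + 1) // 2 - x2 - y2 * (x2 + y1),
           -x2 + y2 - y1 * x2,
           x2 * (x2 - 1) // 2 + y2 - y1 * x2 - y2 * (x2 - y1),
           y2 * (y2 + 1) // 2 - y2 * (x2 - y1)][c1 % 6]
    return val % 2

def A_c2(g1, g2):
    """Representative cochain of A_c^2, Eq. (104)."""
    return (g1[2] * g2[2]) % 2

T1, T2, C6, E = (1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)
g = E
for _ in range(6):
    g = mul(g, C6)
assert g == E                                  # C6^6 = 1
assert mul(C6, T1) == mul(mul(T1, T2), C6)     # C6 T1 C6^{-1} = T1 T2
assert mul(C6, T2) == mul((-1, 0, 0), C6)      # C6 T2 C6^{-1} = T1^{-1}

random.seed(0)
rnd = lambda: (random.randint(-3, 3), random.randint(-3, 3), random.randint(0, 5))
for _ in range(2000):
    g, h, k = rnd(), rnd(), rnd()
    assert mul(mul(g, h), k) == mul(g, mul(h, k))        # associativity
    for w in (B_xy, A_c2):                               # 2-cocycle condition mod 2
        assert (w(h, k) + w(mul(g, h), k) + w(g, mul(h, k)) + w(g, h)) % 2 == 0
```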
According to Ref. [6], the anomalies of \(p6\times SO(3)\) symmetric lattice systems in lattice homotopy classes \(a\), \(c\) and \(a+c\) can be respectively written as

\[\exp\left(\pi i(B_{xy}+A_{c}^{2})w_{2}\right)\,, \tag{105}\]
\[\exp\left(\pi iB_{xy}w_{2}\right)\,,\] (106)
\[\exp\left(\pi iA_{c}^{2}w_{2}\right)\,. \tag{107}\]

### \(p6m\)

This group is generated by \(T_{1}\), \(T_{2}\), \(C_{6}\) and \(M\), where the first three generators have the same properties as those in \(p6\), and the last one is a mirror symmetry whose mirror axis passes through the \(C_{6}\) center and bisects \(T_{1}\) and \(T_{2}\), such that

\[M^{2} =1,\qquad MC_{6}M=C_{6}^{-1},\qquad MT_{1}M=T_{2},\qquad MT_{2}M=T_{1}, \tag{108}\]
\[C_{6}^{6} =1,\qquad C_{6}T_{1}C_{6}^{-1}=T_{1}T_{2},\qquad C_{6}T_{2}C_{6}^{-1}=T_{1}^{-1},\qquad T_{1}T_{2}=T_{2}T_{1}.\]

An arbitrary element in \(p6m\) can be written as \(g=T_{1}^{x}T_{2}^{y}C_{6}^{c}M^{m}\), with \(x,y\in\mathbb{Z}\), \(c\in\{0,1,2,3,4,5\}\) and \(m\in\{0,1\}\). For \(g_{1}=T_{1}^{x_{1}}T_{2}^{y_{1}}C_{6}^{c_{1}}M^{m_{1}}\) and \(g_{2}=T_{1}^{x_{2}}T_{2}^{y_{2}}C_{6}^{c_{2}}M^{m_{2}}\), the group multiplication rule gives

\[g_{1}g_{2}=T_{1}^{x_{1}+\Delta x(X,Y,c_{1})}T_{2}^{y_{1}+\Delta y(X,Y,c_{1})}C_{6}^{[c_{1}+(-1)^{m_{1}}c_{2}]_{6}}M^{P(m_{1}+m_{2})} \tag{109}\]

where \(\Delta x(x,y,c)\) and \(\Delta y(x,y,c)\) are defined in Eq. (100), and

\[X =P_{c}(m_{1})x_{2}+P(m_{1})y_{2} \tag{110}\]
\[Y =P_{c}(m_{1})y_{2}+P(m_{1})x_{2}\]

(these signs implement the relations \(MT_{1}M=T_{2}\) and \(MT_{2}M=T_{1}\) of Eq. (108): conjugation by \(M\) simply exchanges the two translation directions). The \(\mathbb{Z}_{2}\) cohomology ring of \(p6m\) is

\[\mathbb{Z}_{2}[A_{c},A_{m},B_{xy}]/\left(B_{xy}^{2}=(A_{c}^{2}+A_{c}A_{m})B_{xy}\right) \tag{111}\]

Here \(H^{1}(p6m,\mathbb{Z}_{2})=\mathbb{Z}_{2}^{2}\), with generators \(\xi_{1}=A_{c},\quad\xi_{2}=A_{m}\), which have representative cochains,

\[\xi_{1}(g)=c,\qquad\xi_{2}(g)=m. \tag{111}\]

\(H^{2}(p6m,\mathbb{Z}_{2})=\mathbb{Z}_{2}^{4}\), with generators \(\lambda_{1}=B_{xy},\quad\lambda_{2}=A_{c}^{2},\quad\lambda_{3}=A_{c}A_{m},\quad\lambda_{4}=A_{m}^{2}\), and we can choose the representative cochains to be

\[B_{xy}(g_{1},g_{2}) =P_{60}(c_{1})\left[P_{c}(m_{1})y_{1}x_{2}+m_{1}y_{2}(x_{2}+y_{1})\right]\]
\[\quad+P_{61}(c_{1})\left[P_{c}(m_{1})\left(\frac{x_{2}(x_{2}-1)}{2}+y_{1}x_{2}-y_{2}(x_{2}+y_{1})\right)+m_{1}\left(\frac{y_{2}(y_{2}-1)}{2}+y_{1}(-x_{2}+y_{2})\right)\right]\]
\[\quad+P_{62}(c_{1})\left[P_{c}(m_{1})\left(\frac{y_{2}(y_{2}+1)}{2}-x_{2}-y_{2}(x_{2}+y_{1})\right)+m_{1}\left(\frac{x_{2}(x_{2}+1)}{2}-y_{2}-y_{1}x_{2}\right)\right]\]
\[\quad+P_{63}(c_{1})\left[P_{c}(m_{1})(-x_{2}+y_{2}-y_{1}x_{2})+m_{1}(x_{2}-y_{2}+y_{2}(x_{2}-y_{1}))\right]\]
\[\quad+P_{64}(c_{1})\left[P_{c}(m_{1})\left(\frac{x_{2}(x_{2}-1)}{2}+y_{2}-y_{1}x_{2}-y_{2}(x_{2}-y_{1})\right)+m_{1}\left(\frac{y_{2}(y_{2}-1)}{2}+x_{2}+y_{1}(x_{2}-y_{2})\right)\right]\]
\[\quad+P_{65}(c_{1})\left[P_{c}(m_{1})\left(\frac{y_{2}(y_{2}+1)}{2}-y_{2}(x_{2}-y_{1})\right)+m_{1}\left(\frac{x_{2}(x_{2}+1)}{2}+y_{1}x_{2}\right)\right] \tag{112}\]
\[A_{c}^{2}(g_{1},g_{2}) =c_{1}c_{2},\qquad A_{c}A_{m}(g_{1},g_{2})=m_{1}c_{2},\qquad A_{m}^{2}(g_{1},g_{2})=m_{1}m_{2} \tag{113}\]

According to Ref. [6], the anomalies of \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\) symmetric lattice systems in lattice homotopy classes \(a\), \(c\) and \(a+c\) can be respectively written as

\[\exp\left(\pi i(B_{xy}+A_{c}^{2}+A_{c}A_{m})(w_{2}+t^{2})\right)\,, \tag{114}\]
\[\exp\left(\pi iB_{xy}(w_{2}+t^{2})\right)\,,\] (115)
\[\exp\left(\pi i(A_{c}^{2}+A_{c}A_{m})(w_{2}+t^{2})\right)\,.
\tag{116}\]

### \(p4\)

This group is generated by \(T_{1}\), \(T_{2}\) and \(C_{4}\), two translations with perpendicular translation vectors of equal length, and a \(4\)-fold rotational symmetry, such that

\[C_{4}^{4}=1,\qquad C_{4}T_{1}C_{4}^{-1}=T_{2},\qquad C_{4}T_{2}C_{4}^{-1}=T_{1}^{-1},\qquad T_{1}T_{2}=T_{2}T_{1}. \tag{117}\]

An arbitrary element in \(p4\) can be written as \(g=T_{1}^{x}T_{2}^{y}C_{4}^{c}\), with \(x,y\in\mathbb{Z}\) and \(c\in\{0,1,2,3\}\). For \(g_{1}=T_{1}^{x_{1}}T_{2}^{y_{1}}C_{4}^{c_{1}}\) and \(g_{2}=T_{1}^{x_{2}}T_{2}^{y_{2}}C_{4}^{c_{2}}\), the group multiplication rule gives

\[g_{1}g_{2}=T_{1}^{x_{1}+\Delta x(x_{2},y_{2},c_{1})}T_{2}^{y_{1}+\Delta y(x_{2},y_{2},c_{1})}C_{4}^{[c_{1}+c_{2}]_{4}} \tag{118}\]

where

\[\Delta x(x,y,c)=\left\{\begin{array}{ll}x,&c=0\\ -y,&c=1\\ -x,&c=2\\ y,&c=3\end{array}\right.,\quad\Delta y(x,y,c)=\left\{\begin{array}{ll}y,&c=0\\ x,&c=1\\ -y,&c=2\\ -x,&c=3\end{array}\right. \tag{119}\]

The \(\mathbb{Z}_{2}\) cohomology ring of \(p4\) is

\[\begin{split}\mathbb{Z}_{2}[A_{c},A_{x+y},B_{c^{2}},B_{xy}]/\big{(}A_{c}^{2}=0,\ A_{c}A_{x+y}=0,\\ B_{xy}A_{x+y}=B_{xy}A_{c},\ B_{c^{2}}A_{x+y}=A_{x+y}^{3}+B_{xy}A_{x+y},\ B_{xy}^{2}=B_{c^{2}}B_{xy}\big{)}\end{split} \tag{120}\]

Here \(H^{1}(p4,\mathbb{Z}_{2})=\mathbb{Z}_{2}^{2}\), with generators \(\xi_{1}=A_{x+y},\quad\xi_{2}=A_{c}\), which have representative cochains,

\[\xi_{1}(g)=x+y,\qquad\xi_{2}(g)=c. \tag{121}\]

\(H^{2}(p4,\mathbb{Z}_{2})=\mathbb{Z}_{2}^{3}\), with generators \(\lambda_{1}=B_{xy},\quad\lambda_{2}=B_{c^{2}},\quad\lambda_{3}=A_{x+y}^{2}\), and we can choose the representative cochains to be

\[B_{xy}(g_{1},g_{2})=P_{c}(c_{1})y_{1}x_{2}+P(c_{1})y_{2}(y_{1}+x_{2}) \tag{111}\]
\[B_{c^{2}}(g_{1},g_{2})=\frac{[c_{1}]_{4}+[c_{2}]_{4}-[c_{1}+c_{2}]_{4}}{4}\] (112)
\[A_{x+y}^{2}(g_{1},g_{2})=(x_{1}+y_{1})(x_{2}+y_{2})\] (113)

According to Ref. [6], the anomalies of \(p4\times SO(3)\) symmetric lattice systems in lattice homotopy classes \(a\), \(b\), \(c\), \(a+b\), \(a+c\), \(b+c\) and \(a+b+c\) can be respectively written as

\[\exp\left(\pi i(B_{xy}+B_{c^{2}}+A_{x+y}^{2})w_{2}\right)\,, \tag{114}\]
\[\exp\left(\pi iB_{xy}w_{2}\right)\,,\] (115)
\[\exp\left(\pi iA_{x+y}^{2}w_{2}\right)\,,\] (116)
\[\exp\left(\pi i(B_{c^{2}}+A_{x+y}^{2})w_{2}\right)\,,\] (117)
\[\exp\left(\pi i(B_{xy}+B_{c^{2}})w_{2}\right)\,,\] (118)
\[\exp\left(\pi i(B_{xy}+A_{x+y}^{2})w_{2}\right)\,,\] (119)
\[\exp\left(\pi iB_{c^{2}}w_{2}\right)\,. \tag{120}\]
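As with \(p6\) above, the \(p4\) data can be sanity-checked numerically; the sketch below (function names ours) verifies the relations (117), associativity, and the 2-cocycle condition for the three generators of \(H^{2}(p4,\mathbb{Z}_{2})\).

```python
import random

def dxy(x, y, c):
    """Delta x, Delta y of Eq. (119): the C_4 action on translations."""
    return [(x, y), (-y, x), (-x, -y), (y, -x)][c % 4]

def mul(g1, g2):
    """Group law of p4, Eq. (118); elements are triples (x, y, c)."""
    (x1, y1, c1), (x2, y2, c2) = g1, g2
    dx, dy = dxy(x2, y2, c1)
    return (x1 + dx, y1 + dy, (c1 + c2) % 4)

def B_xy(g1, g2):                 # Eq. (111), reduced mod 2
    (x1, y1, c1), (x2, y2, c2) = g1, g2
    return (y1 * x2 if c1 % 2 == 0 else y2 * (y1 + x2)) % 2

def B_c2(g1, g2):                 # Eq. (112): the "carry" of composing rotations
    c1, c2 = g1[2] % 4, g2[2] % 4
    return (c1 + c2 - (c1 + c2) % 4) // 4

def A_xy2(g1, g2):                # Eq. (113): cup square of the 1-cocycle x + y
    return ((g1[0] + g1[1]) * (g2[0] + g2[1])) % 2

T1, T2, C4 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert mul(C4, T1) == mul(T2, C4)               # C4 T1 C4^{-1} = T2
assert mul(C4, T2) == mul((-1, 0, 0), C4)       # C4 T2 C4^{-1} = T1^{-1}

random.seed(1)
rnd = lambda: (random.randint(-3, 3), random.randint(-3, 3), random.randint(0, 3))
for _ in range(2000):
    g, h, k = rnd(), rnd(), rnd()
    assert mul(mul(g, h), k) == mul(g, mul(h, k))        # associativity
    for w in (B_xy, B_c2, A_xy2):                        # 2-cocycle condition mod 2
        assert (w(h, k) + w(mul(g, h), k) + w(g, mul(h, k)) + w(g, h)) % 2 == 0
```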
### \(p4m\)

This group is generated by \(T_{1}\), \(T_{2}\), \(C_{4}\) and \(M\), where the first three generators have the same properties as those in \(p4\), and the last generator \(M\) is a mirror symmetry that flips the translation vector of \(T_{1}\), such that

\[M^{2}=1,\qquad MC_{4}M=C_{4}^{-1},\qquad MT_{1}M=T_{1}^{-1},\qquad MT_{2}M=T_{2}, \tag{121}\]
\[C_{4}^{4}=1,\qquad C_{4}T_{1}C_{4}^{-1}=T_{2},\qquad C_{4}T_{2}C_{4}^{-1}=T_{1}^{-1},\qquad T_{1}T_{2}=T_{2}T_{1}.\]

An arbitrary element in \(p4m\) can be written as \(g=T_{1}^{x}T_{2}^{y}C_{4}^{c}M^{m}\), with \(x,y\in\mathbb{Z}\), \(c\in\{0,1,2,3\}\) and \(m\in\{0,1\}\). For \(g_{1}=T_{1}^{x_{1}}T_{2}^{y_{1}}C_{4}^{c_{1}}M^{m_{1}}\) and \(g_{2}=T_{1}^{x_{2}}T_{2}^{y_{2}}C_{4}^{c_{2}}M^{m_{2}}\), the group multiplication rule gives

\[g_{1}g_{2}=T_{1}^{x_{1}+\Delta x((-1)^{m_{1}}x_{2},y_{2},c_{1})}T_{2}^{y_{1}+\Delta y((-1)^{m_{1}}x_{2},y_{2},c_{1})}C_{4}^{[c_{1}+(-1)^{m_{1}}c_{2}]_{4}}M^{P(m_{1}+m_{2})} \tag{122}\]

where \(\Delta x(x,y,c)\) and \(\Delta y(x,y,c)\) are defined in Eq. (119). The \(\mathbb{Z}_{2}\) cohomology ring of \(p4m\) is

\[\mathbb{Z}_{2}[A_{c},A_{x+y},A_{m},B_{c^{2}},B_{xy}]/ \big{(}A_{c}(A_{c}+A_{m})=0,\ A_{c}A_{x+y}=0, \tag{123}\]
\[B_{xy}A_{x+y}=B_{xy}(A_{c}+A_{m}),\ B_{c^{2}}A_{x+y}=A_{x+y}^{3}+A_{m}A_{x+y}^{2}+B_{xy}A_{x+y},\]
\[B_{xy}^{2}=B_{c^{2}}B_{xy}\big{)}\]

Here \(H^{1}(p4m,\mathbb{Z}_{2})=\mathbb{Z}_{2}^{3}\), with generators \(\xi_{1}=A_{x+y},\quad\xi_{2}=A_{c},\quad\xi_{3}=A_{m}\), which have representative cochains,

\[\xi_{1}(g)=x+y,\qquad\xi_{2}(g)=c,\qquad\xi_{3}(g)=m. \tag{124}\]

\(H^{2}(p4m,\mathbb{Z}_{2})=\mathbb{Z}_{2}^{6}\), with generators \(\lambda_{1}=B_{xy},\quad\lambda_{2}=B_{c^{2}},\quad\lambda_{3}=A_{x+y}^{2},\quad\lambda_{4}=A_{x+y}A_{m},\quad\lambda_{5}=A_{c}^{2},\quad\lambda_{6}=A_{m}^{2}\), and we can choose the representative cochains to be

\[B_{xy}(g_{1},g_{2})=P_{c}(c_{1})y_{1}x_{2}+P(c_{1})y_{2}(y_{1}+x_{2}) \tag{125}\]
\[B_{c^{2}}(g_{1},g_{2})=\frac{[c_{1}]_{4}+(-1)^{m_{1}}[c_{2}]_{4}-[c_{1}+(-1)^{m_{1}}c_{2}]_{4}}{4}\] (126)
\[A_{x+y}^{2}(g_{1},g_{2})=(x_{1}+y_{1})(x_{2}+y_{2}),\quad A_{x+y}A_{m}(g_{1},g_{2})=m_{1}(x_{2}+y_{2}),\] (127)
\[A_{c}^{2}(g_{1},g_{2})=c_{1}c_{2},\quad A_{m}^{2}(g_{1},g_{2})=m_{1}m_{2} \tag{128}\]

According to Ref. [6], the anomalies of \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\) symmetric lattice systems in lattice homotopy classes \(a\), \(b\), \(c\), \(a+b\), \(a+c\), \(b+c\) and \(a+b+c\) can be respectively written as

\[\exp\left(\pi i(B_{xy}+B_{c^{2}}+A_{x+y}(A_{x+y}+A_{m}))(w_{2}+t^{2})\right)\,, \tag{101}\]
\[\exp\left(\pi iB_{xy}(w_{2}+t^{2})\right)\,,\] (102)
\[\exp\left(\pi iA_{x+y}(A_{x+y}+A_{m})(w_{2}+t^{2})\right)\,,\] (103)
\[\exp\left(\pi i(B_{c^{2}}+A_{x+y}(A_{x+y}+A_{m}))(w_{2}+t^{2})\right)\,,\] (104)
\[\exp\left(\pi i(B_{xy}+B_{c^{2}})(w_{2}+t^{2})\right)\,,\] (105)
\[\exp\left(\pi i(B_{xy}+A_{x+y}(A_{x+y}+A_{m}))(w_{2}+t^{2})\right)\,,\] (106)
\[\exp\left(\pi iB_{c^{2}}(w_{2}+t^{2})\right)\,. \tag{107}\]

### Details of realizations: Anyon permutation patterns and symmetry fractionalization classes

In this appendix, for the topological orders appearing in this paper, we give the full details of all possible symmetry fractionalization classes given different anyon permutation patterns, including the explicit representative cochain for each generator of the symmetry fractionalization classes. For \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{4}\) topological orders, we also upload codes with which one can 1) see all symmetry fractionalization classes of the symmetry-enriched states within each lattice homotopy class, and 2) check which lattice homotopy class a given symmetry-enriched state belongs to [63]. As for \(\mathbb{Z}_{3}\), \(U(1)_{2}\times U(1)_{-2}\) and \(U(1)_{4}\times U(1)_{-4}\) topological orders, all symmetry enrichment patterns lead to anomaly-free states.
As reviewed in Sec. III.2, given a topological order and how the symmetry \(G\) permutes the anyons of this topological order, all possible symmetry fractionalization classes form a torsor over \(H^{2}(G,\mathcal{A})\), where \(\mathcal{A}\) is the group formed by the abelian anyons in this topological order (to simplify the notation, in this appendix we will not write down the subscript \(\rho\) of \(H^{2}_{\rho}(G,\mathcal{A})\)). Given a reference set of \(\eta\)-symbols for \(G\), which can be chosen to come from the pullback of the \(\eta\)-symbols of the topological symmetry using Eq. (5), we can identify all other symmetry fractionalization classes from Eq. (4). Hence, all we need to do is to identify all elements in \(H^{2}(G,\mathcal{A})\). More precisely, we need to write down the representative cochains of all generators of \(H^{2}(G,\mathcal{A})\).

It turns out that elements in \(H^{2}(G,\mathcal{A})\) can usually be determined by their relation to \(H^{2}(G,\mathbb{Z}_{2})\) and \(H^{2}(G,\mathbb{Z})\). The \(\mathbb{Z}_{2}\) cohomology of the wallpaper group symmetries involved in this paper has been collected in Appendix B, and the \(\mathbb{Z}_{2}\) cohomology of all wallpaper group symmetries is worked out in Ref. [6]. Also recall that we use \(t\) to denote the generator of \(H^{1}(\mathbb{Z}_{2}^{T},\mathbb{Z}_{2})\), as in Appendix B.

Now we explain some technical tricks to identify the elements in \(H^{2}(G,\mathcal{A})\). Let us specialize to the case where \(\mathcal{A}=\mathbb{Z}_{N}\) (with a potentially nontrivial \(G\)-action on \(\mathcal{A}\)). Consider the projection map \(p\) from \(\mathbb{Z}\) to \(\mathbb{Z}_{N}\),

\[p\colon\mathbb{Z}\to\mathbb{Z}_{N}\,, \tag{108}\]

which induces the map between \(\mathbb{Z}\) cohomology and \(\mathbb{Z}_{N}\) cohomology

\[p_{*}\colon H^{k}(G,\mathbb{Z})\to H^{k}(G,\mathbb{Z}_{N})\,. \tag{109}\]

Given an element \([\omega]\in H^{k}(G,\mathbb{Z})\) with some representative cochain \(\omega\), the representative cochain of \([p_{*}(\omega)]\) is identically \(\omega\), with the outcome understood as an element in \(\mathbb{Z}_{N}\) instead of \(\mathbb{Z}\). We will use \(\widetilde{\omega}\) to label the obtained element in \(H^{k}(G,\mathbb{Z}_{N})\).

To identify an element in \(H^{k}(G,\mathbb{Z})\), it is usually helpful to consider the Bockstein homomorphism [100] associated to the short exact sequence \(1\to\mathbb{Z}\to\mathbb{Z}\to\mathbb{Z}_{2}\to 1\),

\[\beta\colon H^{k-1}(G,\mathbb{Z}_{2})\to H^{k}(G,\mathbb{Z})\,. \tag{110}\]

In particular, for \(k=2\), consider an element \([x]\in H^{1}(G,\mathbb{Z}_{2})\) with representative cochain \(x\); the representative cochain of \([\beta(x)]\) is given by

\[\beta(x)(\mathbf{g},\mathbf{h})=\frac{x(\mathbf{g})+(-1)^{q(\mathbf{g})}x(\mathbf{h})-x(\mathbf{g}\mathbf{h})}{2}\,, \tag{111}\]

where we demand that \(x(\mathbf{g})\) takes values only in \(\{0,1\}\), and \(q(\mathbf{g})\) denotes whether the \(\mathbf{g}\) action on \(\mathbb{Z}\) is trivial (\(q(\mathbf{g})=0\)) or nontrivial (\(q(\mathbf{g})=1\)).

If \(N=2N^{\prime}\) is even, we can also consider the map \(i\) from \(\mathbb{Z}_{2}\) to \(\mathbb{Z}_{2N^{\prime}}\) defined by multiplication by \(N^{\prime}\), i.e.,

\[i\colon\mathbb{Z}_{2}\to\mathbb{Z}_{2N^{\prime}}\,. \tag{100}\]

It induces the map from \(\mathbb{Z}_{2}\) cohomology to \(\mathbb{Z}_{2N^{\prime}}\) cohomology

\[i_{*}\colon H^{k}(G,\mathbb{Z}_{2})\to H^{k}(G,\mathbb{Z}_{2N^{\prime}})\,.
\tag{101}\]

Utilizing the map \(i_{*}\), we can use elements in \(H^{k}(G,\mathbb{Z}_{2})\) to identify elements in \(H^{k}(G,\mathbb{Z}_{2N^{\prime}})\). In particular, given an element \([\omega]\in H^{k}(G,\mathbb{Z}_{2})\) with some representative cochain \(\omega\), the representative cochain of \([i_{*}(\omega)]\) is simply \(N^{\prime}\omega\). For clarity purposes, later we will omit the bracket and use \(N^{\prime}\omega\) to label the obtained element in \(H^{k}(G,\mathbb{Z}_{2N^{\prime}})\).

The symmetry group \(G\) we consider usually takes the form \(G_{1}\times G_{2}\). In this situation, we can specify an element in the cohomology of \(G\) by specifying an element in the cohomology of \(G_{1}\) or \(G_{2}\). Namely, we can consider the projection,

\[f\colon G\to G_{1}\,. \tag{102}\]

It induces the map from the cohomology of \(G_{1}\) to the cohomology of \(G\)

\[f^{*}\colon H^{k}(G_{1},\mathcal{A})\to H^{k}(G,\mathcal{A})\,. \tag{103}\]

Hence, given an element \([\omega]\in H^{k}(G_{1},\mathcal{A})\) with some representative cochain \(\omega\), we can use it to specify \([f^{*}\omega]\). Writing an element \(g\in G\) as \(g_{1}g_{2}\) with \(g_{1}\in G_{1}\) and \(g_{2}\in G_{2}\), the representative cochain of \([f^{*}\omega]\) can be identified as

\[f^{*}\omega(g,h,\dots)=\omega(g_{1},h_{1},\dots)\,. \tag{104}\]

In this appendix, to simplify the notation, we will not explicitly write down the \(f^{*}\) symbol in front, and we use the cohomology and cochain of a subgroup to implicitly refer to the cohomology and cochain of the total group. For example, we may specify an element \([\omega_{1}]\in H^{k}(G_{1},\mathcal{A})\) and an element \([\omega_{2}]\in H^{k}(G_{2},\mathcal{A})\); then an element in \(H^{k}(G,\mathcal{A})\) written as \(\omega_{1}+\omega_{2}\) really means \([f_{1}^{*}\omega_{1}]+[f_{2}^{*}\omega_{2}]\), where \(f_{1,2}\colon G\to G_{1,2}\) are the projections from \(G\) to \(G_{1,2}\). It turns out that the above is enough to determine almost all symmetry fractionalization classes of our interest.

In this paper, for chiral topological orders, the symmetry groups we consider are \(p6\times SO(3)\) and \(p4\times SO(3)\). For non-chiral topological orders, we explicitly discuss the symmetry fractionalization classes for \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\) and \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\) in this appendix, and we can simply ignore all terms involving \(w_{2}\) to get the corresponding symmetry fractionalization classes for \(p6m\times\mathbb{Z}_{2}^{T}\) and \(p4m\times\mathbb{Z}_{2}^{T}\), where \(w_{2}\) will be defined later and detects whether a given anyon carries a half-odd-integer spin under \(SO(3)\).

### \(\mathrm{U}(1)_{2N}\)

In this case, we have \(\mathcal{A}=\mathbb{Z}_{2N}\). The topological symmetry of \(\mathrm{U}(1)_{2N}\) is complicated for general \(N\). For \(N=1\), there is no nontrivial topological symmetry. For \(N\geqslant 2\), there is always a \(\mathbb{Z}_{2}\) subgroup of the topological symmetry generated by the charge conjugation symmetry \(C\), under which the anyon \((a)\) maps to \(([-a]_{2N})\). For this topological symmetry, we can take

\[U_{C}((a),(b);([a+b]_{2N}))=\begin{cases}(-1)^{a}&b>0\\ 1&b=0\end{cases} \tag{105}\]

and a set of \(\eta\)-symbols equal to \(1\). When \(2\leqslant N\leqslant 5\), this is the whole topological symmetry group. In the following discussion we consider general \(N\geqslant 2\), but limit ourselves to the cases where \(G\) can act only by charge conjugation.
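As a quick consistency check, one can verify numerically that charge conjugation, together with the \(U\)-symbols of Eq. (105), is compatible with the \(F\)- and \(R\)-symbols of \(\mathrm{U}(1)_{2N}\). The sketch below is ours: it assumes the standard gauge \(F^{abc}=(-1)^{a\,\kappa(b,c)}\) (with \(\kappa(b,c)=1\) if \(b+c\geqslant 2N\) and \(0\) otherwise) and \(R^{ab}=e^{i\pi ab/2N}\) for \(\mathrm{U}(1)_{2N}\), together with our transcription of the standard compatibility conditions for a unitary topological symmetry.

```python
import numpy as np
from itertools import product

N = 3
K = 2 * N                                  # anyon labels a in {0, ..., 2N-1}
rho = lambda a: (-a) % K                   # charge conjugation C

def F(a, b, c):                            # F-symbols of U(1)_{2N}
    return (-1.0) ** (a * ((b + c) // K))  # (b + c) // K = kappa(b, c)

def R(a, b):                               # R-symbols of U(1)_{2N}
    return np.exp(1j * np.pi * a * b / K)

def U(a, b):                               # U-symbols of C, Eq. (105)
    return (-1.0) ** a if b > 0 else 1.0

# C preserves the topological spins theta_a = R^{aa}
assert all(np.isclose(R(a, a), R(rho(a), rho(a))) for a in range(K))

# compatibility of (rho, U_C) with the F-symbols ...
for a, b, c in product(range(K), repeat=3):
    ab, bc = (a + b) % K, (b + c) % K
    lhs = F(rho(a), rho(b), rho(c))
    rhs = U(rho(a), rho(b)) * U(rho(ab), rho(c)) * F(a, b, c) \
        / (U(rho(b), rho(c)) * U(rho(a), rho(bc)))
    assert np.isclose(lhs, rhs)

# ... and with the R-symbols
for a, b in product(range(K), repeat=2):
    assert np.isclose(R(rho(a), rho(b)),
                      U(rho(b), rho(a)) * R(a, b) / U(rho(a), rho(b)))
```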
For \(N=1\), we just need to ignore the cases where \(G\) can permute anyons nontrivially. We start by explaining the symmetry fractionalization classes of the warmup example in Sec. VI.1, \(\mathbb{Z}_{2}\times SO(3)\). Denote the generator of the \(\mathbb{Z}_{2}\) group by \(C_{2}\). Depending on whether \(C_{2}\) permutes anyons or not, we have two possibilities.

1. Trivial \(C_{2}\) action. The possible symmetry fractionalization classes are given by
\[H^{2}(\mathbb{Z}_{2}\times SO(3),\mathbb{Z}_{2N})=\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\,.\] (109)
We denote the generator of the first \(\mathbb{Z}_{2}\) piece by \(\widetilde{\beta}(x)\), where \(x\in H^{1}(\mathbb{Z}_{2},\mathbb{Z}_{2})\) is the nontrivial generator, and the tilde is because it comes from the image of \(p_{*}\colon H^{2}(\mathbb{Z}_{2}\times SO(3),\mathbb{Z})\to H^{2}(\mathbb{Z}_{2}\times SO(3),\mathbb{Z}_{2N})\), with trivial \(\mathbb{Z}_{2}\times SO(3)\) action on \(\mathbb{Z}\). We can explicitly write down the representative cochain of \(\widetilde{\beta}(x)\) according to Eq. (111),
\[\widetilde{\beta}(x)(C^{i}_{2},C^{j}_{2})=\frac{i+j-[i+j]_{2}}{2}=ij\,,\] (110)
where \(i,j\in\{0,1\}\). We denote the generator of the second \(\mathbb{Z}_{2}\) piece by \(Nw_{2}\), where \(w_{2}\in H^{2}(SO(3),\mathbb{Z}_{2})\) is the second Stiefel-Whitney class, and the \(N\) in front is because it comes from the image of \(i_{*}\colon H^{2}(\mathbb{Z}_{2}\times SO(3),\mathbb{Z}_{2})\to H^{2}(\mathbb{Z}_{2}\times SO(3),\mathbb{Z}_{2N})\). The explicit representative cochain of \(w_{2}\), when restricted to the subgroup generated by the \(\pi\)-rotations about two orthogonal axes, is
\[w_{2}(U^{i_{1}}_{\pi}U^{\prime i_{2}}_{\pi},U^{j_{1}}_{\pi}U^{\prime j_{2}}_{\pi})=i_{1}j_{1}+i_{2}j_{2}+i_{1}j_{2}\mod 2\,.\] (111)

2. Nontrivial \(C_{2}\) action. The possible symmetry fractionalization classes are given by
\[H^{2}(\mathbb{Z}_{2}\times SO(3),\mathbb{Z}_{2N})=\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\,.\] (112)
We denote the generators of the two \(\mathbb{Z}_{2}\) pieces by \(Nx^{2}\) and \(Nw_{2}\), because both come from the image of \(i_{*}\colon H^{2}(\mathbb{Z}_{2}\times SO(3),\mathbb{Z}_{2})\to H^{2}(\mathbb{Z}_{2}\times SO(3),\mathbb{Z}_{2N})\). In particular, the representative cochain of \(x^{2}\in H^{2}(\mathbb{Z}_{2}\times SO(3),\mathbb{Z}_{2})\) is still
\[x^{2}(C^{i}_{2},C^{j}_{2})=\frac{i+j-[i+j]_{2}}{2}=ij\,,\] (113)
and the representative cochain of \(Nx^{2}\in H^{2}(\mathbb{Z}_{2}\times SO(3),\mathbb{Z}_{2N})\) is simply Eq. (113) multiplied by \(N\).

For \(p6\times SO(3)\), there are two possible anyon permutation patterns, determined by whether \(C_{6}\) permutes anyons or not. The classification of symmetry fractionalization classes and the generators for the two possibilities are listed in Table 15. The generators with a tilde come from the image of \(p_{*}\colon H^{2}(p6\times SO(3),\mathbb{Z})\to H^{2}(p6\times SO(3),\mathbb{Z}_{2N})\), with different actions of \(p6\times SO(3)\) on \(\mathbb{Z}\). Now we present the information about \(H^{2}(p6,\mathbb{Z})\) and the generators for completeness.

1. Trivial action on \(\mathbb{Z}\).
\[H^{2}(p6,\mathbb{Z})=\mathbb{Z}\oplus\mathbb{Z}_{6}\,.\] (114)
We denote the generators of the \(\mathbb{Z}\) piece and the \(\mathbb{Z}_{6}\) piece by \(\mathscr{B}^{(1)}_{xy}\) and \(\mathscr{B}^{(1)}_{c^{2}}\), respectively, which have representative cochains,
\[\mathscr{B}^{(1)}_{xy}(g_{1},g_{2}) =P_{60}(c_{1})y_{1}x_{2}+P_{61}(c_{1})\left(\frac{x_{2}(x_{2}-1)}{2}+y_{1}x_{2}-y_{2}(x_{2}+y_{1})\right)\]
\[+P_{62}(c_{1})\left(\frac{y_{2}(y_{2}+1)}{2}-x_{2}-y_{2}(x_{2}+y_{1})\right)+P_{63}(c_{1})(-x_{2}+y_{2}-y_{1}x_{2})\]
\[+P_{64}(c_{1})\left(\frac{x_{2}(x_{2}-1)}{2}+y_{2}-y_{1}x_{2}-y_{2}(x_{2}-y_{1})\right)+P_{65}(c_{1})\left(\frac{y_{2}(y_{2}+1)}{2}-y_{2}(x_{2}-y_{1})\right)\] (115)
\[\mathscr{B}^{(1)}_{c^{2}}(g_{1},g_{2}) =\frac{[c_{1}]_{6}+[c_{2}]_{6}-[c_{1}+c_{2}]_{6}}{6}\] (116)
The representative cochain of \(\mathscr{B}^{(1)}_{xy}\) has identically the same expression as Eq. (103). Note that if we think of the expression as a representative cochain of \(\mathbb{Z}_{2}\) cohomology, it does not matter whether we have a \(+\) sign or a \(-\) sign in front of an integer, because we only care about the mod \(2\) value of the expression. However, we carefully choose the signs in Eq. (103) or Eq. (115) such that the expression is a \(\mathbb{Z}\)-valued cocycle as well. Hence, we immediately see that the \(\mathbb{Z}_{2}\) reduction of \(\mathscr{B}^{(1)}_{xy}\) is \(B_{xy}\) in Eq. (103). Likewise, the \(\mathbb{Z}_{2}\) reduction of \(\mathscr{B}^{(1)}_{c^{2}}\) is \(A_{c}^{2}\) in Eq. (104).

2. \(C_{6}\) acts nontrivially on \(\mathbb{Z}\).
\[H^{2}(p6,\mathbb{Z})=0\,.\] (109)

For \(p4\times SO(3)\), there are four possible anyon permutation patterns, determined by whether \(C_{4}\) and \(T_{1,2}\) permute anyons or not. The classification of symmetry fractionalization classes and the generators are listed in Table 15. Here we present the information about \(H^{2}(p4,\mathbb{Z})\) and the generators.

1. Trivial action on \(\mathbb{Z}\).
\[H^{2}(p4,\mathbb{Z})=\mathbb{Z}\oplus\mathbb{Z}_{4}\oplus\mathbb{Z}_{2}\,.\] (110)
We denote the generators of the \(\mathbb{Z}\), \(\mathbb{Z}_{4}\) and \(\mathbb{Z}_{2}\) pieces by \(\mathscr{B}^{(1)}_{xy}\), \(\mathscr{B}^{(1)}_{c^{2}}\) and \(\beta^{(1)}(A_{x+y})\), respectively, which have representative cochains,
\[\mathscr{B}^{(1)}_{xy}(g_{1},g_{2})=P_{40}(c_{1})y_{1}x_{2}-P_{41}(c_{1})y_{2}(x_{2}+y_{1})-P_{42}(c_{1})y_{1}x_{2}+P_{43}(c_{1})y_{2}(y_{1}-x_{2})\] (111)
\[\mathscr{B}^{(1)}_{c^{2}}(g_{1},g_{2})=\frac{[c_{1}]_{4}+[c_{2}]_{4}-[c_{1}+c_{2}]_{4}}{4}\] (112)
\[\beta^{(1)}(A_{x+y})(g_{1},g_{2})=\frac{[x_{1}+y_{1}]_{2}+[x_{2}+y_{2}]_{2}-[x_{1}+\Delta x(x_{2},y_{2},c_{1})+y_{1}+\Delta y(x_{2},y_{2},c_{1})]_{2}}{2}\] (113)

2. \(C_{4}\) acts nontrivially on \(\mathbb{Z}\).
\[H^{2}(p4,\mathbb{Z})=\mathbb{Z}_{2}\,.\] (114)
We denote the generator by \(\beta^{(2)}(A_{x+y})\), which has a representative cochain,
\[\beta^{(2)}(A_{x+y})(g_{1},g_{2})=\frac{[x_{1}+y_{1}]_{2}+(-1)^{c_{1}}[x_{2}+y_{2}]_{2}-[x_{1}+\Delta x(x_{2},y_{2},c_{1})+y_{1}+\Delta y(x_{2},y_{2},c_{1})]_{2}}{2}\] (115)

3. \(T_{1,2}\) act nontrivially on \(\mathbb{Z}\).
\[H^{2}(p4,\mathbb{Z})=\mathbb{Z}_{4}\,.\] (116)
We denote the generator by \(\mathscr{B}^{(3)}_{xy}\), which has a representative cochain,
\[\mathscr{B}^{(3)}_{xy}(g_{1},g_{2})=(-1)^{\tilde{x}_{1}}(P_{c}(c_{1})P(\tilde{y}_{1})P(\tilde{x}_{2})+P(c_{1})P(\tilde{y}_{1}+\tilde{x}_{2})P(\tilde{y}_{2}))\]
\[\text{with }\tilde{x}=x+P_{41}(c)+P_{42}(c),\qquad\tilde{y}=y+P_{42}(c)+P_{43}(c)\]
Note that the \(\mathbb{Z}_{2}\) reduction of \(\mathscr{B}^{(3)}_{xy}\) is actually \(B_{xy}+B_{c^{2}}+A_{x+y}^{2}\).

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Symmetry Group & Action & \(H^{2}(G,\mathcal{A})\) & Realizations & Generators \\ \hline \multirow{2}{*}{\(p6\times SO(3)\)} & Trivial & \(\mathbb{Z}_{2N}\oplus\mathbb{Z}_{(2N,6)}\oplus(\mathbb{Z}_{2})\) & \(4(N(N,3)+1)\) & \(\widetilde{\mathscr{B}}^{(1)}_{xy}\), \(\widetilde{\mathscr{B}}^{(1)}_{c^{2}}\), \(Nw_{2}\) \\ \cline{2-5} & \(C_{6}\colon(a)\to([-a]_{2N})\) & \((\mathbb{Z}_{2})^{3}\) & 8 & \(NB_{xy}\), \(NA_{c}^{2}\), \(Nw_{2}\) \\ \hline \multirow{4}{*}{\(p4\times SO(3)\)} & Trivial & \(\mathbb{Z}_{2N}\oplus\mathbb{Z}_{(2N,4)}\oplus(\mathbb{Z}_{2})^{2}\) & \(8(N(N,2)+1)\) & \(\widetilde{\mathscr{B}}^{(1)}_{xy}\), \(\widetilde{\mathscr{B}}^{(1)}_{c^{2}}\), \(\widetilde{\beta}(A_{x+y})\), \(Nw_{2}\) \\ \cline{2-5} & \(C_{4}\colon(a)\to([-a]_{2N})\) & \((\mathbb{Z}_{2})^{4}\) & 16 & \(NB_{xy}\), \(NB_{c^{2}}\), \(\widetilde{\beta}(A_{x+y})\), \(Nw_{2}\) \\ \cline{1-1} \cline{2-5} & \(T_{1,2}\colon(a)\to([-a]_{2N})\) & \(\mathbb{Z}_{(2N,4)}\oplus(\mathbb{Z}_{2})^{3}\) & \(8((N,2)+1)\) & \(\widetilde{\mathscr{B}}^{(3)}_{xy}\), \(NB_{c^{2}}\), \(NA_{x+y}^{2}\), \(Nw_{2}\) \\ \cline{1-1} \cline{2-5} & \(T_{1,2},C_{4}\colon(a)\to([-a]_{2N})\) & \(\mathbb{Z}_{(2N,4)}\oplus(\mathbb{Z}_{2})^{3}\) & \(8((N,2)+1)\) & \(\widetilde{\mathscr{B}}^{(4)}_{xy}\), \(NB_{c^{2}}\), \(NA_{x+y}^{2}\), \(Nw_{2}\) \\ \hline \end{tabular} \end{table} Table 15: All possible symmetry fractionalization classes of \(G=p6\times SO(3)\) and \(G=p4\times SO(3)\) for U(1)\({}_{2N}\), given all possible anyon permutation patterns. All generators with a tilde come from \(H^{2}(G,\mathbb{Z})\) via Eq. (110), and all generators with \(N\) in the front come from \(H^{2}(G,\mathbb{Z}_{2})\) via Eq. (111). When counting the number of realizations in each case, overcounts due to the equivalence from relabeling anyons have been taken care of. To simplify the notation, in this table sometimes a single symbol can have different meanings. For example, \(\widetilde{\mathscr{B}}^{(1)}_{xy}\) for \(p6\times SO(3)\) is different from \(\widetilde{\mathscr{B}}^{(1)}_{xy}\) for \(p4\times SO(3)\), and their precise meanings and expressions can be found in the discussion regarding the \(p6\times SO(3)\) and \(p4\times SO(3)\) symmetries in this appendix.

4. Both \(T_{1,2}\) and \(C_{4}\) act nontrivially on \(\mathbb{Z}\).
\[H^{2}(p4,\mathbb{Z})=\mathbb{Z}_{4}\,.\] (102)
We denote the generator by \(\mathscr{B}_{xy}^{(4)}\), which has a representative cochain,
\[\mathscr{B}_{xy}^{(4)}(g_{1},g_{2})=(-1)^{x_{1}}(P_{c}(c_{1})P(y_{1})P(x_{2})+P(c_{1})P(y_{1}+x_{2})P(y_{2}))\] (103)

### Ising\({}^{(\nu)}\)

In this case, we have \(\mathcal{A}=\mathbb{Z}_{2}\), with no nontrivial topological symmetry. The classification of symmetry fractionalization classes and the generators are listed in Table 15.

### \(\mathbb{Z}_{2}\) topological order

In this case, we have \(\mathcal{A}=(\mathbb{Z}_{2})^{2}\). The topological symmetry is \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}^{T}\).
The unitary generator of the topological symmetry is the unitary electric-magnetic duality symmetry \(S\) that exchanges \(e\) and \(m\), i.e.,

\[S\colon(a_{e},a_{m})\to(a_{m},a_{e})\,, \tag{104}\]

while the anti-unitary generator is simply the anti-unitary electric-magnetic duality symmetry that permutes anyons in the same way as \(S\). We can choose the \(U\)-symbols such that

\[U_{\mathbf{g}}(a,b;c)=\begin{cases}(-1)^{a_{m}b_{e}}&\mathbf{g}\text{ permutes anyons},\\ 1&\text{otherwise}.\end{cases} \tag{105}\]

A set of \(\eta\)-symbols can be chosen such that

\[\eta_{a}(\mathbf{g}_{1},\mathbf{g}_{2})=\begin{cases}(-1)^{a_{e}a_{m}}&\mathbf{g}_{1},\,\mathbf{g}_{2}\text{ permute anyons},\\ 1&\text{otherwise}.\end{cases} \tag{106}\]

The classification of symmetry fractionalization classes and the generators are listed in Table 15. In the following, we explicitly comment on the cases involving symmetries that permute \(e\) and \(m\). Given a group \(G\) with some element that permutes \(e\) and \(m\), we have the following short exact sequence,

\[1\to\tilde{G}\to G\to\mathbb{Z}_{2}\to 1\,, \tag{107}\]

where \(\tilde{G}\) is the subgroup of \(G\) that does not permute anyons. From the Serre spectral sequence [100], we immediately see that

\[H^{k}(G,\mathbb{Z}_{2}\oplus\mathbb{Z}_{2})\cong H^{k}(\tilde{G},\mathbb{Z}_{2})\,. \tag{108}\]

Specifically, given an element \([\tilde{w}]\in H^{2}(\tilde{G},\mathbb{Z}_{2})\) with a representative cochain \(\tilde{w}\), we can write down a representative cochain \(\omega\) of the corresponding element in \(H^{2}(G,\mathbb{Z}_{2}\oplus\mathbb{Z}_{2})\) as follows. First, choose an element \(x\notin\tilde{G}\) such that \(x^{2}=\mathbf{1}\). Then every \(g\in G\) can be decomposed as \(g=\tilde{g}x^{i}\), \(i\in\{0,1\}\), where \(i=0\) if \(g\) is an element of \(\tilde{G}\) and \(i=1\) otherwise, and \(\tilde{g}\) is an element of \(\tilde{G}\). Then we can write down the representative cochain \(\omega(g_{1},g_{2})\) on \(G\) from the representative cochain \(\tilde{w}\) of \(\tilde{G}\), i.e.,

\[\omega(g_{1},g_{2})=\left(\tilde{w}(\tilde{g}_{1},x^{i_{1}}\tilde{g}_{2}x^{i_{1}}),\tilde{w}(x\tilde{g}_{1}x,xx^{i_{1}}\tilde{g}_{2}xx^{i_{1}})\right)\,. \tag{109}\]

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Symmetry Group & Action & \(H^{2}(G,\mathcal{A})\) & Realizations & Generators \\ \hline \(p6\times SO(3)\) & Trivial & \((\mathbb{Z}_{2})^{3}\) & 8 & \(B_{xy}\), \(A_{c}^{2}\), \(w_{2}\) \\ \hline \(p4\times SO(3)\) & Trivial & \((\mathbb{Z}_{2})^{4}\) & 16 & \(B_{xy}\), \(B_{c^{2}}\), \(A_{x+y}^{2}\), \(w_{2}\) \\ \hline \hline \end{tabular} \end{table} Table 15: All possible symmetry fractionalization classes of \(p6\times SO(3)\) and \(p4\times SO(3)\) for Ising\({}^{(\nu)}\), where \(\nu\) is an odd integer.
[Table: All possible symmetry fractionalization classes of \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\) and \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\) for the \(\mathbb{Z}_{2}\) topological order, given all possible anyon permutation patterns. Columns: Symmetry Group, Action, \(H^{2}(G,\mathcal{A})\), Realizations, Generators.]

We can think of the second term as the representative cochain obtained from the conjugation action of \(x\) on \(\tilde{w}\). It is straightforward to check that \(\omega\) satisfies the cocycle equation and thus is the desired representative cochain. Therefore, for each case where some symmetry actions permute anyons, to identify the symmetry fractionalization classes we need to identify the subgroup \(\tilde{G}\) that does not permute anyons. By simply calculating the cohomology of \(\tilde{G}\), we can identify all the possible symmetry fractionalization classes. Still, usually there can be some simplification, because sometimes we can identify \(\omega\) and write down its representative cochain directly in terms of the \(\mathbb{Z}_{2}\) cohomology of \(G\). When this happens, to keep notations consistent, we will still label \(\omega\) using the \(\mathbb{Z}_{2}\) cohomology of \(G\). When we have to use the cohomology of \(\tilde{G}\), we will use \(\mathtt{A}\) or \(\mathtt{B}\) to emphasize that it refers to an element in the cohomology of the subgroup \(\tilde{G}\).

When the time-reversal symmetry \(\mathcal{T}\) permutes anyons, no matter how other symmetries act on anyons, \(\tilde{G}\) will be isomorphic to \(p6m\times SO(3)\) or \(p4m\times SO(3)\).10 The symmetry fractionalization classes can be identified accordingly.

Footnote 10: To see this, suppose some generator of \(p6m\) or \(p4m\) permutes anyons; we can then combine this generator with \(\mathcal{T}\) to form a generator of \(\tilde{G}\). It is easy to check that the resulting group is indeed isomorphic to \(p6m\times SO(3)\) or \(p4m\times SO(3)\).
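Eq. (109) can be checked mechanically. The following Python sketch is our toy illustration: it takes \(G=D_{4}\) with \(\tilde{G}=\mathbb{Z}_{4}\) (so the reflection \(x\) stands in for an anyon-permuting element), equips \(\tilde{G}\) with the standard "carry" 2-cocycle valued in \(\mathbb{Z}_{2}\), builds \(\omega\) via Eq. (109), and verifies the twisted cocycle condition, where \(G\) acts on \(\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\) by swapping the two factors for elements outside \(\tilde{G}\).

```python
from itertools import product

# G = D4: elements (r, s) with r in Z4, s in {0, 1}; Gt = rotations (s = 0).
def mul(g, h):
    (r1, s1), (r2, s2) = g, h
    return ((r1 + (-1) ** s1 * r2) % 4, (s1 + s2) % 2)

x = (0, 1)                                  # a reflection; x^2 = identity

def conj(h):                                # conjugation by x: rotations invert
    return mul(mul(x, h), x)

def wt(g, h):                               # "carry" 2-cocycle on Gt = Z4
    return ((g[0] + h[0]) // 4) % 2

def split(g):                               # g = gt * x^i with gt in Gt
    return (g[0], 0), g[1]

def omega(g1, g2):
    """Eq. (109): a (Z2 + Z2)-valued 2-cochain on G built from wt."""
    gt1, i1 = split(g1)
    gt2, i2 = split(g2)
    first = wt(gt1, gt2 if i1 == 0 else conj(gt2))
    second = wt(conj(gt1), conj(gt2) if i1 == 0 else gt2)
    return first, second

def act(g, v):                              # swap the factors if g is not in Gt
    return (v[1], v[0]) if g[1] == 1 else v

G = [(r, s) for r in range(4) for s in range(2)]
for g, h, k in product(G, repeat=3):
    terms = [act(g, omega(h, k)), omega(mul(g, h), k),
             omega(g, mul(h, k)), omega(g, h)]
    # twisted 2-cocycle condition, component-wise mod 2
    assert all(sum(t[i] for t in terms) % 2 == 0 for i in range(2))
```

The same mechanics applies verbatim to the wallpaper groups, with \(x\) chosen among the anyon-permuting generators enumerated below.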
When the time-reversal symmetry does not permute anyons, \(\tilde{G}\) will be the product of a subgroup \(\tilde{G}_{s}\) of \(p6m\) or \(p4m\) and \(SO(3)\times\mathbb{Z}_{2}^{T}\). For the cases involving \(p6m\), we have the following three possibilities:

1. \(M\colon(a_{e},a_{m})\to(a_{m},a_{e})\). Then \(\tilde{G}_{s}=p6\), generated by \(T_{1}\), \(T_{2}\) and \(C_{6}\).
2. \(C_{6}\colon(a_{e},a_{m})\to(a_{m},a_{e})\). Then \(\tilde{G}_{s}=p31m\), generated by \(T_{1}\), \(T_{2}\), \(C_{6}^{2}\) and \(M\).
3. \(C_{6},M\colon(a_{e},a_{m})\to(a_{m},a_{e})\). Then \(\tilde{G}_{s}=p3m1\), generated by \(T_{1}\), \(T_{2}\), \(C_{6}^{2}\) and \(C_{6}^{3}M\).

In these cases, it turns out that we can write down the representative cochains directly in terms of the representative cochains of the \(\mathbb{Z}_{2}\) cohomology of \(p6m\), and this is what we present in Table 19.

For the cases involving \(p4m\), we have the following seven possibilities. In these cases, the explicit representative cochains may not come from the representative cochains of the \(\mathbb{Z}_{2}\) cohomology of \(p4m\), and we really need the expressions of the representative cochains for each \(\tilde{G}_{s}\). The \(\mathbb{Z}_{2}\) cohomology of these \(\tilde{G}_{s}\) and the representative cochains of their generators can all be found in Appendix E of Ref. [6], and we follow the notation there, except that we change, e.g., \(A_{x}\) to \(\mathtt{A}_{x}\) and use a wide tilde to emphasize that we are referring to the cochain and cohomology of a subgroup.

1. \(M\colon(a_{e},a_{m})\to(a_{m},a_{e})\). Then \(\tilde{G}_{s}=p4\), generated by \(T_{1}\), \(T_{2}\) and \(C_{4}\).
2. \(C_{4}\colon(a_{e},a_{m})\to(a_{m},a_{e})\). Then \(\tilde{G}_{s}=pmm\), generated by \(T_{1}\), \(T_{2}\), \(C_{4}^{2}\) and \(M\). Now we have 10 elements in the cohomology that are "asymmetric", and we can write down their representative cochains with the help of the representative cochains of \(pmm\) and Eq. (100).
3. \(C_{4},M\colon(a_{e},a_{m})\to(a_{m},a_{e})\). Then \(\tilde{G}_{s}=cmm\), generated by \(T_{1}\), \(T_{2}\), \(C_{4}^{2}\) and \(C_{4}^{3}M\). Now we have four elements in the cohomology that are asymmetric.
4. \(T_{1,2}\colon(a_{e},a_{m})\to(a_{m},a_{e})\). Then \(\tilde{G}_{s}=p4m\), generated by \(T_{1}T_{2}\), \(T_{1}^{-1}T_{2}\), \(C_{4}\) and \(C_{4}M\). Now we have six elements in the cohomology that are asymmetric.
5. \(T_{1,2},M\colon(a_{e},a_{m})\to(a_{m},a_{e})\). Then \(\tilde{G}_{s}=p4g\), generated by \(T_{1}T_{2}\), \(T_{1}^{-1}T_{2}\), \(C_{4}\) and \(T_{2}M\).
6. \(T_{1,2},C_{4}\colon(a_{e},a_{m})\to(a_{m},a_{e})\). Then \(\tilde{G}_{s}=p4g\), generated by \(T_{1}T_{2}\), \(T_{1}^{-1}T_{2}\), \(T_{1}C_{4}\) and \(T_{1}T_{2}M\).
7. \(T_{1,2},C_{4},M\colon(a_{e},a_{m})\to(a_{m},a_{e})\). Then \(\tilde{G}_{s}=p4m\), generated by \(T_{1}T_{2}\), \(T_{1}^{-1}T_{2}\), \(T_{1}C_{4}\) and \(T_{1}T_{2}C_{4}M\). Now we have six elements in the cohomology that are asymmetric.

### \(\mathbb{Z}_{N}\) topological order (\(N\geqslant 3\))

In this case, we have \(\mathcal{A}=(\mathbb{Z}_{N})^{2}\). The topological symmetry of the \(\mathbb{Z}_{N}\) topological order is complicated for general \(N\). For \(N\geqslant 3\), there is always a subgroup \(\mathbb{Z}_{4}^{T}\rtimes\mathbb{Z}_{2}\).
The unitary \(\mathbb{Z}_{2}\) is generated by the electric-magnetic duality symmetry \(S\) that exchanges \(e\) and \(m\), i.e.,
\[S\colon(a_{e},a_{m})\to(a_{m},a_{e})\,,\]
and the anti-unitary generator \(T\) of \(\mathbb{Z}_{4}^{T}\) permutes anyons in the following way:
\[T\colon(a_{e},a_{m})\to(a_{m},[-a_{e}]_{N})\,.\]
The two generators satisfy the relations
\[S^{2}=\mathbf{1}\,,\quad T^{4}=\mathbf{1}\,,\quad STS=T^{-1}\,.\]
An element \(\mathbf{g}\in\mathbb{Z}_{4}^{T}\rtimes\mathbb{Z}_{2}\) can be labeled by \((g_{1},g_{2})\) with \(g_{1}\in\{0,\ldots,3\}\) and \(g_{2}\in\{0,1\}\), which corresponds to the element \(T^{g_{1}}S^{g_{2}}\). Given such an element \(\mathbf{g}\), the \(U\)-symbols can be chosen such that
\[U_{\mathbf{g}}(a,b;c)=\begin{cases}e^{i\frac{2\pi}{N}a_{m}b_{e}}&g_{1}+g_{2}\equiv 1\mod 2\\ 1&g_{1}+g_{2}\equiv 0\mod 2\end{cases}\]
A specific choice of \(\eta\)-symbols is
\[\eta_{a}(\mathbf{g},\mathbf{h})=\begin{cases}e^{i\frac{2\pi}{N}a_{m}a_{e}}&g_{1}+g_{2}\equiv h_{1}+h_{2}\equiv 1\mod 2\\ 1&\text{otherwise}\end{cases}\]
For \(N=3,4\), this is the full topological symmetry group.

To determine the anyon permutation patterns of \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\) and \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\), we just need to specify how the generators of the symmetry groups permute anyons. Because \(\mathcal{T}^{2}=\mathbf{1}\), \(\mathcal{T}\) can act on anyons in two ways: either \(\mathcal{T}\colon(a_{e},a_{m})\to([-a_{e}]_{N},a_{m})\), or \(\mathcal{T}\colon(a_{e},a_{m})\to(a_{e},[-a_{m}]_{N})\). Because these two cases are related by relabeling anyons using the electric-magnetic duality \(S\), we can specialize to the case \(\mathcal{T}\colon(a_{e},a_{m})\to(a_{e},[-a_{m}]_{N})\), and we only need to consider how \(p6m\) or \(p4m\) permutes anyons. For \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\) there are four possible anyon permutation patterns, while for \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\) there are eight. The corresponding classification of symmetry fractionalization classes and the generators for \(N=3,4\) are listed in Tables 14 and 15, respectively. Specifically, since the symmetries cannot permute \(e\) and \(m\), \(H^{2}(G,\mathbb{Z}_{N}\times\mathbb{Z}_{N})\) simply becomes the direct sum of two \(H^{2}(G,\mathbb{Z}_{N})\) pieces, with the actions on the two \(\mathbb{Z}_{N}\) pieces corresponding to the symmetry actions on \(e\) and \(m\), respectively. As discussed at the beginning of the appendix, \(H^{2}(G,\mathbb{Z}_{N})\) can all be obtained from the \(\mathbb{Z}\) cohomology or \(\mathbb{Z}_{2}\) cohomology of the symmetry groups. In particular, to obtain the full data of the symmetry fractionalization classes, we need the cohomology and representative cochains of \(H^{2}(p6m,\mathbb{Z})\) and \(H^{2}(p4m,\mathbb{Z})\) with all possible actions on \(\mathbb{Z}\), which we present here for completeness.

For \(p6m\), we have

1. Trivial action on \(\mathbb{Z}\).
\[H^{2}(p6m,\mathbb{Z})=(\mathbb{Z}_{2})^{2}\,.\]
We denote the generators of the two \(\mathbb{Z}_{2}\) pieces by \(\beta^{(1)}(A_{c})\) and \(\beta^{(1)}(A_{m})\), respectively, which have representative cochains
\[\beta^{(1)}(A_{c})(g_{1},g_{2})=\frac{[c_{1}]_{2}+[c_{2}]_{2}-[c_{1}+c_{2}]_{2}}{2}\]
\[\beta^{(1)}(A_{m})(g_{1},g_{2})=\frac{[m_{1}]_{2}+[m_{2}]_{2}-[m_{1}+m_{2}]_{2}}{2}\]
2. \(M\) acts nontrivially on \(\mathbb{Z}\).
\[H^{2}(p6m,\mathbb{Z})=\mathbb{Z}\oplus\mathbb{Z}_{6}\,.\]
We denote the generators of the \(\mathbb{Z}\) piece and the \(\mathbb{Z}_{6}\) piece by \(\mathscr{B}^{(2)}_{xy}\) and \(\mathscr{B}^{(2)}_{c^{2}}\), respectively, which have representative cochains
\[\begin{split}\mathscr{B}^{(2)}_{xy}(g_{1},g_{2})&=P_{60}(c_{1})\left[P_{c}(m_{1})y_{1}x_{2}+m_{1}y_{2}(x_{2}+y_{1})\right]\\&\quad+P_{61}(c_{1})\left[P_{c}(m_{1})\left(\frac{x_{2}(x_{2}-1)}{2}+y_{1}x_{2}-y_{2}(x_{2}+y_{1})\right)+m_{1}\left(\frac{y_{2}(y_{2}-1)}{2}+y_{1}(-x_{2}+y_{2})\right)\right]\\&\quad+P_{62}(c_{1})\left[P_{c}(m_{1})\left(\frac{y_{2}(y_{2}+1)}{2}-x_{2}-y_{2}(x_{2}+y_{1})\right)+m_{1}\left(\frac{x_{2}(x_{2}+1)}{2}-y_{2}-y_{1}x_{2}\right)\right]\\&\quad+P_{63}(c_{1})\left[P_{c}(m_{1})\left(-x_{2}+y_{2}-y_{1}x_{2}\right)+m_{1}\left(x_{2}-y_{2}+y_{2}(x_{2}-y_{1})\right)\right]\\&\quad+P_{64}(c_{1})\left[P_{c}(m_{1})\left(\frac{x_{2}(x_{2}-1)}{2}+y_{2}-y_{1}x_{2}-y_{2}(x_{2}-y_{1})\right)+m_{1}\left(\frac{y_{2}(y_{2}-1)}{2}+x_{2}+y_{1}(x_{2}-y_{2})\right)\right]\\&\quad+P_{65}(c_{1})\left[P_{c}(m_{1})\left(\frac{y_{2}(y_{2}+1)}{2}-y_{2}(x_{2}-y_{1})\right)+m_{1}\left(\frac{x_{2}(x_{2}+1)}{2}+y_{1}x_{2}\right)\right]\end{split}\]
\[\mathscr{B}^{(2)}_{c^{2}}(g_{1},g_{2})=\frac{[c_{1}]_{6}+(-1)^{m_{1}}[c_{2}]_{6}-[c_{1}+(-1)^{m_{1}}c_{2}]_{6}}{6}\]
Note that the \(\mathbb{Z}_{2}\) reduction of \(\mathscr{B}^{(2)}_{c^{2}}\) is actually \(A_{c}^{2}+A_{c}A_{m}\).
3. \(C_{6}\) acts nontrivially on \(\mathbb{Z}\).
\[H^{2}(p6m,\mathbb{Z})=\mathbb{Z}_{2}\,.\]
We denote the generator by \(\beta^{(3)}(A_{m})\), which has a representative cochain
\[\beta^{(3)}(A_{m})(g_{1},g_{2})=\frac{[m_{1}]_{2}+(-1)^{c_{1}}[m_{2}]_{2}-[m_{1}+m_{2}]_{2}}{2}\]
4. Both \(C_{6}\) and \(M\) act nontrivially on \(\mathbb{Z}\).
\[H^{2}(p6m,\mathbb{Z})=\mathbb{Z}_{2}\,.\]
We denote the generator by \(\beta^{(4)}(A_{m})\), which has a representative cochain
\[\beta^{(4)}(A_{m})(g_{1},g_{2})=\frac{[m_{1}]_{2}+(-1)^{c_{1}+m_{1}}[m_{2}]_{2}-[m_{1}+m_{2}]_{2}}{2}\]

Table: Classification of the symmetry fractionalization classes of the \(\mathbb{Z}_{N}\) topological orders, with columns Symmetry Group, Action, \(H^{2}(G,\mathcal{A})\), Realizations and Generators.

For \(p4m\), we have

1. Trivial action on \(\mathbb{Z}\).
\[H^{2}(p4m,\mathbb{Z})=(\mathbb{Z}_{2})^{3}\,.\]
We denote the generators of the three \(\mathbb{Z}_{2}\) pieces by \(\beta^{(1)}(A_{x+y})\), \(\beta^{(1)}(A_{c})\) and \(\beta^{(1)}(A_{m})\), respectively, which have representative cochains
\[\beta^{(1)}(A_{x+y})(g_{1},g_{2})=\frac{[x_{1}+y_{1}]_{2}+[x_{2}+y_{2}]_{2}-[x_{1}+\Delta x+y_{1}+\Delta y]_{2}}{2}\]
\[\beta^{(1)}(A_{c})(g_{1},g_{2})=\frac{[c_{1}]_{2}+[c_{2}]_{2}-[c_{1}+(-1)^{m_{1}}c_{2}]_{2}}{2}\]
\[\beta^{(1)}(A_{m})(g_{1},g_{2})=\frac{[m_{1}]_{2}+[m_{2}]_{2}-[m_{1}+m_{2}]_{2}}{2}\]
2. \(M\) acts nontrivially on \(\mathbb{Z}\).
\[H^{2}(p4m,\mathbb{Z})=\mathbb{Z}\oplus\mathbb{Z}_{4}\oplus\mathbb{Z}_{2}\,.\]
We denote the generators of the \(\mathbb{Z}\), \(\mathbb{Z}_{4}\) and \(\mathbb{Z}_{2}\) pieces by \(\mathscr{B}^{(2)}_{xy}\), \(\mathscr{B}^{(2)}_{c^{2}}\) and \(\beta^{(2)}(A_{x+y})\), respectively, which have representative cochains
\[\mathscr{B}^{(2)}_{xy}(g_{1},g_{2})=P_{40}(c_{1})(-1)^{m_{1}}y_{1}x_{2}-P_{41}(c_{1})y_{2}(y_{1}+(-1)^{m_{1}}x_{2})-P_{42}(c_{1})(-1)^{m_{1}}y_{1}x_{2}+P_{43}(c_{1})y_{2}(y_{1}-(-1)^{m_{1}}x_{2})\]
\[\mathscr{B}^{(2)}_{c^{2}}(g_{1},g_{2})=\frac{[c_{1}]_{4}+(-1)^{m_{1}}[c_{2}]_{4}-[c_{1}+(-1)^{m_{1}}c_{2}]_{4}}{4}\]
\[\beta^{(2)}(A_{x+y})(g_{1},g_{2})=\frac{[x_{1}+y_{1}]_{2}+(-1)^{m_{1}}[x_{2}+y_{2}]_{2}-[x_{1}+\Delta x+y_{1}+\Delta y]_{2}}{2}\]
3. \(C_{4}\) acts nontrivially on \(\mathbb{Z}\).
\[H^{2}(p4m,\mathbb{Z})=(\mathbb{Z}_{2})^{2}\,.\]
We denote the generators of the two \(\mathbb{Z}_{2}\) pieces by \(\beta^{(3)}(A_{x+y})\) and \(\beta^{(3)}(A_{m})\), respectively, which have representative cochains
\[\beta^{(3)}(A_{x+y})(g_{1},g_{2})=\frac{[x_{1}+y_{1}]_{2}+(-1)^{c_{1}}[x_{2}+y_{2}]_{2}-[x_{1}+\Delta x+y_{1}+\Delta y]_{2}}{2}\]
\[\beta^{(3)}(A_{m})(g_{1},g_{2})=\frac{[m_{1}]_{2}+(-1)^{c_{1}}[m_{2}]_{2}-[m_{1}+m_{2}]_{2}}{2}\]
4. Both \(C_{4}\) and \(M\) act nontrivially on \(\mathbb{Z}\).
\[H^{2}(p4m,\mathbb{Z})=(\mathbb{Z}_{2})^{2}\,.\]
We denote the generators of the two \(\mathbb{Z}_{2}\) pieces by \(\beta^{(4)}(A_{x+y})\) and \(\beta^{(4)}(A_{m})\), respectively, which have representative cochains
\[\beta^{(4)}(A_{x+y})(g_{1},g_{2})=\frac{[x_{1}+y_{1}]_{2}+(-1)^{c_{1}+m_{1}}[x_{2}+y_{2}]_{2}-[x_{1}+\Delta x+y_{1}+\Delta y]_{2}}{2}\]
\[\beta^{(4)}(A_{m})(g_{1},g_{2})=\frac{[m_{1}]_{2}+(-1)^{c_{1}+m_{1}}[m_{2}]_{2}-[m_{1}+m_{2}]_{2}}{2}\]
5. \(T_{1,2}\) acts nontrivially on \(\mathbb{Z}\).
\[H^{2}(p4m,\mathbb{Z})=(\mathbb{Z}_{2})^{2}\,.\]
We denote the generators of the two \(\mathbb{Z}_{2}\) pieces by \(\beta^{(5)}(A_{c})\) and \(\beta^{(5)}(A_{m})\), respectively, which have representative cochains
\[\beta^{(5)}(A_{c})(g_{1},g_{2})=\frac{[c_{1}]_{2}+(-1)^{x_{1}+y_{1}}[c_{2}]_{2}-[c_{1}+(-1)^{m_{1}}c_{2}]_{2}}{2}\]
\[\beta^{(5)}(A_{m})(g_{1},g_{2})=\frac{[m_{1}]_{2}+(-1)^{x_{1}+y_{1}}[m_{2}]_{2}-[m_{1}+m_{2}]_{2}}{2}\]
6. Both \(T_{1,2}\) and \(M\) act nontrivially on \(\mathbb{Z}\).
\[H^{2}(p4m,\mathbb{Z})=\mathbb{Z}_{4}\oplus\mathbb{Z}_{2}\,.\]
We denote the generators of the \(\mathbb{Z}_{4}\) and \(\mathbb{Z}_{2}\) pieces by \(\mathscr{B}^{(6)}_{xy}\) and \(\beta^{(6)}(A_{m})\), respectively, which have representative cochains
\[\mathscr{B}^{(6)}_{xy}(g_{1},g_{2})=(-1)^{\tilde{x}_{1}}\left(P_{c}(c_{1})P(\tilde{y}_{1})P(\tilde{x}_{2})+P(c_{1})P(\tilde{y}_{1}+\tilde{x}_{2})P(\tilde{y}_{2})\right)\]
with \(\tilde{x}=x+P_{41}(c)+P_{c}(m)P_{42}(c)+P(m)P_{40}(c)\) and \(\tilde{y}=y+P_{42}(c)+P_{c}(m)P_{43}(c)+P(m)P_{41}(c)\), and
\[\beta^{(6)}(A_{m})(g_{1},g_{2})=\frac{[m_{1}]_{2}+(-1)^{x_{1}+y_{1}+m_{1}}[m_{2}]_{2}-[m_{1}+m_{2}]_{2}}{2}\]
Note that the \(\mathbb{Z}_{2}\) reduction of \(\mathscr{B}^{(6)}_{xy}\) is actually \(B_{xy}+B_{c^{2}}+A_{x+y}(A_{x+y}+A_{m})\).
7. Both \(T_{1,2}\) and \(C_{4}\) act nontrivially on \(\mathbb{Z}\).
\[H^{2}(p4m,\mathbb{Z})=\mathbb{Z}_{4}\oplus\mathbb{Z}_{2}\,.\]
We denote the generators of the \(\mathbb{Z}_{4}\) and \(\mathbb{Z}_{2}\) pieces by \(\mathscr{B}^{(7)}_{xy}\) and \(\beta^{(7)}(A_{m})\), respectively, which have representative cochains
\[\mathscr{B}^{(7)}_{xy}(g_{1},g_{2})=(-1)^{x_{1}}\left(P_{c}(c_{1})P(y_{1})P(x_{2})+P(c_{1})P(y_{1}+x_{2})P(y_{2})\right)\]
\[\beta^{(7)}(A_{m})(g_{1},g_{2})=\frac{[m_{1}]_{2}+(-1)^{x_{1}+y_{1}+c_{1}}[m_{2}]_{2}-[m_{1}+m_{2}]_{2}}{2}\]
8. All of \(T_{1,2}\), \(C_{4}\) and \(M\) act nontrivially on \(\mathbb{Z}\).
\[H^{2}(p4m,\mathbb{Z})=(\mathbb{Z}_{2})^{2}\,.\]
We denote the generators of the two \(\mathbb{Z}_{2}\) pieces by \(\beta^{(8)}(A_{c})\) and \(\beta^{(8)}(A_{m})\), respectively, which have representative cochains
\[\beta^{(8)}(A_{c})(g_{1},g_{2})=\frac{[c_{1}]_{2}+(-1)^{x_{1}+y_{1}+c_{1}+m_{1}}[c_{2}]_{2}-[c_{1}+(-1)^{m_{1}}c_{2}]_{2}}{2}\]
\[\beta^{(8)}(A_{m})(g_{1},g_{2})=\frac{[m_{1}]_{2}+(-1)^{x_{1}+y_{1}+c_{1}+m_{1}}[m_{2}]_{2}-[m_{1}+m_{2}]_{2}}{2}\]

### \(\mathrm{U}(1)_{2}\times\mathrm{U}(1)_{-2}\) (Double Semion)

In this case, we have \(\mathcal{A}=(\mathbb{Z}_{2})^{2}\). The topological symmetry group is \(\mathbb{Z}_{2}^{T}\), generated by \(\tilde{S}\) exchanging \(s\) and \(\bar{s}\), i.e.,
\[\tilde{S}\colon(a_{s},a_{\bar{s}})\to(a_{\bar{s}},a_{s})\,.\]
We can choose the \(U\)-symbols and a set of \(\eta\)-symbols all equal to \(1\). Therefore, the anyon permutation patterns are completely fixed. The classification of symmetry fractionalization patterns and the generators are listed in Table 14. It turns out that all symmetry fractionalization classes lead to anomaly-free states.

### \(\mathrm{U}(1)_{4}\times\mathrm{U}(1)_{-4}\)

In this case, we have \(\mathcal{A}=(\mathbb{Z}_{4})^{2}\). For \(N=2\) (i.e., \(\mathrm{U}(1)_{2N}\times\mathrm{U}(1)_{-2N}\) with \(2N=4\)), the topological symmetry is \(\mathbb{Z}_{4}^{T}\rtimes\mathbb{Z}_{2}^{T}\), generated by an order-2 anti-unitary symmetry \(\tilde{S}\) which exchanges \(s\) and \(\bar{s}\), i.e.,
\[\tilde{S}\colon(a_{s},a_{\bar{s}})\to(a_{\bar{s}},a_{s})\,,\]
and another order-4 anti-unitary symmetry \(T\), which permutes anyons in the following way:
\[T\colon(a_{s},a_{\bar{s}})\to(a_{\bar{s}},[-a_{s}]_{2N})\,.\]
The two generators satisfy the relations
\[\tilde{S}^{2}=\mathbf{1}\,,\quad T^{4}=\mathbf{1}\,,\quad\tilde{S}T\tilde{S}=T^{-1}\,.\]
An element in \(\mathbb{Z}_{4}^{T}\rtimes\mathbb{Z}_{2}^{T}\) can be written as \(T^{g_{1}}\tilde{S}^{g_{2}}\), with \(g_{1}\in\{0,\dots,3\}\) and \(g_{2}\in\{0,1\}\). To define the \(U\)-symbols, we first define the following function:
\[\tilde{U}(a_{s},b_{s})=\begin{cases}(-1)^{a_{s}}&b_{s}\neq 0\\ 1&b_{s}=0\end{cases}\]
Given an element \(\mathbf{g}\in\mathbb{Z}_{4}^{T}\rtimes\mathbb{Z}_{2}^{T}\), the \(U\)-symbols can be chosen such that
\[U_{\mathbf{g}}(a,b;c)=\begin{cases}1&g_{1}=0\\ \tilde{U}(a_{\bar{s}},b_{\bar{s}})&g_{1}=1\\ \tilde{U}(a_{s},b_{s})\tilde{U}(a_{\bar{s}},b_{\bar{s}})&g_{1}=2\\ \tilde{U}(a_{s},b_{s})&g_{1}=3\end{cases}\]
and a set of \(\eta\)-symbols can be chosen to be all identity. Because \(\mathcal{T}^{2}=\mathbf{1}\), \(\mathcal{T}\) can act on anyons in two ways: either \(\mathcal{T}\colon(a_{s},a_{\bar{s}})\to(a_{\bar{s}},a_{s})\), or \(\mathcal{T}\colon(a_{s},a_{\bar{s}})\to([-a_{\bar{s}}]_{4},[-a_{s}]_{4})\).
Because these two cases are related by relabeling anyons using \(T\tilde{S}\), we can specialize to the case \(\mathcal{T}\colon(a_{s},a_{\bar{s}})\to(a_{\bar{s}},a_{s})\), and we only need to consider how \(p6m\) or \(p4m\) permutes anyons. For \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\) there are four possible anyon permutation patterns, while for \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\) there are eight. The corresponding classification of symmetry fractionalization classes and the generators are listed in Table 13. Specifically, since \(\mathcal{T}\) permutes \(s\) and \(\bar{s}\), \(H^{2}(p6m\times SO(3)\times\mathbb{Z}_{2}^{T},\mathbb{Z}_{4}\oplus\mathbb{Z}_{4})\) or \(H^{2}(p4m\times SO(3)\times\mathbb{Z}_{2}^{T},\mathbb{Z}_{4}\oplus\mathbb{Z}_{4})\) is isomorphic to \(H^{2}(p6m\times SO(3),\mathbb{Z}_{4})\) or \(H^{2}(p4m\times SO(3),\mathbb{Z}_{4})\), respectively, with the \(\mathbb{Z}_{4}\) corresponding to the diagonal \((0,0),(1,1),(2,2),(3,3)\) anyons. As discussed at the beginning of the appendix, \(H^{2}(G,\mathbb{Z}_{4})\) can all be obtained from the \(\mathbb{Z}\) cohomology or \(\mathbb{Z}_{2}\) cohomology of the symmetry groups, and we list the \(\mathbb{Z}\) cohomology of \(p6m\) and \(p4m\) in Appendix C.4. It turns out that all symmetry fractionalization classes lead to anomaly-free states.

### Anomaly indicators

In this appendix, we first write down the anomaly indicators for the \(\mathbb{Z}_{2}^{T}\), \(\mathbb{Z}_{2}^{T}\times\mathbb{Z}_{2}^{T}\), \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) and \(SO(3)\times\mathbb{Z}_{2}^{T}\) symmetries, where \(\mathbb{Z}_{2}^{T}\) denotes an anti-unitary order-2 symmetry group. These anomaly indicators are all derived in Ref. [19] (see also Refs. [69; 101] for the \(\mathbb{Z}_{2}^{T}\) symmetry). As explained in detail in Ref. [19] (see Sec. VI therein), the anomaly indicators of many other groups, including \(p6\times SO(3)\), \(p4\times SO(3)\), \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\), \(p6m\times\mathbb{Z}_{2}^{T}\) and \(p4m\times\mathbb{Z}_{2}^{T}\), can be obtained by restricting these groups to some of their \(\mathbb{Z}_{2}^{T}\), \(\mathbb{Z}_{2}^{T}\times\mathbb{Z}_{2}^{T}\) and \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) subgroups. So we can use the known anomaly indicators in Ref. [19] to write down the anomaly indicators for all symmetry groups appearing in this paper, and identify the anomaly accordingly. These anomaly indicators are also recorded in this appendix. Using the expressions of the anomaly of each lattice homotopy class for these symmetries in Ref. [6], which are written in terms of group cohomology, we can further obtain the values of the anomaly indicators for each lattice homotopy class of these symmetries.

* \(\mathbb{Z}_{2}^{T}\)

The anomalies for the group \(\mathbb{Z}_{2}^{T}\) in (2+1)-d are classified by \(\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}\), and the two \(\mathbb{Z}_{2}\) factors correspond to the "in-cohomology" and "beyond-cohomology" pieces of the anomaly, respectively. The anomaly indicator for the beyond-cohomology piece is given by
\[\mathcal{I}_{0}=\frac{1}{D}\sum_{a}d_{a}^{2}\theta_{a}\,.\]
This is also related to the chiral central charge \(c_{-}\) by the formula \(\mathcal{I}_{0}=\exp\left(2\pi i\frac{c_{-}}{8}\right)\).
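As a quick sanity check (our own illustration, not an example from the text), \(\mathcal{I}_{0}\) can be evaluated directly from the anyon data \(\{d_{a},\theta_{a}\}\); for the \(\mathbb{Z}_{2}\) toric code and the double semion theory it equals 1, consistent with \(c_{-}=0\bmod 8\):

```python
import numpy as np

def anomaly_I0(d, theta):
    """I_0 = (1/D) * sum_a d_a^2 * theta_a, with D the total quantum dimension."""
    d = np.asarray(d, dtype=float)
    theta = np.asarray(theta, dtype=complex)
    D = np.sqrt(np.sum(d**2))
    return np.sum(d**2 * theta) / D

# Z_2 toric code: anyons {1, e, m, f}, all d_a = 1, theta_f = -1.
print(anomaly_I0([1, 1, 1, 1], [1, 1, 1, -1]))    # (1+1+1-1)/2 = 1
# Double semion: anyons {1, s, sbar, s*sbar}, theta_s = i, theta_sbar = -i.
print(anomaly_I0([1, 1, 1, 1], [1, 1j, -1j, 1]))  # = 1
```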
The anomaly indicator for the in-cohomology piece is given by
\[\mathcal{I}_{1}(\mathcal{T})=\frac{1}{D}\sum_{a\,:\,{}^{\mathcal{T}}a=a}d_{a}\theta_{a}\,\eta_{a}(\mathcal{T},\mathcal{T})\,,\]
where \(\mathcal{T}\) is the generator of the \(\mathbb{Z}_{2}^{T}\) symmetry and the sum runs over anyons invariant under \(\mathcal{T}\).

* \(\mathbb{Z}_{2}^{T}\times\mathbb{Z}_{2}^{T}\)

The anomalies for the group \(\mathbb{Z}_{2}^{T}\times\mathbb{Z}_{2}^{T}\) in (2+1)-d are classified by \((\mathbb{Z}_{2})^{4}\). Suppose the two anti-unitary generators of \(\mathbb{Z}_{2}^{T}\times\mathbb{Z}_{2}^{T}\) are \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\). The four anomaly indicators can be given by \(\mathcal{I}_{0}\), \(\mathcal{I}_{1}(\mathcal{T}_{1})\), \(\mathcal{I}_{1}(\mathcal{T}_{2})\) and \(\mathcal{I}_{2}(\mathcal{T}_{1},\mathcal{T}_{2})\), where
\[\mathcal{I}_{2}(\mathcal{T}_{1},\mathcal{T}_{2})=\frac{1}{D^{3}}\sum d_{c}d_{v}\,\frac{\theta_{v}}{\theta_{a}\theta_{b}}\,(\cdots)\;\eta_{a}(\mathcal{T}_{1},\mathcal{T}_{1})\,\eta_{b}(\mathcal{T}_{2},\mathcal{T}_{2})\,\frac{\eta_{c}(\mathcal{T}_{2},\mathcal{T}_{1})}{\eta_{c}(\mathcal{T}_{1},\mathcal{T}_{2})}\,,\]
with the sum running over anyons \(a,b,c,x,y,u,v\) (and the associated vertex labels) subject to fusion constraints such as \({}^{\mathcal{T}_{1}}a\times{}^{\mathcal{T}_{2}}c\times c\to a\) and \({}^{\mathcal{T}_{1}}c\times c\times b\to{}^{\mathcal{T}_{2}}b\); the omitted factors \((\cdots)\) are products of \(F\)-, \(R\)- and \(U\)-symbols whose explicit form can be found in Ref. [19].

* \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\)

The anomalies for the group \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) in (2+1)-d are classified by \((\mathbb{Z}_{2})^{2}\). Suppose the two generators of \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) are \(C_{1}\) and \(C_{2}\).
The two anomaly indicators can be given by \(\mathcal{I}_{3}(C_{1},C_{2})\) and \(\mathcal{I}_{3}(C_{2},C_{1})\), where
\[\begin{split}\mathcal{I}_{3}(C_{1},C_{2})=\frac{1}{D^{2}}\sum_{\begin{subarray}{c}a,b,x,u;\;\mu,\nu,\bar{\mu},\bar{\nu},\rho,\sigma,\alpha\\ {}^{C_{1}}a=a,\;a\times b\times{}^{C_{1}}b\to{}^{C_{2}}a\end{subarray}}&d_{b}\frac{\theta_{x}}{\theta_{a}}\left(R_{u}^{b,{}^{C_{1}}b}\right)_{\rho\sigma}\left(F_{{}^{C_{2}}a}^{a,b,{}^{C_{1}}b}\right)^{*}_{(x,\bar{\mu},\bar{\nu})(u,\sigma,\alpha)}\left(F_{{}^{C_{2}}a}^{a,{}^{C_{1}}b,b}\right)_{({}^{C_{1}}x,\mu,\nu)(u,\rho,\alpha)}\\&\times U_{C_{1}}^{-1}(a,b;x)_{\bar{\mu}\mu}\,U_{C_{1}}^{-1}(x,{}^{C_{1}}b;{}^{C_{2}}a)_{\bar{\nu}\nu}\times\frac{1}{\eta_{b}(C_{1},C_{1})}\frac{\eta_{a}(C_{2},C_{1})}{\eta_{a}(C_{1},C_{2})}\end{split}\]

* \(SO(3)\times\mathbb{Z}_{2}^{T}\)

The anomalies for the group \(SO(3)\times\mathbb{Z}_{2}^{T}\) in (2+1)-d are classified by \((\mathbb{Z}_{2})^{4}\). Suppose that the generator of \(\mathbb{Z}_{2}^{T}\) is \(\mathcal{T}\), and let \(U_{\pi}\) be a \(\pi\) rotation in \(SO(3)\). The four anomaly indicators can be given by \(\mathcal{I}_{0}\), \(\mathcal{I}_{1}(\mathcal{T})\), \(\mathcal{I}_{1}(\mathcal{T}U_{\pi})\), and
\[\mathcal{I}_{4}=\frac{1}{D}\sum_{a}d_{a}^{2}\theta_{a}e^{i2\pi q_{a}}\,,\]
where \(q_{a}\in\{0,\frac{1}{2}\}\) denotes whether anyon \(a\) carries a linear representation (\(q_{a}=0\)) or a spinor representation (\(q_{a}=\frac{1}{2}\)) under the \(SO(3)\) symmetry.

From these known anomaly indicators, we can construct the anomaly indicators of the symmetry groups appearing in this paper by restricting to subgroups. We need to find a complete list of subgroups such that every nontrivial anomaly remains nontrivial after restriction to at least one such subgroup. If this condition is satisfied, we indeed find a complete set of anomaly indicators.

* \(p6\times SO(3)\)

The anomalies for the group \(p6\times SO(3)\) in (2+1)-d are classified by \((\mathbb{Z}_{2})^{2}\). The two anomaly indicators can be given by
\[\mathfrak{l}_{1}=\mathcal{I}_{3}(C_{2}U_{\pi},C_{2}U_{\pi}^{\prime})\,,\quad\mathfrak{l}_{2}=\mathcal{I}_{3}(T_{1}C_{2}U_{\pi},T_{1}C_{2}U_{\pi}^{\prime})\,.\]
The values of these anomaly indicators in each lattice homotopy class with \(p6\times SO(3)\) symmetry are given in Table 1.

* \(p4\times SO(3)\)

The anomalies for the group \(p4\times SO(3)\) in (2+1)-d are classified by \((\mathbb{Z}_{2})^{3}\). The three anomaly indicators can be given by
\[\mathfrak{l}_{1}=\mathcal{I}_{3}(C_{2}U_{\pi},C_{2}U_{\pi}^{\prime})\,,\quad\mathfrak{l}_{2}=\mathcal{I}_{3}(T_{1}T_{2}C_{2}U_{\pi},T_{1}T_{2}C_{2}U_{\pi}^{\prime})\,,\quad\mathfrak{l}_{3}=\mathcal{I}_{3}(T_{1}C_{2}U_{\pi},T_{1}C_{2}U_{\pi}^{\prime})\,.\]
The values of these anomaly indicators in each lattice homotopy class with \(p4\times SO(3)\) symmetry are given in Table 17.

* \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\) and \(p6m\times\mathbb{Z}_{2}^{T}\)

The anomalies for the group \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\) in (2+1)-d are classified by \((\mathbb{Z}_{2})^{22}\).
The complete list of anomaly indicators can be given by
\[\begin{split}&\mathsf{l}_{0}=\mathcal{I}_{0}\\
&\mathsf{l}_{1}=\mathcal{I}_{1}(\mathcal{T})\quad\mathsf{l}_{2}=\mathcal{I}_{1}(M)\quad\mathsf{l}_{3}=\mathcal{I}_{1}(C_{2}\mathcal{T})\quad\mathsf{l}_{4}=\mathcal{I}_{1}(C_{2}M)\\
&\mathsf{l}_{5}=\mathcal{I}_{2}(\mathcal{T},C_{2}\mathcal{T})\quad\mathsf{l}_{6}=\mathcal{I}_{2}(\mathcal{T},M)\quad\mathsf{l}_{7}=\mathcal{I}_{2}(C_{2}\mathcal{T},M)\quad\mathsf{l}_{8}=\mathcal{I}_{2}(C_{2}\mathcal{T},C_{2}M)\quad\mathsf{l}_{9}=\mathcal{I}_{2}(M,C_{2}M)\\
&\mathsf{l}_{10}=\mathcal{I}_{1}(T_{1}T_{2}C_{2}\mathcal{T})\quad\mathsf{l}_{11}=\mathcal{I}_{2}(M,T_{1}T_{2}C_{2}\mathcal{T})\quad\mathsf{l}_{12}=\mathcal{I}_{2}(\mathcal{T},T_{1}T_{2}C_{2}\mathcal{T})\quad\mathsf{l}_{13}=\mathcal{I}_{2}(M,T_{1}T_{2}C_{2}M)\\
&\mathsf{l}_{14}=\mathcal{I}_{1}(\mathcal{T}U_{\pi})\quad\mathsf{l}_{15}=\mathcal{I}_{1}(MU_{\pi})\quad\mathsf{l}_{16}=\mathcal{I}_{1}(C_{2}\mathcal{T}U_{\pi})\quad\mathsf{l}_{17}=\mathcal{I}_{1}(C_{2}MU_{\pi})\\
&\mathsf{l}_{18}=\mathcal{I}_{1}(T_{1}T_{2}C_{2}\mathcal{T}U_{\pi})\quad\mathsf{l}_{19}=\mathcal{I}_{4}\quad\mathsf{l}_{20}=\mathcal{I}_{3}(C_{2}U_{\pi},U_{\pi}^{\prime})\quad\mathsf{l}_{21}=\mathcal{I}_{3}(M\mathcal{T}U_{\pi},U_{\pi}^{\prime})\end{split}\]
The values of these anomaly indicators in each lattice homotopy class with \(p6m\times SO(3)\times\mathbb{Z}_{2}^{T}\) symmetry are given in Table 18. The anomalies for the group \(p6m\times\mathbb{Z}_{2}^{T}\) in (2+1)-d are classified by \((\mathbb{Z}_{2})^{14}\). The complete set of anomaly indicators for \(p6m\times\mathbb{Z}_{2}^{T}\) can be obtained by simply ignoring all anomaly indicators involving \(U_{\pi}\), i.e., this set consists of \(\mathsf{l}_{0}\) to \(\mathsf{l}_{13}\). The values of these anomaly indicators in each lattice homotopy class with \(p6m\times\mathbb{Z}_{2}^{T}\) symmetry are also given in Table 18 (after removing \(\mathsf{l}_{14}\) to \(\mathsf{l}_{21}\)).

* \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\) and \(p4m\times\mathbb{Z}_{2}^{T}\)

The anomalies for the group \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\) in (2+1)-d are classified by \((\mathbb{Z}_{2})^{31}\).
The complete list of anomaly indicators can be given by
\[\begin{split}&\mathsf{l}_{0}=\mathcal{I}_{0}\\
&\mathsf{l}_{1}=\mathcal{I}_{1}(\mathcal{T})\quad\mathsf{l}_{2}=\mathcal{I}_{1}(M)\quad\mathsf{l}_{3}=\mathcal{I}_{1}(C_{2}\mathcal{T})\quad\mathsf{l}_{4}=\mathcal{I}_{1}(C_{4}M)\\
&\mathsf{l}_{5}=\mathcal{I}_{2}(\mathcal{T},C_{2}\mathcal{T})\quad\mathsf{l}_{6}=\mathcal{I}_{2}(\mathcal{T},M)\quad\mathsf{l}_{7}=\mathcal{I}_{2}(\mathcal{T},C_{4}M)\quad\mathsf{l}_{8}=\mathcal{I}_{2}(C_{2}\mathcal{T},M)\quad\mathsf{l}_{9}=\mathcal{I}_{2}(C_{2}\mathcal{T},C_{4}M)\\
&\mathsf{l}_{10}=\mathcal{I}_{1}(T_{1}M)\quad\mathsf{l}_{11}=\mathcal{I}_{1}(T_{1}C_{2}\mathcal{T})\quad\mathsf{l}_{12}=\mathcal{I}_{1}(T_{1}T_{2}C_{2}\mathcal{T})\\
&\mathsf{l}_{13}=\mathcal{I}_{2}(\mathcal{T},T_{1}M)\quad\mathsf{l}_{14}=\mathcal{I}_{2}(\mathcal{T},T_{1}C_{2}\mathcal{T})\quad\mathsf{l}_{15}=\mathcal{I}_{2}(\mathcal{T},T_{1}T_{2}C_{2}\mathcal{T})\\
&\mathsf{l}_{16}=\mathcal{I}_{2}(T_{2}C_{2}\mathcal{T},M)\quad\mathsf{l}_{17}=\mathcal{I}_{2}(M,T_{2}C_{2}M)\quad\mathsf{l}_{18}=\mathcal{I}_{2}(T_{1}T_{2}^{-1}C_{2}\mathcal{T},C_{4}M)\quad\mathsf{l}_{19}=\mathcal{I}_{2}(T_{1}T_{2}C_{2}\mathcal{T},T_{1}M)\\
&\mathsf{l}_{20}=\mathcal{I}_{4}\quad\mathsf{l}_{21}=\mathcal{I}_{1}(\mathcal{T}U_{\pi})\quad\mathsf{l}_{22}=\mathcal{I}_{1}(MU_{\pi})\quad\mathsf{l}_{23}=\mathcal{I}_{1}(C_{2}\mathcal{T}U_{\pi})\quad\mathsf{l}_{24}=\mathcal{I}_{1}(C_{4}MU_{\pi})\\
&\mathsf{l}_{25}=\mathcal{I}_{1}(T_{1}MU_{\pi})\quad\mathsf{l}_{26}=\mathcal{I}_{1}(T_{1}C_{2}\mathcal{T}U_{\pi})\quad\mathsf{l}_{27}=\mathcal{I}_{1}(T_{1}T_{2}C_{2}\mathcal{T}U_{\pi})\\
&\mathsf{l}_{28}=\mathcal{I}_{3}(C_{4}M\mathcal{T}U_{\pi},U_{\pi}^{\prime})\quad\mathsf{l}_{29}=\mathcal{I}_{3}(M\mathcal{T}U_{\pi},U_{\pi}^{\prime})\quad\mathsf{l}_{30}=\mathcal{I}_{3}(T_{1}M\mathcal{T}U_{\pi},U_{\pi}^{\prime})\end{split}\]
The values of these anomaly indicators in each lattice homotopy class with \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\) symmetry are given in Table 19. The anomalies for the group \(p4m\times\mathbb{Z}_{2}^{T}\) in (2+1)-d are classified by \((\mathbb{Z}_{2})^{20}\). The complete set of anomaly indicators for \(p4m\times\mathbb{Z}_{2}^{T}\) can be obtained by simply ignoring all anomaly indicators involving \(U_{\pi}\), i.e., this set consists of \(\mathsf{l}_{0}\) to \(\mathsf{l}_{19}\). The values of these anomaly indicators in each lattice homotopy class with \(p4m\times\mathbb{Z}_{2}^{T}\) symmetry can be obtained from Table 19 (after removing \(\mathsf{l}_{20}\) to \(\mathsf{l}_{30}\)).

### Symmetry fractionalization classes of the "beyond-parton" \(\mathbb{Z}_{2}\) topological quantum spin liquids
In Sec. VIII, we have found 117 different \(p4m\times\mathbb{Z}_{2}^{T}\) symmetry-enriched \(\mathbb{Z}_{2}\) topological quantum spin liquids in lattice homotopy class a, of which 64 were identified using the parton-mean-field approach [66], while the other 53 are beyond the usual parton mean field. It turns out that there is no anyon permutation for any of these 117 states. In this appendix, we present the details of the symmetry fractionalization class of each of these 53 "beyond-parton" states, summarized in Table 15. According to Table 15, without anyon permutation the symmetry fractionalization classes are classified by \((\mathbb{Z}_{2})^{20}\), which can be viewed as 10 different quantum numbers for each of \(e\) and \(m\). These quantum numbers are recorded in Table 15 for each of the 53 states. Their physical meanings are clear. For example, the column for \((C_{2})^{2}\) being 1 (\(-1\)) for an anyon means that this anyon carries a trivial (nontrivial) projective quantum number under \(C_{2}\); roughly speaking, an entry \(-1\) says that \(C_{2}^{2}=-1\) for this anyon. Similarly, \(T_{1}\mathcal{T}T_{1}^{-1}\mathcal{T}^{-1}\) being 1 (\(-1\)) for an anyon means that the translation \(T_{1}\) and time reversal \(\mathcal{T}\) commute (anti-commute) for this anyon. From Table 15 we can see that in all these 53 states, the anyon \(e\) is a Kramers doublet under the time-reversal symmetry, while the anyon \(m\) is a Kramers singlet. Furthermore, for the anyon \(m\) there is always some nontrivial symmetry fractionalization simultaneously involving the lattice and time-reversal symmetries. For example, translation and time reversal may not commute on \(m\). In contrast, in all 64 states identified using the parton-mean-field approach in Ref. [66], \(m\) experiences no symmetry fractionalization that involves both lattice and time-reversal symmetries. Moreover, we remark that for all 117 states, the \(C_{2}\equiv C_{4}^{2}\) symmetry fractionalizes on the \(m\) anyon, i.e., effectively \(C_{2}^{2}=-1\) for \(m\). Usually, the interpretation of this phenomenon is that there is a background \(e\) anyon at each square-lattice site (the \(C_{4}\) center), and the mutual braiding statistics between \(e\) and \(m\) yields \(C_{2}^{2}=-1\). However, for 16 of the 53 "beyond-parton" states, \((T_{1}C_{2})^{2}=(T_{2}C_{2})^{2}=-1\) for \(m\), which seems to suggest that there are also background \(e\) anyons at the 2-fold rotation centers of \(T_{1}C_{2}\) and \(T_{2}C_{2}\), although microscopically there is no spin at those positions. So the analysis based on anomaly matching suggests that the simple picture where the fractionalization of rotational symmetries purely comes from background anyons is actually incomplete.
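The background-anyon mechanism invoked above can be made quantitative with the standard \(\mathbb{Z}_{N}\) mutual braiding phase (a small sketch with our own function names; \(N=2\) is the relevant case here): a unit \(e\) charge sitting at a rotation center contributes the phase \(M_{e,m}=-1\) when \(m\) is carried around it, which is precisely the statement \(C_{2}^{2}=-1\) on \(m\).

```python
import cmath

def mutual_braiding(a, b, N=2):
    """Full mutual statistics M_{a,b} = exp(2*pi*i*(a_e*b_m + a_m*b_e)/N)
    between anyons a = (a_e, a_m) and b = (b_e, b_m) of the Z_N toric code."""
    (a_e, a_m), (b_e, b_m) = a, b
    return cmath.exp(2j * cmath.pi * (a_e * b_m + a_m * b_e) / N)

e, m = (1, 0), (0, 1)
print(mutual_braiding(e, m))  # (-1+0j): braiding m around a background e gives -1
```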
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\hline
 & 0 & a & b & c & a+b & a+c & b+c & a+b+c \\
\hline
\(\mathsf{l}_{3}\) & \(1\) & \(-1\) & \(1\) & \(1\) & \(-1\) & \(-1\) & \(1\) & \(-1\) \\
\(\mathsf{l}_{5}\) & \(1\) & \(-1\) & \(1\) & \(1\) & \(-1\) & \(-1\) & \(1\) & \(-1\) \\
\(\mathsf{l}_{11}\) & \(1\) & \(1\) & \(1\) & \(-1\) & \(1\) & \(-1\) & \(-1\) & \(-1\) \\
\(\mathsf{l}_{12}\) & \(1\) & \(1\) & \(-1\) & \(1\) & \(-1\) & \(1\) & \(-1\) & \(-1\) \\
\(\mathsf{l}_{14}\) & \(1\) & \(1\) & \(1\) & \(-1\) & \(1\) & \(-1\) & \(-1\) & \(-1\) \\
\(\mathsf{l}_{15}\) & \(1\) & \(1\) & \(-1\) & \(1\) & \(-1\) & \(1\) & \(-1\) & \(-1\) \\
\hline
\end{tabular}
\end{table}
Table 15: Values of the anomaly indicators for the 8 lattice homotopy classes with symmetry group \(p4m\times SO(3)\times\mathbb{Z}_{2}^{T}\). The anomaly indicators not listed in the table are all 1 for all lattice homotopy classes.
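Since the six listed indicators already separate all eight classes, Table 15 can be read as a lookup: measuring \((\mathsf{l}_{3},\mathsf{l}_{5},\mathsf{l}_{11},\mathsf{l}_{12},\mathsf{l}_{14},\mathsf{l}_{15})\) for a given state determines its lattice homotopy class. A small sketch of this inversion (our own code; the sign tuples are transcribed from Table 15):

```python
# Columns of Table 15: signs of (l_3, l_5, l_11, l_12, l_14, l_15) per class.
TABLE_15 = {
    "0":     (1, 1, 1, 1, 1, 1),
    "a":     (-1, -1, 1, 1, 1, 1),
    "b":     (1, 1, 1, -1, 1, -1),
    "c":     (1, 1, -1, 1, -1, 1),
    "a+b":   (-1, -1, 1, -1, 1, -1),
    "a+c":   (-1, -1, -1, 1, -1, 1),
    "b+c":   (1, 1, -1, -1, -1, -1),
    "a+b+c": (-1, -1, -1, -1, -1, -1),
}

def lattice_homotopy_class(indicators):
    """Invert Table 15: map measured indicator signs to the unique class."""
    lookup = {signs: cls for cls, signs in TABLE_15.items()}
    return lookup[tuple(indicators)]

print(lattice_homotopy_class((-1, -1, 1, -1, 1, -1)))  # 'a+b'
```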
2309.12111
Passage Summarization with Recurrent Models for Audio-Sheet Music Retrieval
Many applications of cross-modal music retrieval are related to connecting sheet music images to audio recordings. A typical and recent approach to this is to learn, via deep neural networks, a joint embedding space that correlates short fixed-size snippets of audio and sheet music by means of an appropriate similarity structure. However, two challenges that arise out of this strategy are the requirement of strongly aligned data to train the networks, and the inherent discrepancies of musical content between audio and sheet music snippets caused by local and global tempo differences. In this paper, we address these two shortcomings by designing a cross-modal recurrent network that learns joint embeddings that can summarize longer passages of corresponding audio and sheet music. The benefits of our method are that it only requires weakly aligned audio-sheet music pairs, as well as that the recurrent network handles the non-linearities caused by tempo variations between audio and sheet music. We conduct a number of experiments on synthetic and real piano data and scores, showing that our proposed recurrent method leads to more accurate retrieval in all possible configurations.
Luis Carvalho, Gerhard Widmer
2023-09-21T14:30:02Z
http://arxiv.org/abs/2309.12111v1
# Passage Summarization with Recurrent Models for Audio-Sheet Music Retrieval

###### Abstract

Many applications of cross-modal music retrieval are related to connecting sheet music images to audio recordings. A typical and recent approach to this is to learn, via deep neural networks, a joint embedding space that correlates short fixed-size snippets of audio and sheet music by means of an appropriate similarity structure. However, two challenges that arise out of this strategy are the requirement of strongly aligned data to train the networks, and the inherent discrepancies of musical content between audio and sheet music snippets caused by local and global tempo differences. In this paper, we address these two shortcomings by designing a cross-modal recurrent network that learns joint embeddings that can summarize longer passages of corresponding audio and sheet music. The benefits of our method are that it only requires weakly aligned audio-sheet music pairs, as well as that the recurrent network handles the non-linearities caused by tempo variations between audio and sheet music. We conduct a number of experiments on synthetic and real piano data and scores, showing that our proposed recurrent method leads to more accurate retrieval in all possible configurations.

## 1 Introduction

The abundance of music-related content in various digital formats, including studio and live audio recordings, scanned sheet music, and metadata, among others, calls for efficient technologies for cross-linking between documents of different modalities. In this work, we explore a cross-modal task referred to as audio-sheet music passage retrieval. We define it as follows: given an audio fragment as a query, search within an image database and retrieve the corresponding sheet music passage; or vice versa, find the appropriate recording fragment given a query in the form of some snippet of (scanned) sheet music.

A fundamental step in audio-sheet music retrieval concerns defining a suitable shared representation that permits the comparison between items of different modalities in a convenient and effective way. The conventional approaches for linking audio recordings to their respective printed scores are based on handcrafted mid-level representations [1, 2]. These are usually pitch-class profiles, like chroma-based features [3, 4], symbolic fingerprints [5], or the bootleg score [6, 7], which is a coarse mid-level codification of the main note-heads in a sheet music image. However, extracting such representations requires a series of pre-processing stages that are prone to errors, for example optical music recognition on the sheet music side [8, 9, 10] and automatic music transcription on the audio part [11, 12, 13].

A promising approach [14, 15] has been proposed to eliminate these problematic pre-processing steps by learning a shared low-dimensional embedding space directly from audio recordings and printed scores. This is achieved by optimizing a cross-modal convolutional network (CNN) to project short snippets of audio and sheet music onto a latent space, in which the cosine distances between semantically related snippets are minimized, whereas non-related items of either modality are projected far from each other. Then the retrieval procedure is reduced to a nearest-neighbour search in the shared embedding space, which is simple and fast.
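In code, this retrieval step amounts to only a few lines (a schematic sketch, not the authors' implementation):

```python
import numpy as np

def rank_candidates(query_emb, db_embs):
    """Rank database embeddings by cosine distance to one query embedding.
    Assumes all embeddings are L2-normalized, so similarity is a dot product."""
    distances = 1.0 - db_embs @ query_emb   # cosine distance to every candidate
    return np.argsort(distances)            # ranked list, best match first
```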
A first limitation of this strategy relates to its supervised nature: it requires strongly-aligned data in order to generate matching audio-sheet snippet pairs for training, i.e., fine-grained mappings between note onsets and the corresponding note positions in the score. Obtaining such annotations is tedious and time-consuming, and also requires specialized annotators with musical training. As a result, embedding learning approaches have been trained with synthetic data, in which recordings, sheet music images, and their respective alignments are rendered from symbolic scores. This leads to poor generalization in scenarios with real music data, as shown in [16]. Moreover, the snippets in both modalities have to be fixed in size, meaning that the amount of actual musical content in the fragments can vary considerably depending on note durations and the tempo in which the piece is played. For example, a sheet excerpt with longer notes played slowly would correspond to a considerably larger duration in audio than one with short notes and a faster tempo. This leads to generalization problems caused by differences between what the model sees during training and test time; [17] attempted to address this limitation by introducing a soft-attention mechanism to the network.

In this paper we address the two aforementioned limitations by proposing a recurrent cross-modal network that learns compact, fixed-size representations from longer variable-length fragments of audio and sheet music. By removing the fixed-size fragment constraint, we can adjust the lengths of fragments during training so that cross-modal pairs span the same musical content, leading to a more robust representation. Moreover, by operating with longer music passages, it is possible to rely solely on weakly-annotated data for training, since we now require only the starting and ending positions of longer-context music fragments within music documents in order to extract audio-sheet passages for a training set. This is a remarkable advantage compared, for example, to other approaches based on [14], where fine-detailed alignments are indispensable to generate short audio-sheet snippet pairs.

The rest of the paper is structured as follows. In Section 2 we describe the model proposed to learn joint representations from cross-modal passages. Section 3 presents a series of experiments on artificial and real data, and Section 4 summarizes and concludes the work.

## 2 Audio-sheet passage retrieval

For the purposes of this paper, and in order to be able to use our annotated corpora for the experiments, we define a "passage" as the musical content corresponding to one line of sheet music (also known as a "system"). System-level annotations of scores are much easier to come by than note-precise score-recording alignments, making it relatively easy to compile large collections of training data for our approach. Our definition of passages resembles that of "musical themes", which has been used in a cross-modal retrieval scenario with symbolic queries in a number of previous works [18, 19]. To illustrate the temporal discrepancies between passages, we show in Figure 1 the distribution of the time durations of the systems from all pieces of the MSMD dataset [14] (later we will elaborate more on this database).

Figure 1: Distribution of system durations in around 40,000 examples from the MSMD. More than 25% of the passages are longer than ten seconds.
In this dataset, we observe that systems can cover from less than five to more than 25 seconds of musical audio. This important temporal aspect motivates us to propose the network depicted in Figure 2 to learn a common latent representation from pairs of audio-sheet passages.

Figure 2: Diagram of the proposed network. Two independent pathways are trained to encode sheet music (a) and audio (b) passages by minimizing a contrastive loss function (c).

The architecture has two independent recurrent-convolutional pathways, which are responsible for encoding sheet music (Figure 2(a)) and audio (Figure 2(b)) passages. The key component of this approach is the introduction of two recurrent layers that, inspired by traditional sequence-to-sequence models [22], are trained to summarize variable-length sequences into context vectors, which we conveniently refer to as embedding vectors. Defining a pair of corresponding passages in the form of an image (sheet music) and a log-magnitude spectrogram (audio) as \(\mathbf{X}\) and \(\mathbf{Y}\), respectively, two sequences \((\mathbf{x}_{1},\mathbf{x}_{2},\dots,\mathbf{x}_{N})\) and \((\mathbf{y}_{1},\mathbf{y}_{2},\dots,\mathbf{y}_{M})\) are generated by sequentially cutting out short snippets from \(\mathbf{X}\) and \(\mathbf{Y}\). The shapes of the short sheet and audio snippets are respectively \(160\times 180\) (pixels)1 and \(92\times 20\) (frequency bins \(\times\) frames), the latter corresponding to one second of audio. After that, each individual snippet is encoded by a VGG-style CNN [23] into a 32-dimensional vector, as shown in Figure 2, generating two sequences of encoded snippets, one for the audio passage and the other for the sheet passage (note that each modality has its own dedicated CNN encoder). The architectures of the CNN encoders are detailed in Table 1.

Footnote 1: In our approach, all sheet music pages are initially re-scaled to a \(1181\times 835\) resolution.

Then each sequence is fed to a recurrent layer in order to learn the spatial and temporal relations between subsequent snippets, which are inherent in music. After experimenting with two typical recurrent layers, namely long short-term memory (LSTM) cells [24] and gated recurrent units (GRU) [25], we observed on average better results with GRUs and opted for the latter in our architecture. Each of the two GRUs is designed with 128 hidden units, and the hidden state of each GRU after the last step is the context vector that summarizes the passage. Finally, a fully connected (FC) layer is applied to each context vector in order to encode the final passage embeddings \((\mathbf{x}_{\mathrm{emb}},\mathbf{y}_{\mathrm{emb}})\) with the desired dimension. During training, a triplet (contrastive) loss function [26] is used to minimize the distances between embeddings of corresponding passages of audio and sheet music and to maximize the distances between non-corresponding ones. Defining \(\mathrm{d}(\cdot)\) as the cosine distance, the loss function is given by:
\[\mathcal{L}=\sum_{k=1}^{K}\max\Bigl{\{}0,\alpha+\mathrm{d}\bigl{(}\mathbf{x}_{\mathrm{emb}},\mathbf{y}_{\mathrm{emb}}\bigr{)}-\mathrm{d}\bigl{(}\mathbf{x}_{\mathrm{emb}},\mathbf{y}_{\mathrm{emb}}^{k}\bigr{)}\Bigr{\}}, \tag{1}\]
where \(\mathbf{y}_{\mathrm{emb}}^{k}\) for \(k\in\{1,2,\dots,K\}\) are contrastive (negative) examples from \(K\) non-matching passages in the same training mini-batch.
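A minimal PyTorch sketch of one pathway and of the loss in Eq. (1) is given below. It follows the description above (a VGG-style snippet CNN, a 128-unit GRU whose final hidden state is the context vector, an FC projection, and in-batch negatives); the pooling details, the margin value, and all hyper-parameters not stated in the text are our own simplifications rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in, c_out):
    # "2x Conv(3, pad-1) - BN" followed by 2x2 max-pooling, as in Table 1.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ELU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ELU(),
        nn.MaxPool2d(2),
    )

class PassageEncoder(nn.Module):
    """One pathway: snippet CNN -> GRU -> FC embedding (one instance per modality)."""
    def __init__(self, emb_dim=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            conv_block(1, 24), conv_block(24, 48),
            conv_block(48, 96), conv_block(96, 96),
            nn.Conv2d(96, 32, 1), nn.BatchNorm2d(32), nn.ELU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # 32-dim snippet code
        )
        self.gru = nn.GRU(32, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, emb_dim)

    def forward(self, snippets):                     # (batch, seq, H, W)
        b, s, h, w = snippets.shape
        codes = self.cnn(snippets.reshape(b * s, 1, h, w)).reshape(b, s, -1)
        _, h_n = self.gru(codes)                     # final hidden state = context vector
        return F.normalize(self.fc(h_n[-1]), dim=-1)

def passage_triplet_loss(x_emb, y_emb, alpha=0.7):
    """Eq. (1) with d = 1 - cosine similarity and all in-batch negatives;
    the margin alpha = 0.7 is an illustrative choice."""
    dist = 1.0 - x_emb @ y_emb.t()                   # pairwise cosine distances
    pos = dist.diag().unsqueeze(1)                   # distances of matching pairs
    hinge = F.relu(alpha + pos - dist)
    mask = ~torch.eye(len(dist), dtype=torch.bool, device=dist.device)
    return hinge[mask].sum()
```

Thanks to the adaptive pooling, the same encoder class accepts both snippet shapes, so the audio and sheet pathways are simply two `PassageEncoder` instances fed with their respective snippet sequences.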
This contrastive loss is applied to all \((\mathbf{x}_{\mathrm{emb}},\mathbf{y}_{\mathrm{emb}})\) pairs within each mini-batch iteration. The margin parameter \(\alpha\in\mathbb{R}_{+}\), in combination with the \(\max\left\{\cdot\right\}\) function, penalizes matching pairs that were poorly embedded. For the sake of simplicity, we leave the remaining details concerning the design of the networks, such as learning hyper-parameters, to our repository, where our method will be made publicly available,2 as well as the trained models derived in this work.

Footnote 2: [https://github.com/luisfvc/lcasr](https://github.com/luisfvc/lcasr)

## 3 Experiments

In this section we conduct experiments on different audio-sheet music scenarios. We first elaborate on the main dataset used for training and evaluation and define the steps of the passage retrieval task. Then we select four experimental setups and present the results.

We train our models with the Multi-Modal Sheet Music Dataset (MSMD) [14], a collection of classical piano pieces with multifaceted data, including score sheets (PDF) engraved via Lilypond3 and corresponding audio recordings rendered from MIDI with several types of piano soundfonts. With over 400 pieces from over 50 composers, including Bach, Beethoven and Schubert, and covering more than 15 hours of audio, the MSMD provides audio-sheet music alignments which allow us to obtain corresponding cross-modal pairs of musical passages. From the MSMD we were able to derive roughly 5,000 audio-sheet passages for training, which is scaled up to around 40,000 different pairs after data augmentation: audio recordings are re-rendered with different soundfonts and have their tempi scaled between 90% and 110%. Then we generate a test set of 534 pairs from a separate set of music pieces, rendered with a soundfont that was not seen during training. Later, in Section 3.2, we will also consider real scanned scores and real audio recordings.

Footnote 3: [http://www.lilypond.org](http://www.lilypond.org)

To perform cross-modal passage retrieval, we first embed all audio-sheet pairs in the shared space using our trained model depicted in Figure 2. Then the retrieval is conducted by using the cosine distance and nearest-neighbor search within the space. For example, when using an audio passage as a query to find the appropriate sheet music fragment, the pairwise cosine distances between the query embedding and all the sheet music passage embeddings are computed. Finally, the retrieval results are obtained by means of a ranked list through sorting the distances in ascending order.

As for evaluation metrics, we look at the _Recall@k_ (R@k), _Mean Reciprocal Rank_ (MRR) and the _Median Rank_ (MR). The R@k measures the ratio of queries which were correctly retrieved within the top \(k\) results. The MRR is defined as the average value of the reciprocal rank over all queries. The MR is the median position of the correct match in the ranked list.
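These metrics are straightforward to compute from the matrix of pairwise distances; a small reference sketch (our own code) is:

```python
import numpy as np

def retrieval_metrics(dist, ks=(1, 10, 25)):
    """dist[i, j]: cosine distance between query i and candidate j;
    the correct match of query i is candidate i."""
    order = np.argsort(dist, axis=1)                       # ascending distances
    hits = order == np.arange(len(dist))[:, None]
    ranks = hits.argmax(axis=1) + 1                        # 1-based rank of the match
    recall_at_k = {k: float((ranks <= k).mean()) for k in ks}
    mrr = float((1.0 / ranks).mean())                      # Mean Reciprocal Rank
    mr = float(np.median(ranks))                           # Median Rank
    return recall_at_k, mrr, mr
```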
\begin{table}
\begin{tabular}{c c}
\hline \hline
**Audio CNN encoder** & **Sheet-Image CNN encoder** \\
input: \(92\times 20\) & input: \(160\times 180\) \\
\hline
2x Conv(3, pad-1)-24 - BN & 2x Conv(3, pad-1)-24 - BN \\
MaxPooling(2) & MaxPooling(2) \\
2x Conv(3, pad-1)-48 - BN & 2x Conv(3, pad-1)-48 - BN \\
MaxPooling(2) & MaxPooling(2) \\
2x Conv(3, pad-1)-96 - BN & 2x Conv(3, pad-1)-96 - BN \\
MaxPooling(2) & MaxPooling(2) \\
2x Conv(3, pad-1)-96 - BN & 2x Conv(3, pad-1)-96 - BN \\
MaxPooling(2) & MaxPooling(2) \\
Conv(1, pad-0)-32 - BN & Conv(1, pad-0)-32 - BN \\
FC(32) & FC(32) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Overview of the two convolutional encoders, one per modality. Conv(3, pad-1)-24: \(3\times 3\) convolution with 24 feature maps and zero-padding of 1. BN: batch normalization [20]. We use ELU activation functions [21] after all convolutional and fully-connected layers.

### Experiment 1: Embedding dimension

In the first round of experiments, we investigate the effect of the final embedding dimension on the retrieval task. We consider the values in \(\{16,32,64,128,256,512,1024\}\) and train the model of Figure 2 with the same hyperparameters. Then we perform the retrieval task in both search directions: audio-to-sheet music (A2S) and sheet music-to-audio (S2A). Figure 3 presents the MRR of the passage retrieval results evaluated on the 534 audio-sheet music passage pairs of the MSMD test set. A first and straightforward observation is that in all cases the S2A direction indicates better retrieval quality. We observe the performance increasing together with the embedding dimensionality until it stagnates at 64-D, with the MRR not improving on average for higher-dimensional embeddings. For this reason, we select the model that generates 64-dimensional embeddings as the best one, which will be evaluated more thoroughly in the next experiments.

Figure 3: Mean Reciprocal Rank (MRR) for different embedding dimensions, evaluated in both search directions.

### Experiment 2: Real data and improved models

In this section, we conduct an extensive series of experiments comparing our proposed recurrent network and some improved models thereof with baseline methods, and extend the evaluation to real-world piano data. Given that our training data are entirely synthetic, we wish to investigate the generalization of our models from synthetic to real data. To this end, we evaluate on three datasets: (1) a fully artificial one, and datasets consisting (2) partially and (3) entirely of real data. For (1) we use the test split of the MSMD, and for (2) and (3) we combine the Zeilinger and Magaloff Corpora [27] with a collection of commercial recordings and scanned scores that we have access to. These data account for more than a thousand pages of sheet music scans with mappings to both MIDI files and over 20 hours of classical piano recordings. Then, besides the MSMD (I), we define two additional evaluation sets: (II) _RealScores_Synth_: a partially real set, with _scanned_ (real) scores of around 300 pieces aligned to _synthesized_ MIDI recordings; and (III) _RealScores_Rec_: an entirely real set, with _scanned_ (real) scores of around 200 pieces and their corresponding _real audio_ recordings.

As a baseline (BL), we implement the method from [14] and adapt their short-snippet-voting strategy to identify and retrieve entire music recordings and printed scores, so that it can operate with passages.4
In essence, short snippets are sequentially cut out from a passage query and embedded, and are compared to all embedded snippets which were selected from passages in a search dataset of the counterpart modality, resulting in a ranked list based on the cosine distance for each passage snippet. Then the individual ranked lists are combined into a single ranking, in which the passage with the most similar snippets is retrieved as the best match.

Footnote 4: The reasons we did not use the attention-based method from [17] as a baseline comparison are twofold. First, we intend to compare the exact original snippet embedding architecture with and without a recurrent encoder, and adding the attention mechanism to a baseline model would introduce a significant number of additional trainable parameters, making the comparison unfair. Second, the purpose of the attention model is to compensate for the musical content discrepancy between audio and sheet snippets, which is not an issue for musical passages as defined here: pairs of audio-sheet music passages comprise exactly the same musical content (that is the reason why the fragments are not fixed in time).

Additionally, we investigate whether our models can benefit from pre-trained cross-modal embeddings. Since both CNN encoders of our proposed network architecture (see Figure 2) are the same as in [14], we re-designed the baseline cross-modal network to accommodate our snippet dimensions (\(160\times 180\) and \(92\times 20\), for sheet and audio, respectively), trained a short-snippet embedding model also with the MSMD as a pre-training step, and then loaded the two CNN encoders of our recurrent network with their respective pre-trained weights before training. Our hypothesis is that, by initializing the CNN encoders with parameters that were optimized to project short pairs of matching audio-sheet snippets close together onto a common latent space, models with better embedding capacity can be obtained. After loading the two CNNs with pre-trained weights, we can either freeze (FZ) them during training or fine-tune (FT) them. Therefore, in our experiments, we refer to these modifications of our proposed vanilla recurrent network (RNN) as RNN-FZ and RNN-FT, respectively. Moreover, an additional CCA (canonical correlation analysis) layer [28] is used in [14] to increase the correlation of corresponding pairs in the embedding space. This CCA layer is refined in a post-training step, and we investigate whether this refinement process is beneficial to our network. In our experiments we refer to models that were initialized with pre-trained parameters from networks that had their CCA layer refined as RNN-FZ-CCA and RNN-FT-CCA.

Table 2 presents the results for all data configurations and models defined previously. To keep our experiments consistent and the comparison fair, we randomly select 534 passage pairs from sets (II) and (III) to create the retrieval scenarios for their respective experiments. An evident observation from the table is the considerable performance drop as we transition from synthetic to real music data. For all the models, the MRR drops at least 0.2 points on the partially real test set, and drops more than 0.3 points when moving to the entirely real data. Moreover, as mentioned in Subsection 3.1, the passage retrieval metrics of the S2A direction are better than those of A2S for all models and scenarios.
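Our reading of the snippet-voting baseline described above, as a compact sketch (function and variable names are ours):

```python
import numpy as np

def snippet_voting_retrieval(query_snippet_embs, db_snippet_embs, db_passage_ids):
    """Each query snippet votes for the passage that owns its nearest database
    snippet; passages are then ranked by their vote counts."""
    sims = query_snippet_embs @ db_snippet_embs.T       # cosine sims (unit-norm inputs)
    nearest = sims.argmax(axis=1)                       # best db snippet per query snippet
    votes = np.bincount(db_passage_ids[nearest],
                        minlength=db_passage_ids.max() + 1)
    return np.argsort(votes)[::-1]                      # passages ranked by votes
```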
Our recurrent model RNN and its variants outperform the baseline approach in all retrieval scenarios for all evaluation metrics. In our findings, we did not see noticeable improvements when the pre-loaded encoders were frozen during training. In fact, for some configurations (scenarios I and III) the evaluation metrics were slightly worse than those from the vanilla RNN model. When the CNN encoders are pre-loaded and enabled for fine-tuning, we observe the largest improvements over RNN and subsequently over BL. Moreover, the models initialized with pre-trained weights from CCA-refined networks (RNN-FT-CCA) achieved the best overall results, for all test datasets and search directions.

In addition to the overall absolute improvements, we observe that the performance drop between synthetic and real datasets shrinks with our proposed models, especially with RNN-FT-CCA. In comparison with the baseline, the I-to-III MRR gap is reduced by 0.036 and 0.06 points in the directions A2S and S2A, respectively. The results we obtained and summarized in Table 2 indicate that introducing a recurrent layer to learn longer contexts of musical content is beneficial in our cross-modal retrieval problem. However, the real-data generalization problem is still evident, and in Section 4 we discuss potential solutions to address such issues.

### Experiment 3: Global tempo variations

In this experiment, we investigate the robustness of our system to global tempo changes. To this end, the pieces of the MSMD test dataset are re-rendered with different tempo ratios \(\rho\in\{0.5,0.66,1,1.33,2\}\) (\(\rho=0.5\) means the tempo was halved and \(\rho=2\) stands for doubling the original tempo). A similar study was conducted in [17] for the retrieval of short audio-sheet snippets. Table 3 summarizes the MRR values obtained for each tempo re-rendering, where the baseline method is compared with our proposed recurrent model. We notice the general trend that the MRR gets worse as the tempo ratio is farther from \(\rho=1\) (original tempo). This behavior is somewhat expected, because the new tempo renditions are more extreme than the tempo changes the model has seen during training. Besides the better MRR values of the proposed network, an important improvement concerns the performance drop when changing from \(\rho=1\) to \(\rho=0.5\) (slower renditions).
The MRR gap between these tempo ratios drops from 0.12 to 0.10 and from 0.09 to 0.07 points for the A2S and S2A directions, respectively, when comparing our network with the baseline. This indicates that the recurrent model is more robust to global tempo variations and can operate well with longer audio passages.

\begin{table}
\begin{tabular}{l c c c c c|c c c c c}
 & \multicolumn{5}{c|}{**Audio-to-Score (A2S)**} & \multicolumn{5}{c}{**Score-to-Audio (S2A)**} \\
\cline{2-11}
 & **R@1** & **R@10** & **R@25** & **MRR** & **MR** & **R@1** & **R@10** & **R@25** & **MRR** & **MR** \\
\hline \hline
\multicolumn{11}{l}{I \quad MSMD (fully synthetic)} \\
\hline
BL & 47.56 & 81.68 & 90.80 & 0.592 & 1 & 51.37 & 83.51 & 92.59 & 0.628 & 1 \\
RNN & 51.12 & 84.46 & 92.88 & 0.627 & 1 & 54.30 & 85.95 & 94.94 & 0.670 & 1 \\
RNN-FT & 55.27 & 87.98 & 95.02 & 0.651 & 1 & 56.32 & 87.12 & 96.44 & 0.697 & 1 \\
RNN-FT-CCA & **60.04** & **89.66** & **97.73** & **0.692** & **1** & **62.11** & **91.44** & **98.41** & **0.734** & **1** \\
RNN-FZ & 50.76 & 84.20 & 92.11 & 0.619 & 1 & 52.90 & 85.21 & 94.12 & 0.658 & 1 \\
RNN-FZ-CCA & 52.67 & 86.46 & 92.88 & 0.635 & 1 & 55.67 & 86.30 & 95.34 & 0.682 & 1 \\
\hline \hline
\multicolumn{11}{l}{II \quad RealScores\_Synth (sheet music scans and synthetic recordings)} \\
\hline
BL & 20.19 & 55.47 & 74.99 & 0.343 & 7 & 25.15 & 70.27 & 83.11 & 0.391 & 5 \\
RNN & 25.09 & 61.24 & 78.27 & 0.374 & 5 & 30.15 & 72.47 & 86.89 & 0.439 & 3 \\
RNN-FT & 28.87 & 66.41 & 81.32 & 0.447 & 4 & 33.98 & 75.47 & 88.51 & 0.462 & 2 \\
RNN-FT-CCA & **33.36** & **69.49** & **83.88** & **0.481** & **3** & **37.35** & **79.22** & **89.95** & **0.538** & **1** \\
RNN-FZ & 25.83 & 62.02 & 79.74 & 0.376 & 5 & 31.45 & 74.87 & 87.26 & 0.442 & 3 \\
RNN-FZ-CCA & 26.82 & 63.33 & 80.19 & 0.391 & 5 & 33.55 & 75.71 & 88.79 & 0.467 & 2 \\
\hline \hline
\multicolumn{11}{l}{III \quad RealScores\_Rec (sheet music scans and real recordings)} \\
\hline
BL & 15.67 & 31.46 & 48.12 & 0.226 & 29 & 18.30 & 36.71 & 54.94 & 0.266 & 18 \\
RNN & 19.11 & 35.98 & 53.65 & 0.278 & 21 & 22.76 & 39.95 & 57.47 & 0.303 & 15 \\
RNN-FT & 22.39 & 39.53 & 57.19 & 0.338 & 18 & 26.76 & 42.77 & 59.38 & 0.371 & 7 \\
RNN-FT-CCA & **26.62** & **44.81** & **60.01** & **0.362** & **7** & **29.84** & **46.71** & **60.88** & **0.435** & **4** \\
RNN-FZ & 17.65 & 33.12 & 52.98 & 0.252 & 22 & 19.13 & 37.51 & 55.57 & 0.277 & 17 \\
RNN-FZ-CCA & 18.38 & 35.81 & 54.51 & 0.279 & 21 & 22.30 & 38.95 & 58.82 & 0.285 & 16 \\
\end{tabular}
\end{table}
Table 2: Results of audio-sheet music passage retrieval, performed in both search directions and evaluated on three types of data: (I) fully synthetic, (II) partially real and (III) entirely real. Boldfaced rows represent the best performing model per dataset.

### Experiment 4: Qualitative analysis

To get a better understanding of the behavior of our proposed network, in this last experiment we take a closer look at the properties of the shared embedding space. Figure 4 shows the distribution of the pairwise cosine distances between the passage pairs from the MSMD test set, in relation to the duration (in seconds) of their respective audio passages. Moreover, we scale the point sizes in the plot so that they are proportional to their individual precision values (the inverse of the rank values) in the S2A experimental setup. An interesting behavior in this visualization is that the size of the points increases as the cosine distance decreases.
It is expected that passage pairs with smaller distances between them, meaning that they are closer together in the embedding space, would lead to better retrieval ranks. Another interesting aspect of this distribution concerns the proportion of larger cosine distances as the audio duration of the passages increases. For example, between five and ten seconds, there are more large points observed than smaller ones, while between 20 and 25 seconds, the proportion is roughly equal. This indicates that, in our test set, embeddings from shorter audio passages are still located closer to their sheet counterparts than those from longer audio passages, despite our efforts to design a recurrent network that learns from longer temporal contexts. ## 4 Conclusion and Future Work We have presented a novel cross-modal recurrent network for learning correspondences between audio and sheet music passages. Besides requiring only weakly-aligned music data for training, this approach overcomes the problems of intrinsic global and local tempo mismatches of previous works that operate on short and fixed-size fragments. Our proposed models were validated in a series of experiments under different retrieval scenarios and generated better results than the baseline methods for all possible configurations. On the other hand, a serious generalization gap to real music data was observed, which points us to the next stages of our research. A natural step towards making deep-learning-based cross-modal audio-sheet music retrieval more robust would be to include real and diverse data for training the models. However, such data with suitable annotations are scarce, and recent advances in end-to-end full-page optical music recognition [29] could be a possible solution to learn correspondences at the score-page level. Moreover, powerful transformers [30] are potential architectures to learn correspondences from even longer audio recordings, accommodating typical structural differences between audio and sheet music, such as jumps and repetitions. ## 5 Acknowledgments This work is supported by the European Research Council (ERC) under the EU's Horizon 2020 research and innovation programme, grant agreement No. 101019375 (_Whither Music?_), and the Federal State of Upper Austria (LIT AI Lab).
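For readers who wish to reproduce the evaluation protocol, the metrics reported in the tables above (R@k, MRR, and median rank MR) can all be computed from the rank of each query's correct match in the retrieved list. The following is a minimal sketch, assuming 1-indexed ranks; function and variable names are illustrative and not taken from the authors' code.

```python
import numpy as np

def retrieval_metrics(ranks, ks=(1, 10, 25)):
    """R@k (in percent), mean reciprocal rank (MRR), and median rank (MR)
    from the 1-indexed rank of the correct match for each query."""
    ranks = np.asarray(ranks, dtype=float)
    metrics = {f"R@{k}": 100.0 * float(np.mean(ranks <= k)) for k in ks}
    metrics["MRR"] = float(np.mean(1.0 / ranks))  # reciprocal of each rank, averaged
    metrics["MR"] = float(np.median(ranks))
    return metrics

# Example: ranks of the correct score passage for five audio queries.
print(retrieval_metrics([1, 3, 1, 27, 2]))
```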
2310.10665
Privacy Preservation in Artificial Intelligence and Extended Reality (AI-XR) Metaverses: A Survey
The metaverse is a nascent concept that envisions a virtual universe, a collaborative space where individuals can interact, create, and participate in a wide range of activities. Privacy in the metaverse is a critical concern as the concept evolves and immersive virtual experiences become more prevalent. The metaverse privacy problem refers to the challenges and concerns surrounding the privacy of personal information and data within Virtual Reality (VR) environments as the concept of a shared VR space becomes more accessible. The metaverse will harness advancements from various technologies such as Artificial Intelligence (AI), Extended Reality (XR), Mixed Reality (MR), and 5G/6G-based communication to provide personalized and immersive services to its users. Moreover, to enable more personalized experiences, the metaverse relies on the collection of fine-grained user data that leads to various privacy issues. Therefore, before the potential of the metaverse can be fully realized, privacy concerns related to personal information and data within VR environments must be addressed. This includes safeguarding users' control over their data, ensuring the security of their personal information, and protecting in-world actions and interactions from unauthorized sharing. In this paper, we explore various privacy challenges that future metaverses are expected to face, given their reliance on AI for tracking users, creating XR and MR experiences, and facilitating interactions. Moreover, we thoroughly analyze technical solutions such as differential privacy, Homomorphic Encryption (HE), and Federated Learning (FL) and discuss related sociotechnical issues regarding privacy.
Mahdi Alkaeed, Adnan Qayyum, Junaid Qadir
2023-09-19T11:56:12Z
http://arxiv.org/abs/2310.10665v1
# Privacy Preservation in Artificial Intelligence and Extended Reality (AI-XR) Metaverses: A Survey ###### Abstract The metaverse is a nascent concept that envisions a virtual universe, a collaborative space where individuals can interact, create, and participate in a wide range of activities. Privacy in the metaverse is a critical concern as the concept evolves and immersive virtual experiences become more prevalent. The metaverse privacy problem refers to the challenges and concerns surrounding the privacy of personal information and data within Virtual Reality (VR) environments as the concept of a shared VR space becomes more accessible. The metaverse will harness advancements from various technologies such as Artificial Intelligence (AI), Extended Reality (XR), Mixed Reality (MR), and 5G/6G-based communication to provide personalized and immersive services to its users. Moreover, to enable more personalized experiences, the metaverse relies on the collection of fine-grained user data that leads to various privacy issues. Therefore, before the potential of the metaverse can be fully realized, privacy concerns related to personal information and data within VR environments must be addressed. This includes safeguarding users' control over their data, ensuring the security of their personal information, and protecting in-world actions and interactions from unauthorized sharing. In this paper, we explore various privacy challenges that future metaverses are expected to face, given their reliance on AI for tracking users, creating XR and MR experiences, and facilitating interactions. Moreover, we thoroughly analyze technical solutions such as differential privacy, Homomorphic Encryption (HE), and Federated Learning (FL) and discuss related sociotechnical issues regarding privacy. Machine Learning, Metaverse, Artificial Intelligence, Virtual Reality, Extended Reality, Mixed Reality, Homomorphic Encryption, and Federated Learning. ## I Introduction The metaverse refers to a virtual world that combines physical reality with digital technology leveraging Artificial Intelligence (AI) and Extended Reality (XR), creating a network of interconnected and immersive environments. It blends Virtual Reality (VR) and Augmented Reality (AR), permitting users to engage in multi-sensory interactions with virtual objects, people, and environments [1]. AR is a technology that superimposes digital information, such as images, sounds, and other data, onto the real-world environment in real-time [2]. AR enhances the user's perception of the physical world by overlaying computer-generated sensory input such as graphics, audio, and haptic feedback. Mixed Reality (MR) is a form of immersive technology that blends virtual and physical worlds to create a new reality that allows users to interact with digital objects and environments as if they were real. VR completely immerses users in a simulated environment, whereas MR allows users to interact with both the physical world and virtual objects simultaneously [3]. In the metaverse, users can engage in real-time communication and dynamic interactions, such as socializing with friends, playing multiplayer games, and exploring open virtual worlds. However, as the concept of a metaverse becomes more prevalent in our society, privacy concerns arising from the collection and utilization of fine-grained personal information by metaverse creators and platforms (e.g., users' location, browsing history, personal preferences [4]) assume grave importance [5].
The metaverse may collect a range of essential information from its users, including user profile information, interaction data, biometric data, payment information, and device information [6]. Users may need to provide payment information to purchase virtual items and assets or access premium features, and device information such as IP addresses, device IDs, and operating system details may also be collected [5]. Biometric data such as facial recognition or voice prints may be collected to verify a user's identity or enable voice chat. Interaction data collected by the metaverse could include information on the communities users join, virtual items they interact with, and purchases they make. Such data can be used for targeted advertising, data mining, and other purposes. Additionally, metaverse data could be accessed maliciously by cybercriminals through hacking, posing potential harm to individuals through privacy violations. To address these concerns, AI-XR platforms and creators should adopt strict privacy policies and implement robust security measures, including encryption and two-factor authentication, to protect user data from unauthorized access. Users, for their part, should be cautious about sharing personal information in the metaverse. Additionally, privacy measures must be regularly reviewed and updated to keep pace with new developments in this constantly evolving technology [12]. ML can also play a role in enhancing privacy in AI-XR metaverses in diverse ways (e.g., in developing advanced encryption algorithms, in anomaly detection for flagging unusual behavior, and in differential privacy techniques that provide strong privacy guarantees by adding noise to the data) and in detecting malicious activities [13, 14]. ML can be utilized to identify and flag suspicious activity [15] and detect malicious actions such as spam or phishing attempts [11]. ML can assist in automating data anonymization and de-identification processes, enabling organizations to protect sensitive information while still utilizing it for analytics and research purposes. ML can also monitor and analyze vast quantities of data in real time.
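To make the differential-privacy idea mentioned above concrete, the sketch below releases a counting query over hypothetical metaverse user records with calibrated Laplace noise. This is a minimal illustration under stated assumptions, not a metaverse-specific API; the data, predicate, and epsilon value are all made up for the example.

```python
import numpy as np

def laplace_count(records, predicate, epsilon):
    """Release a counting query with epsilon-differential privacy.
    A count has L1 sensitivity 1, so Laplace noise with scale 1/epsilon
    suffices for the epsilon-DP guarantee."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical user records (illustrative only).
users = [
    {"voice_chat": True,  "age": 31},
    {"voice_chat": False, "age": 24},
    {"voice_chat": True,  "age": 45},
]
# Privately count how many users enabled voice chat.
noisy = laplace_count(users, lambda u: u["voice_chat"], epsilon=0.5)
print(f"noisy count: {noisy:.2f}")
```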
2309.08149
Private Inputs for Leader-Follower Game with Feedback Stackelberg Strategy
In this paper, the two-player leader-follower game with private inputs for the feedback Stackelberg strategy is considered. In particular, the follower shares its measurement information with the leader except for its historical control inputs, while the leader shares neither its historical control inputs nor its measurement information with the follower. The private inputs of the leader and the follower lead to the main obstacle: the estimation gain and the control gain depend on each other, so the forward and backward Riccati equations are coupled and the calculation becomes complicated. By introducing novel observers based on the information structure of the follower and the leader, respectively, a new observer-feedback Stackelberg strategy is designed, and the above-mentioned obstacle is thereby avoided. Moreover, it is found that the cost functions under the presented observer-feedback Stackelberg strategy are asymptotically optimal with respect to the cost functions under the optimal feedback Stackelberg strategy with state feedback. Finally, a numerical example is given to show the effectiveness of the proposed strategy.
Yue Sun, Hongdan Li, Huanshui Zhang
2023-09-15T04:32:26Z
http://arxiv.org/abs/2309.08149v1
# Private Inputs for Leader-Follower Game with Feedback Stackelberg Strategy ###### Abstract In this paper, the two-player leader-follower game with private inputs for the feedback Stackelberg strategy is considered. In particular, the follower shares its measurement information with the leader except for its historical control inputs, while the leader shares neither its historical control inputs nor its measurement information with the follower. The private inputs of the leader and the follower lead to the main obstacle: the estimation gain and the control gain depend on each other, so the forward and backward Riccati equations are coupled and the calculation becomes complicated. By introducing novel observers based on the information structure of the follower and the leader, respectively, a new observer-feedback Stackelberg strategy is designed, and the above-mentioned obstacle is thereby avoided. Moreover, it is found that the cost functions under the presented observer-feedback Stackelberg strategy are asymptotically optimal with respect to the cost functions under the optimal feedback Stackelberg strategy with state feedback. Finally, a numerical example is given to show the effectiveness of the proposed strategy. feedback Stackelberg strategy, private inputs, observers, asymptotic optimality. ## I Introduction In the traditional control model, centralized control is a basic concept and has been extensively studied, from time-invariant systems to time-variant systems and systems with time delay [1, 2, 3]. However, with the development of wireless sensor networks and artificial intelligence, centralized control is no longer applicable in many settings, because the achievable bandwidth is limited by the long delays induced by communication with the centralized controller [4]. The task of effectively controlling systems with multiple decision-makers in the absence of communication channels is an increasingly interesting and challenging control problem. Correspondingly, the decentralized control of large-scale systems arises, which has widespread applications in electrical power distribution networks, cloud environments, multi-agent systems, reinforcement learning and so on [5, 6, 7, 8], where decisions are made by multiple different decision-makers who have access to different information. Decentralized control can be traced back to the 1970s [9, 10, 11]. The optimization of decentralized control can be divided into two categories. The first category is decentralized control for multiple controllers with one associated cost function [12, 13, 14]. Nayyar studied decentralized stochastic control with partial history observations and control input sharing in [15] by using the common information approach, and the \(n\)-step delayed sharing information structure was investigated in [16]. [17] focused on decentralized control in networked control systems with asymmetric information by solving the forward and backward coupled Riccati equations through forward iteration, where the historical control inputs were shared unilaterally, in contrast to the information structure of [15, 16], where they were shared mutually. [18] designed decentralized strategies for a mean-field system, which were further shown to achieve asymptotic robust social optimality. The other category is decentralized control for game theory [23, 24, 25].
Two-criteria LQG decision problems with a one-step-delay observation sharing pattern for stochastic discrete-time systems were considered for the Stackelberg strategy in [19] and for the Nash equilibrium strategy in [20], respectively. Necessary conditions for an optimal Stackelberg strategy of output feedback form were given in [21] for incomplete information of the controllers. [22] investigated feedback risk-sensitive Nash equilibrium solutions for two-player nonzero-sum games with complete state observation and shared historical control inputs. A static output feedback incentive Stackelberg game with Markov jumps for linear stochastic systems was considered in [26], and a numerical algorithm with guaranteed local convergence was further proposed. Note that the information structure in the decentralized control systems mentioned above has the following feature: all or part of the historical control inputs of the controllers are shared with the other controllers. However, the case where the controllers have their own private control inputs has not been addressed in decentralized control systems; such a setting has applications in personalized healthcare, in estimating the states of a virtual keyboard user (e.g., Google GBoard users), and in social robots for second language education of children [27]. It should be noted that an information structure in which the control inputs are unavailable to the other decision makers causes the estimation gain to depend on the control gain and vice versa, which means that the forward and backward Riccati equations are coupled and the calculation becomes more complicated. Motivated by [28], which focused on the LQ optimal control problem of linear systems with private input and measurement information and overcame this obstacle with a kind of novel observer, in this paper we are concerned with the feedback Stackelberg strategy for a two-player game with private control inputs. In particular, the follower shares its measurement information with the leader, while the leader does not share any information with the follower due to the hierarchical relationship, and the historical control inputs of the follower and the leader are both private, which is the main obstacle in this paper. To overcome the problem, firstly, novel observers based on the information structure of each controller are proposed. Accordingly, a new kind of observer-feedback Stackelberg strategy for the follower and the leader is designed. Finally, it is proved that the associated cost functions for the follower and the leader under the proposed observer-feedback Stackelberg strategy are asymptotically optimal as compared with the cost functions under the optimal feedback Stackelberg strategy with the feedback form of the state obtained in [29]. The outline of this paper is given as follows. The problem formulation is given in Section II. The observers and the observer-feedback Stackelberg strategy with private inputs are designed in Section III. The asymptotic optimality analysis is shown in Section IV. A numerical example is presented in Section V. The conclusion is given in Section VI. _Notations_: \(\mathbb{R}^{n}\) represents the space of all real \(n\)-dimensional vectors. \(A^{\prime}\) means the transpose of the matrix \(A\). A symmetric matrix \(A>0\) (or \(A\geq 0\)) represents that the matrix \(A\) is positive definite (or positive semi-definite). \(\|x\|\) denotes the Euclidean norm of vector \(x\), i.e., \(\|x\|^{2}=x^{\prime}x\).
\(\|A\|\) denotes the spectral norm of matrix \(A\), i.e., \(\|A\|=\sqrt{\lambda_{max}(A^{\prime}A)}\). \(\lambda(A)\) represents the eigenvalues of the matrix \(A\) and \(\lambda_{max}(A)\) represents the largest eigenvalue of the matrix \(A\). \(I\) is an identity matrix with compatible dimension. \(0\) in a block matrix represents a zero matrix with appropriate dimensions. ## II Problem Formulation Consider a two-player leader-follower game described as: \[x(k+1) = Ax(k)+B_{1}u_{1}(k)+B_{2}u_{2}(k), \tag{1}\] \[y_{1}(k) = H_{1}x(k), \tag{2}\] \[y_{2}(k) = H_{2}x(k), \tag{3}\] where \(x(k)\in\mathbb{R}^{n}\) is the state with initial value \(x(0)\). \(u_{1}(k)\in\mathbb{R}^{m_{1}}\) and \(u_{2}(k)\in\mathbb{R}^{m_{2}}\) are the two control inputs of the follower and the leader, respectively. \(y_{i}(k)\in\mathbb{R}^{s_{i}}\) is the measurement information. \(A\), \(B_{i}\) and \(H_{i}\) (\(i=1,2\)) are constant matrices with compatible dimensions. The associated cost functions for the follower and the leader are given by \[J_{1} = \sum_{k=0}^{\infty}[x^{\prime}(k)Q_{1}x(k)+u_{1}^{\prime}(k)R_{11}u_{1}(k)+u_{2}^{\prime}(k)R_{12}u_{2}(k)], \tag{4}\] \[J_{2} = \sum_{k=0}^{\infty}[x^{\prime}(k)Q_{2}x(k)+u_{1}^{\prime}(k)R_{21}u_{1}(k)+u_{2}^{\prime}(k)R_{22}u_{2}(k)], \tag{5}\] where the weight matrices are such that \(Q_{i}\geq 0\), \(R_{ij}\geq 0\) (\(i\neq j\)) and \(R_{ii}>0\) (\(i,j=1,2\)) with compatible dimensions. The feedback Stackelberg strategy with different information structures for the controllers has been considered since the 1970s [29], where each controller shares all or part of its historical inputs with the other. To the best of our knowledge, there has been no efficient technique to deal with the case of private inputs for the controllers. The difficulty lies in the unavailability of the other controller's historical control inputs, which leads to the fact that the estimation gain depends on the control gain and makes the forward and backward Riccati equations coupled. In this paper, our goal is to design novel observers based on the measurements and private inputs of the follower and the leader, respectively, and to show that the proposed observer-feedback Stackelberg strategy is asymptotically optimal with respect to the deterministic case in [29]. Mathematically, by denoting \[Y_{i}(k) = \{y_{i}(0),...,y_{i}(k)\},\] \[U_{i}(k-1) = \{u_{i}(0),...,u_{i}(k-1)\},\] \[\mathcal{F}_{1}(k) = \{Y_{1}(k),U_{1}(k-1)\}, \tag{6}\] \[\mathcal{F}_{2}(k) = \{Y_{1}(k),Y_{2}(k),U_{2}(k-1)\}, \tag{7}\] we will design the observer-feedback Stackelberg strategy based on the information \(\mathcal{F}_{i}(k)\), where \(u_{i}(k)\) is \(\mathcal{F}_{i}(k)\)-causal for \(i=1,2\) in this paper. The following assumption will be used in this paper. **Assumption 1**: _System \((A,B)\) is stabilizable with \(B=\left[\begin{array}{cc}B_{1}&B_{2}\end{array}\right]\) and system \((A,Q_{i})\) (\(i=1,2\)) is observable._ By denoting the admissible control sets \(\mathcal{U}_{i}\) (\(i=1,2\)) for the feedback Stackelberg strategy of the follower and the leader: \[\mathcal{U}_{1} = \{\,u_{1}:\Omega\times[0,N]\times\mathbb{R}^{n}\times U_{2}\longrightarrow U_{1}\},\] \[\mathcal{U}_{2} = \{\,u_{2}:\Omega\times[0,N]\times\mathbb{R}^{n}\longrightarrow U_{2}\}, \tag{8}\] where \(U_{1}\) and \(U_{2}\) represent the strategy spaces of the follower and the leader, respectively, the definition of the feedback Stackelberg strategy [30] is given.
**Definition 1**: \((u_{1}^{*}(k),u_{2}^{*}(k))\in\mathcal{U}_{1}\times\mathcal{U}_{2}\) _is the optimal feedback Stackelberg strategy, if there holds that:_ \[J_{1}(u_{1}^{*}(k,u_{2}^{*}(k)),u_{2}^{*}(k)) \leq J_{1}(u_{1}(k,u_{2}^{*}(k)),u_{2}^{*}(k)),\forall u_{1}\in\mathcal{U}_{1},\] \[J_{2}(u_{1}^{*}(k,u_{2}^{*}(k)),u_{2}^{*}(k)) \leq J_{2}(u_{1}^{*}(k,u_{2}(k)),u_{2}(k)),\forall u_{2}\in\mathcal{U}_{2}.\] _Firstly, the optimal feedback Stackelberg strategy in the deterministic case with perfect information structure is given, that is, the information structure of both the follower and the leader satisfies_ \[Y_{k}=\{x(0),...,x(k),u_{i}(0),...,u_{i}(k-1),\quad i=1,2\}.\] **Lemma 1**: _Under Assumption 1, the optimal feedback Stackelberg strategy with the information structure for the follower and the leader satisfying \(Y_{k}\) is given by_ \[u_{1}(k) = K_{1}x(k), \tag{9}\] \[u_{2}(k) = K_{2}x(k), \tag{10}\] _where the feedback gain matrices \(K_{1}\) and \(K_{2}\) satisfy_ \[K_{1} = -\Gamma_{1}^{-1}Y_{1}, \tag{11}\] \[K_{2} = -\Gamma_{2}^{-1}Y_{2}, \tag{12}\] with \[\Gamma_{1} =R_{11}+B_{1}^{\prime}P_{1}B_{1},\] \[\Gamma_{2} =R_{22}+B_{2}^{\prime}M_{1}^{\prime}P_{2}M_{1}B_{2}+B_{2}^{\prime}S^{\prime}R_{21}SB_{2},\] \[M_{1} =I-B_{1}S,\quad S=\Gamma_{1}^{-1}B_{1}^{\prime}P_{1},\] \[Y_{1} =B_{1}^{\prime}P_{1}A+B_{1}^{\prime}P_{1}B_{2}K_{2},\] \[Y_{2} =B_{2}^{\prime}M_{1}^{\prime}P_{2}M_{1}A+B_{2}^{\prime}S^{\prime}R_{21}SA,\] where \(P_{1}\) and \(P_{2}\) satisfy the following two coupled algebraic Riccati equations: \[P_{1} = Q_{1}+(A+B_{2}K_{2})^{\prime}P_{1}(A+B_{2}K_{2})-Y_{1}^{\prime}\Gamma_{1}^{-1}Y_{1}+K_{2}^{\prime}R_{12}K_{2}, \tag{13}\] \[P_{2} = Q_{2}+A^{\prime}M_{1}^{\prime}P_{2}M_{1}A+A^{\prime}S^{\prime}R_{21}SA-Y_{2}^{\prime}\Gamma_{2}^{-1}Y_{2}. \tag{14}\] The optimal cost functions for the feedback Stackelberg strategy are such that \[J_{1}^{*} = x^{\prime}(0)P_{1}x(0), \tag{15}\] \[J_{2}^{*} = x^{\prime}(0)P_{2}x(0). \tag{16}\] _Proof 1:_ The optimal feedback Stackelberg strategy for the deterministic case with perfect information structure for the follower and the leader in the finite-time horizon has been shown in (18)-(28) with \(\theta(t)=\Pi_{1}(t)=\Pi_{2}(t)=0\) in [29]. By using the results in Theorem 2 in [3], the results obtained in [29] can be extended to the infinite horizon, i.e., (18)-(28) in [29] converge to the algebraic equations (11)-(12) and (13)-(14) in Lemma 1 of this paper by the monotone boundedness theorem. This completes the proof. _Remark 1:_\(P_{1}>0\) and \(P_{2}>0\) in (13)-(14) can be shown accordingly by using Theorem 2 in [3], which guarantees the invertibility of \(\Gamma_{1}\) and \(\Gamma_{2}\). _Remark 2:_ Compared with [29], where the historical control inputs of the follower and the leader are shared with each other, the historical control inputs in this paper are private, which leads to the main obstacle. ## III The observer-feedback Stackelberg strategy Based on the discussion above, we are now in a position to consider the leader-follower game with private inputs, i.e., \(u_{i}(k)\) is \(\mathcal{F}_{i}(k)\)-causal. _Remark 3:_ As pointed out in [17], the information structure in decentralized control where one controller (C1) does not share its historical control inputs with the other controller (C2) while C2 shares its historical control inputs with C1 is already a challenging problem, because the control gain and the estimator gain are coupled.
The difficulty with private inputs for both the follower and the leader is even greater, due to the unavailability of the historical control inputs of each controller. Considering the private inputs of the follower and the leader, the observers \(\hat{x}_{i}(k)\) (\(i=1,2\)) are designed as follows: \[\hat{x}_{1}(k+1) = A\hat{x}_{1}(k)+B_{1}u_{1}^{*}(k)+B_{2}K_{2}\hat{x}_{1}(k)+L_{1}[y_{1}(k)-H_{1}\hat{x}_{1}(k)], \tag{17}\] \[\hat{x}_{2}(k+1) = A\hat{x}_{2}(k)+B_{1}K_{1}\hat{x}_{2}(k)+B_{2}u_{2}^{*}(k)+L_{2}[y_{2}(k)-H_{2}\hat{x}_{2}(k)], \tag{18}\] where the observer gain matrices \(L_{1}\) and \(L_{2}\) are chosen to make the observers stable. Accordingly, the observer-feedback Stackelberg strategy is designed as follows: \[u_{1}^{*}(k) = K_{1}\hat{x}_{1}(k), \tag{19}\] \[u_{2}^{*}(k) = K_{2}\hat{x}_{2}(k), \tag{20}\] where \(K_{1}\) and \(K_{2}\) are given in (11)-(12), respectively. For convenience of future discussion, some symbols are given beforehand. \[\mathcal{A} = \left[\begin{array}{cc}A+B_{2}K_{2}-L_{1}H_{1}&-B_{2}K_{2}\\ -B_{1}K_{1}&A+B_{1}K_{1}-L_{2}H_{2}\end{array}\right], \tag{21}\] \[\mathcal{B} = \left[\begin{array}{cc}-B_{1}K_{1}&-B_{2}K_{2}\end{array}\right] = \left[\begin{array}{cc}B_{1}S(A+B_{2}K_{2})&-B_{2}K_{2}\end{array}\right],\] \[\bar{A} = \left[\begin{array}{cc}A+B_{1}K_{1}+B_{2}K_{2}&\mathcal{B}\\ 0&\mathcal{A}\end{array}\right],\] \[\tilde{x}(k) = \left[\begin{array}{cc}\tilde{x}_{1}^{\prime}(k)&\tilde{x}_{2}^{\prime}(k)\end{array}\right]^{\prime},\] \[\tilde{x}_{i}(k) = x(k)-\hat{x}_{i}(k),\quad i=1,2.\] Subsequently, the stability of the observers \(\hat{x}_{i}(k)\) (\(i=1,2\)) and the stability of the closed-loop system (1) under the designed observer-feedback Stackelberg strategy (19)-(20) are shown, respectively. _Theorem 1:_ If there exist gain matrices \(L_{1}\) and \(L_{2}\) such that the matrix \(\mathcal{A}\) is stable, then the observers \(\hat{x}_{i}(k)\) for \(i=1,2\) are stable with the controllers of the follower and the leader satisfying (19)-(20), i.e., there holds \[\lim_{k\rightarrow\infty}\|x(k)-\hat{x}_{i}(k)\|=0. \tag{22}\] _Proof 2:_ By substituting the observer-feedback controllers (19)-(20) into (1), \(x(k+1)\) is recalculated as: \[x(k+1) = Ax(k)+B_{1}K_{1}\hat{x}_{1}(k)+B_{2}K_{2}\hat{x}_{2}(k) = [A+B_{1}K_{1}+B_{2}K_{2}]x(k)-B_{1}K_{1}\tilde{x}_{1}(k)-B_{2}K_{2}\tilde{x}_{2}(k). \tag{23}\] Accordingly, by substituting (19)-(20) into the observers (17)-(18) and combining with (23), the dynamics of \(\tilde{x}_{i}(k)\) for \(i=1,2\) are given as \[\tilde{x}_{1}(k+1) = (A+B_{2}K_{2}-L_{1}H_{1})\tilde{x}_{1}(k)-B_{2}K_{2}\tilde{x}_{2}(k),\] \[\tilde{x}_{2}(k+1) = (A+B_{1}K_{1}-L_{2}H_{2})\tilde{x}_{2}(k)-B_{1}K_{1}\tilde{x}_{1}(k),\] that is, \[\tilde{x}(k+1)=\mathcal{A}\tilde{x}(k). \tag{24}\] Subsequently, if there exist matrices \(L_{1}\) and \(L_{2}\) making \(\mathcal{A}\) stable, then the stability of the matrix \(\mathcal{A}\) means that \[\lim_{k\rightarrow\infty}\tilde{x}(k)=0,\] i.e., (22) is established. That is to say, the observers \(\hat{x}_{i}(k)\) are stable under (19)-(20). The proof is completed. _Remark 4:_ Note that in Theorem 1 the key point lies in how to select \(L_{i}\) (\(i=1,2\)) so that the eigenvalues of the matrix \(\mathcal{A}\) are within the unit circle. The following analysis gives a method to find \(L_{i}\).
According to the Lyapunov stability criterion, \(\mathcal{A}\) is stable if and only if, for any positive definite matrix \(Q\), \(\mathcal{A}^{\prime}P\mathcal{A}-P=-Q\) admits a solution \(P>0\). Thus, if there exists a \(P>0\) such that \[\mathcal{A}^{\prime}P\mathcal{A}-P<0, \tag{25}\] then \(\mathcal{A}\) is stable. Following from the elementary row transformation, one has \[\left(\begin{array}{cc}I&I\\ 0&I\end{array}\right)\left(\begin{array}{cc}I&0\\ 0&\mathcal{A}^{\prime}\end{array}\right)\left(\begin{array}{cc}-P&\mathcal{A}^{\prime}P\\ P\mathcal{A}&-P\end{array}\right)\left(\begin{array}{cc}I&0\\ 0&\mathcal{A}\end{array}\right)\times\left(\begin{array}{cc}I&0\\ I&I\end{array}\right)=\left(\begin{array}{cc}\mathcal{A}^{\prime}P\mathcal{A}-P&0\\ 0&-\mathcal{A}^{\prime}P\mathcal{A}\end{array}\right)<0,\] that is, \(\mathcal{A}^{\prime}P\mathcal{A}-P<0\) is equivalent to the following matrix inequality \[\left(\begin{array}{cc}-P&\mathcal{A}^{\prime}P\\ P\mathcal{A}&-P\end{array}\right)<0. \tag{26}\] Noting that \(\mathcal{A}\) depends on \(L_{i}\), in order to use the linear matrix inequality (LMI) Toolbox in Matlab to find \(L_{i}\), (26) will be transformed into an LMI form. Let \[\tilde{P}=\left(\begin{array}{cc}P&0\\ 0&P\end{array}\right),\quad\tilde{W}=\left(\begin{array}{cc}W_{1}&0\\ 0&W_{2}\end{array}\right),\] and rewrite \(\mathcal{A}\) in (21) as \(\mathcal{A}=\tilde{A}-\tilde{L}\tilde{H}\), where \[\tilde{A} = \left(\begin{array}{cc}A+B_{2}K_{2}&-B_{2}K_{2}\\ -B_{1}K_{1}&A+B_{1}K_{1}\end{array}\right),\] \[\tilde{L} = \left(\begin{array}{cc}L_{1}&0\\ 0&L_{2}\end{array}\right),\quad\tilde{H}=\left(\begin{array}{cc}H_{1}&0\\ 0&H_{2}\end{array}\right).\] To this end, we have \[\tilde{P}\mathcal{A}=\tilde{P}\tilde{A}-\tilde{P}\tilde{L}\tilde{H}=\tilde{P}\tilde{A}-\tilde{W}\tilde{H},\] with \(\tilde{W}=\tilde{P}\tilde{L}\). Based on the discussion above, it is concluded that \(\mathcal{A}\) is stable if there exists a \(P>0\) such that the following LMI holds: \[\left(\begin{array}{cc}-\tilde{P}&(\tilde{P}\tilde{A}-\tilde{W}\tilde{H})^{\prime}\\ \tilde{P}\tilde{A}-\tilde{W}\tilde{H}&-\tilde{P}\end{array}\right)<0. \tag{27}\] In this way, by using the LMI Toolbox in Matlab, \(L_{i}\) can be found accordingly as \(L_{i}=P^{-1}W_{i}\), which stabilizes \(\mathcal{A}\). Under the observer-feedback controllers (19)-(20), the stability of (1) is given. **Theorem 2**: _Under Assumption 1 and if there exist \(L_{i}\) stabilizing \(\mathcal{A}\), then the closed-loop system (1) is stable with the observer-feedback controllers (19)-(20)._ **Proof 3**: _According to (23), the closed-loop system (1) is reformulated as_ \[x(k+1) = [A+B_{1}K_{1}+B_{2}K_{2}]x(k)+\mathcal{B}\tilde{x}(k). \tag{28}\] Together with (24), we have \[\left[\begin{array}{c}x(k+1)\\ \tilde{x}(k+1)\end{array}\right]=\bar{A}\left[\begin{array}{c}x(k)\\ \tilde{x}(k)\end{array}\right]. \tag{29}\] _The stability of \(A+B_{1}K_{1}+B_{2}K_{2}\) is guaranteed by the stabilizability of \((A,B)\) and the observability of \((A,Q_{i})\) for \(i=1,2\). Following from Theorem 1, \(\mathcal{A}\) is stabilized by selecting appropriate gain matrices \(L_{1}\) and \(L_{2}\). Subsequently, the stability of the closed-loop system (1) is derived. This completes the proof._ ## IV The asymptotic optimality analysis The stability of the state and the observers, i.e., \(x(k)\) and \(\hat{x}_{i}(k)\) for \(i=1,2\), has been shown in Theorem 1 and Theorem 2 under the observer-feedback controllers (19)-(20).
To show the rationality of the design of the observer-feedback controllers (19)-(20), an asymptotic optimality analysis of the corresponding cost functions is given. To this end, denote the cost functions for the follower and the leader as \[J_{1}(s,M) = \sum_{k=s}^{M}[x^{\prime}(k)Q_{1}x(k)+u_{1}^{\prime}(k)R_{11}u_{1}(k)+u_{2}^{\prime}(k)R_{12}u_{2}(k)], \tag{30}\] \[J_{2}(s,M) = \sum_{k=s}^{M}[x^{\prime}(k)Q_{2}x(k)+u_{1}^{\prime}(k)R_{21}u_{1}(k)+u_{2}^{\prime}(k)R_{22}u_{2}(k)]. \tag{31}\] Now, we are in a position to show that the observer-feedback Stackelberg strategy (19)-(20) is asymptotically optimal with respect to the optimal feedback Stackelberg strategy presented in Lemma 1. **Theorem 3**: _Under Assumption 1, the corresponding cost functions (30)-(31) under the observer-feedback Stackelberg strategy (19)-(20), with \(L_{i}\) (\(i=1,2\)) selected from Theorem 1, are given by_ \[J_{1}^{\star}(s,\infty) = x^{\prime}(s)P_{1}x(s)+\sum_{k=s}^{\infty}\left[\begin{array}{c}x(k)\\ \tilde{x}(k)\end{array}\right]^{\prime}\left[\begin{array}{cc}0&T_{1}\\ T_{1}^{\prime}&S_{1}\end{array}\right]\left[\begin{array}{c}x(k)\\ \tilde{x}(k)\end{array}\right], \tag{32}\] \[J_{2}^{\star}(s,\infty) = x^{\prime}(s)P_{2}x(s)+\sum_{k=s}^{\infty}\left[\begin{array}{c}x(k)\\ \tilde{x}(k)\end{array}\right]^{\prime}\left[\begin{array}{cc}0&T_{2}\\ T_{2}^{\prime}&S_{2}\end{array}\right]\left[\begin{array}{c}x(k)\\ \tilde{x}(k)\end{array}\right], \tag{33}\] _where_ \[S_{1} = \mathcal{B}^{\prime}P_{1}\mathcal{B}-\left[\begin{array}{cc}K_{1}^{\prime}R_{11}K_{1}&0\\ 0&K_{2}^{\prime}R_{12}K_{2}\end{array}\right],\] \[S_{2} = \mathcal{B}^{\prime}P_{2}\mathcal{B}-\left[\begin{array}{cc}K_{1}^{\prime}R_{21}K_{1}&0\\ 0&K_{2}^{\prime}R_{22}K_{2}\end{array}\right],\] \[T_{1} = (A+B_{2}K_{2})^{\prime}M_{1}^{\prime}P_{1}\mathcal{B},\] \[T_{2} = (A+B_{2}K_{2})^{\prime}M_{1}^{\prime}P_{2}\mathcal{B}.\] _Moreover, the differences, denoted \(\delta J_{1}(s,\infty)\) and \(\delta J_{2}(s,\infty)\), between (32)-(33) and the optimal cost functions of Lemma 1 evaluated from time \(s\), i.e., \(x^{\prime}(s)P_{i}x(s)\) (cf. (15)-(16)), are such that_ \[\delta J_{1}(s,\infty) = J_{1}^{\star}(s,\infty)-x^{\prime}(s)P_{1}x(s) = \sum_{k=s}^{\infty}\left[\begin{array}{c}x(k)\\ \tilde{x}(k)\end{array}\right]^{\prime}\left[\begin{array}{cc}0&T_{1}\\ T_{1}^{\prime}&S_{1}\end{array}\right]\left[\begin{array}{c}x(k)\\ \tilde{x}(k)\end{array}\right], \tag{34}\] \[\delta J_{2}(s,\infty) = J_{2}^{\star}(s,\infty)-x^{\prime}(s)P_{2}x(s) = \sum_{k=s}^{\infty}\left[\begin{array}{c}x(k)\\ \tilde{x}(k)\end{array}\right]^{\prime}\left[\begin{array}{cc}0&T_{2}\\ T_{2}^{\prime}&S_{2}\end{array}\right]\left[\begin{array}{c}x(k)\\ \tilde{x}(k)\end{array}\right]. \tag{35}\] **Proof 4**: _The proof is divided into two parts. The first part considers the cost function of the follower under the observer-feedback controllers (19)-(20). Following from (23), system (1) can be rewritten as_ \[x(k+1) = [A+B_{1}K_{1}+B_{2}K_{2}]x(k)-B_{1}K_{1}\tilde{x}_{1}(k)-B_{2}K_{2}\tilde{x}_{2}(k) = (I-B_{1}S)(A+B_{2}K_{2})x(k)+\mathcal{B}\tilde{x}(k), \tag{36}\] where \(K_{1}\) in (11) has been used in the derivation of the last equality. Firstly, we will prove that \(J_{1}^{\star}(s,\infty)\) satisfies (32).
Combining (36) with (13), one has \[x^{\prime}(k)P_{1}x(k)-x(k+1)^{\prime}P_{1}x(k+1)\] \[= x^{\prime}(k)[P_{1}-(A+B_{2}K_{2})^{\prime}(I-B_{1}S)^{\prime}P_{1}(I-B_{1}S)(A+B_{2}K_{2})]x(k)-x^{\prime}(k)(A+B_{2}K_{2})^{\prime}M_{1}^{\prime}P_{1}\mathcal{B}\tilde{x}(k)-\tilde{x}^{\prime}(k)\mathcal{B}^{\prime}P_{1}M_{1}(A+B_{2}K_{2})x(k)-\tilde{x}^{\prime}(k)\mathcal{B}^{\prime}P_{1}\mathcal{B}\tilde{x}(k)\] \[= x^{\prime}(k)[Q_{1}+K_{2}^{\prime}R_{12}K_{2}-(A+B_{2}K_{2})^{\prime}P_{1}B_{1}\Gamma_{1}^{-1}B_{1}^{\prime}P_{1}(A+B_{2}K_{2})+(A+B_{2}K_{2})^{\prime}P_{1}B_{1}S(A+B_{2}K_{2})+(A+B_{2}K_{2})^{\prime}S^{\prime}B_{1}^{\prime}P_{1}(A+B_{2}K_{2})-(A+B_{2}K_{2})^{\prime}S^{\prime}B_{1}^{\prime}P_{1}B_{1}S(A+B_{2}K_{2})]x(k)-x^{\prime}(k)(A+B_{2}K_{2})^{\prime}M_{1}^{\prime}P_{1}\mathcal{B}\tilde{x}(k)-\tilde{x}^{\prime}(k)\mathcal{B}^{\prime}P_{1}M_{1}(A+B_{2}K_{2})x(k)-\tilde{x}^{\prime}(k)\mathcal{B}^{\prime}P_{1}\mathcal{B}\tilde{x}(k)\] \[= x^{\prime}(k)[Q_{1}+K_{2}^{\prime}R_{12}K_{2}+K_{1}^{\prime}(R_{11}+B_{1}^{\prime}P_{1}B_{1})K_{1}-K_{1}^{\prime}B_{1}^{\prime}P_{1}B_{1}K_{1}]x(k)-x^{\prime}(k)(A+B_{2}K_{2})^{\prime}M_{1}^{\prime}P_{1}\mathcal{B}\tilde{x}(k)-\tilde{x}^{\prime}(k)\mathcal{B}^{\prime}P_{1}M_{1}(A+B_{2}K_{2})x(k)-\tilde{x}^{\prime}(k)\mathcal{B}^{\prime}P_{1}\mathcal{B}\tilde{x}(k)\] \[= x^{\prime}(k)[Q_{1}+K_{1}^{\prime}R_{11}K_{1}+K_{2}^{\prime}R_{12}K_{2}]x(k)-x^{\prime}(k)(A+B_{2}K_{2})^{\prime}M_{1}^{\prime}P_{1}\mathcal{B}\tilde{x}(k)-\tilde{x}^{\prime}(k)\mathcal{B}^{\prime}P_{1}M_{1}(A+B_{2}K_{2})x(k)-\tilde{x}^{\prime}(k)\mathcal{B}^{\prime}P_{1}\mathcal{B}\tilde{x}(k). \tag{37}\] Summing (37) from \(k=s\) to \(k=M\) on both sides, we have \[x^{\prime}(s)P_{1}x(s)-x^{\prime}(M+1)P_{1}x(M+1) = J_{1}(s,M)+\sum_{k=s}^{M}\tilde{x}^{\prime}(k)\left[\begin{array}{cc}K_{1}^{\prime}R_{11}K_{1}&0\\ 0&K_{2}^{\prime}R_{12}K_{2}\end{array}\right]\tilde{x}(k)-\sum_{k=s}^{M}\left[\begin{array}{c}x(k)\\ \tilde{x}(k)\end{array}\right]^{\prime}\left[\begin{array}{cc}0&T_{1}\\ T_{1}^{\prime}&\mathcal{B}^{\prime}P_{1}\mathcal{B}\end{array}\right]\left[\begin{array}{c}x(k)\\ \tilde{x}(k)\end{array}\right]. \tag{38}\] According to Theorem 2, the stability of (1) means that \[\lim_{M\rightarrow\infty}x^{\prime}(M+1)P_{1}x(M+1)=0.\] Thus, following from (38) and letting \(M\rightarrow\infty\), (32) is obtained exactly. The second part considers the cost function of the leader under the observer-feedback controllers (19)-(20); that is, we will show that \(J_{2}^{\star}(s,\infty)\) satisfies (33).
Following from (36), one derives \[x^{\prime}(k)P_{2}x(k)-x(k+1)^{\prime}P_{2}x(k+1)\] \[= x^{\prime}(k)[P_{2}-(A+B_{2}K_{2})^{\prime}M_{1}^{\prime}P_{2}M_{1}(A+B_{2}K_{2})]x(k)-x^{\prime}(k)(A+B_{2}K_{2})^{\prime}M_{1}^{\prime}P_{2}\mathcal{B}\tilde{x}(k)-\tilde{x}^{\prime}(k)\mathcal{B}^{\prime}P_{2}M_{1}(A+B_{2}K_{2})x(k)-\tilde{x}^{\prime}(k)\mathcal{B}^{\prime}P_{2}\mathcal{B}\tilde{x}(k)\] \[= x^{\prime}(k)[Q_{2}+A^{\prime}S^{\prime}R_{21}SA-Y_{2}^{\prime}\Gamma_{2}^{-1}Y_{2}-A^{\prime}M_{1}^{\prime}P_{2}M_{1}B_{2}K_{2}-K_{2}^{\prime}B_{2}^{\prime}M_{1}^{\prime}P_{2}M_{1}A-K_{2}^{\prime}B_{2}^{\prime}M_{1}^{\prime}P_{2}M_{1}B_{2}K_{2}]x(k)-\tilde{x}^{\prime}(k)\mathcal{B}^{\prime}P_{2}\mathcal{B}\tilde{x}(k)-x^{\prime}(k)(A+B_{2}K_{2})^{\prime}M_{1}^{\prime}P_{2}\mathcal{B}\tilde{x}(k)-\tilde{x}^{\prime}(k)\mathcal{B}^{\prime}P_{2}M_{1}(A+B_{2}K_{2})x(k), \tag{39}\] where the algebraic Riccati equation (14) has been used in the derivation of the last equality. For further simplification, we make the following derivation: \[x^{\prime}(k)P_{2}x(k)-x(k+1)^{\prime}P_{2}x(k+1)\] \[= x^{\prime}(k)[Q_{2}+K_{1}^{\prime}R_{21}K_{1}+K_{2}^{\prime}R_{22}K_{2}]x(k)+x^{\prime}(k)[-(A+B_{2}K_{2})^{\prime}S^{\prime}R_{21}S(A+B_{2}K_{2})-K_{2}^{\prime}R_{22}K_{2}+A^{\prime}S^{\prime}R_{21}SA-Y_{2}^{\prime}\Gamma_{2}^{-1}Y_{2}-A^{\prime}M_{1}^{\prime}P_{2}M_{1}B_{2}K_{2}-K_{2}^{\prime}B_{2}^{\prime}M_{1}^{\prime}P_{2}M_{1}A-K_{2}^{\prime}B_{2}^{\prime}M_{1}^{\prime}P_{2}M_{1}B_{2}K_{2}]x(k)-x^{\prime}(k)(A+B_{2}K_{2})^{\prime}M_{1}^{\prime}P_{2}\mathcal{B}\tilde{x}(k)-\tilde{x}^{\prime}(k)\mathcal{B}^{\prime}P_{2}M_{1}(A+B_{2}K_{2})x(k)-\tilde{x}^{\prime}(k)\mathcal{B}^{\prime}P_{2}\mathcal{B}\tilde{x}(k)\] \[= x^{\prime}(k)[Q_{2}+K_{1}^{\prime}R_{21}K_{1}+K_{2}^{\prime}R_{22}K_{2}]x(k)-x^{\prime}(k)(A+B_{2}K_{2})^{\prime}M_{1}^{\prime}P_{2}\mathcal{B}\tilde{x}(k)-\tilde{x}^{\prime}(k)\mathcal{B}^{\prime}P_{2}M_{1}(A+B_{2}K_{2})x(k)-\tilde{x}^{\prime}(k)\mathcal{B}^{\prime}P_{2}\mathcal{B}\tilde{x}(k). \tag{40}\] Summing (40) from \(k=s\) to \(k=M\) on both sides and letting \(M\rightarrow\infty\), (33) is obtained by the stability of the closed-loop system established in Theorem 2. This completes the proof. Finally, the asymptotic optimality of the observer-feedback strategy is stated. **Theorem 4**: _Under Assumption 1 and with \(L_{i}\) (\(i=1,2\)) selected from Theorem 1, the differences (34)-(35) decay exponentially, i.e., there exist \(\bar{c}>0\) and \(0<\lambda<1\) such that \(\delta J_{i}(s,\infty)\leq\bar{c}\lambda^{2s}\) (\(i=1,2\)); hence, for any \(\varepsilon>0\) there exists a sufficiently large integer \(N\) such that \(\delta J_{i}(N,\infty)<\varepsilon\)._ **Proof 5**: _Since the closed-loop system (29) is stable, there exist \(c>0\) and \(0<\lambda<1\) such that \(\left\|\left[\begin{array}{c}x(k)\\ \tilde{x}(k)\end{array}\right]\right\|\leq c\lambda^{k}\left\|\left[\begin{array}{c}x(0)\\ \tilde{x}(0)\end{array}\right]\right\|\). Accordingly,_ \[\delta J_{i}(s,\infty) \leq \sum_{k=s}^{\infty}\left\|\left[\begin{array}{cc}0&T_{i}\\ T_{i}^{\prime}&S_{i}\end{array}\right]\right\|\left\|\left[\begin{array}{c}x(k)\\ \tilde{x}(k)\end{array}\right]\right\|^{2} \leq \sum_{k=s}^{\infty}\lambda^{2k}c^{2}\left\|\left[\begin{array}{cc}0&T_{i}\\ T_{i}^{\prime}&S_{i}\end{array}\right]\right\|\left\|\left[\begin{array}{c}x(0)\\ \tilde{x}(0)\end{array}\right]\right\|^{2} < \frac{\lambda^{2s}}{1-\lambda^{2}}c^{2}\left\|\left[\begin{array}{cc}0&T_{i}\\ T_{i}^{\prime}&S_{i}\end{array}\right]\right\|\left\|\left[\begin{array}{c}x(0)\\ \tilde{x}(0)\end{array}\right]\right\|^{2} \doteq \bar{c}\lambda^{2s}.\] _Since \(0<\lambda<1\), for any \(\varepsilon>0\) there exists a sufficiently large integer \(N\) satisfying_ \[\lambda^{2N}<\frac{1}{\bar{c}+1}\varepsilon.\] _Combining this with the bound above, one has_ \[\delta J_{i}(N,\infty)<\frac{\bar{c}}{\bar{c}+1}\varepsilon<\varepsilon.\] _That is to say, the cost functions (32)-(33) under the observer-feedback Stackelberg strategy (19)-(20) are asymptotically optimal with respect to the cost functions under the optimal feedback Stackelberg strategy (9)-(10) when the integer \(N\) is large enough. The proof is now completed._
## V Numerical Examples To show the validity of the results in Theorem 1 to Theorem 4, the following example is presented. Consider system (1)-(3) with \[A = \left[\begin{array}{cc}1&-0.7\\ 1&-0.3\end{array}\right],\quad B_{1}=\left[\begin{array}{c}-5\\ -1\end{array}\right],\] \[B_{2} = \left[\begin{array}{c}0\\ 1\end{array}\right],\quad H_{1}=\left[\begin{array}{cc}1&0\end{array}\right],\quad H_{2}=\left[\begin{array}{cc}0&1\end{array}\right],\] and the associated cost functions (4)-(5) with \[Q_{1} = \left[\begin{array}{cc}1&0\\ 0&1\end{array}\right],\quad Q_{2}=\left[\begin{array}{cc}2&0\\ 0&1\end{array}\right],\] \[R_{11} = 1,\quad R_{12}=2,\quad R_{21}=0,\quad R_{22}=1.\] By solving the coupled algebraic Riccati equations (13)-(14), the feedback gains in (11)-(12) are respectively calculated as \[K_{1} = \left[\begin{array}{cc}0.2028&-0.1374\end{array}\right],\] \[K_{2} = \left[\begin{array}{cc}-0.4005&0.0791\end{array}\right].\] By using the LMI Toolbox in Matlab, \(L_{i}\) (\(i=1,2\)) are calculated as \[L_{1}=\left[\begin{array}{c}1.2364\\ 0.4246\end{array}\right],\quad L_{2}=\left[\begin{array}{c}0.0039\\ 0.1925\end{array}\right],\] while the four eigenvalues of the matrix \(\mathcal{A}\) are calculated as \[\lambda_{1}(\mathcal{A}) = 0.1949,\quad\lambda_{2}(\mathcal{A})=0.6791,\] \[\lambda_{3}(\mathcal{A}) = \lambda_{4}(\mathcal{A})=0.7317,\] which means that \(\mathcal{A}\) in (21) is stable. In this way, following from Theorem 1, the state estimation error \(\tilde{x}(k)\) in (24) is stable, as shown in Fig. 1, where data 1 to data 4 represent the four components of the vector \(\tilde{x}(k)\doteq\left[\begin{array}{cccc}\tilde{x}_{11}(k)&\tilde{x}_{21}(k)&\tilde{x}_{31}(k)&\tilde{x}_{41}(k)\end{array}\right]^{\prime}\). Moreover, under the observer-feedback Stackelberg strategy (19)-(20), the state \(x(k)\) in (1) is also stable, as can be seen in Fig. 2, where data 1 and data 2 represent the two components of \(x(k)\doteq\left[\begin{array}{cc}x_{11}(k)&x_{21}(k)\end{array}\right]^{\prime}\). Finally, by analyzing Fig. 1 and Fig. 2 and selecting \(N=30\) in Theorem 4, the asymptotic optimality of the cost functions (32)-(33) under the observer-feedback Stackelberg strategy (19)-(20) is guaranteed. ## VI Conclusion In this paper, the two-player leader-follower game with private inputs has been considered. By introducing novel observers based on the information structure of the follower and the leader, an observer-feedback Stackelberg strategy has been designed, and it has been shown that the cost functions under the proposed observer-feedback Stackelberg strategy are asymptotically optimal with respect to the cost functions under the optimal feedback Stackelberg strategy.
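The gains reported in this example can be checked numerically. The sketch below iterates the coupled algebraic Riccati equations (13)-(14) by a simple fixed-point scheme (the specific solution method is our assumption; the paper itself only invokes a monotone-convergence argument) and then verifies that the observer gains given in the text place the spectrum of \(\mathcal{A}\) inside the unit circle.

```python
import numpy as np

# Data of the numerical example in Section V.
A  = np.array([[1.0, -0.7], [1.0, -0.3]])
B1 = np.array([[-5.0], [-1.0]]); B2 = np.array([[0.0], [1.0]])
H1 = np.array([[1.0, 0.0]]);     H2 = np.array([[0.0, 1.0]])
Q1 = np.eye(2);                  Q2 = np.diag([2.0, 1.0])
R11, R12, R21, R22 = 1.0, 2.0, 0.0, 1.0

# Fixed-point iteration of the coupled algebraic Riccati equations (13)-(14).
P1, P2 = Q1.copy(), Q2.copy()
for _ in range(2000):
    G1 = R11 + B1.T @ P1 @ B1                      # Gamma_1
    S  = np.linalg.solve(G1, B1.T @ P1)            # S = Gamma_1^{-1} B1' P1
    M1 = np.eye(2) - B1 @ S
    G2 = (R22 + B2.T @ M1.T @ P2 @ M1 @ B2
          + R21 * (B2.T @ S.T @ S @ B2))           # Gamma_2
    Y2 = B2.T @ M1.T @ P2 @ M1 @ A + R21 * (B2.T @ S.T @ S @ A)
    K2 = -np.linalg.solve(G2, Y2)
    Y1 = B1.T @ P1 @ (A + B2 @ K2)
    K1 = -np.linalg.solve(G1, Y1)
    Acl = A + B2 @ K2
    P1n = Q1 + Acl.T @ P1 @ Acl - Y1.T @ np.linalg.solve(G1, Y1) + R12 * (K2.T @ K2)
    P2n = (Q2 + A.T @ M1.T @ P2 @ M1 @ A + R21 * (A.T @ S.T @ S @ A)
           - Y2.T @ np.linalg.solve(G2, Y2))
    P1, P2 = P1n, P2n

print("K1 =", K1)   # text reports approx [ 0.2028, -0.1374]
print("K2 =", K2)   # text reports approx [-0.4005,  0.0791]

# Error dynamics (21) with the observer gains reported in the text.
L1 = np.array([[1.2364], [0.4246]]); L2 = np.array([[0.0039], [0.1925]])
Acal = np.block([[A + B2 @ K2 - L1 @ H1, -B2 @ K2],
                 [-B1 @ K1,              A + B1 @ K1 - L2 @ H2]])
print("spectral radius of A_cal:", max(abs(np.linalg.eigvals(Acal))))  # should be < 1
```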
2302.14612
Multiple-q current states in a multicomponent superconducting channel
It is well-established that multicomponent superconductors can host different nonstandard phenomena such as broken time-reversal symmetry (BTRS) states, exotic Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) phases, the fractional Josephson effect, as well as plenty of topological defects like phase solitons, domain walls and unusual vortex structures. We show that in the case of a two-component superconducting quasi-one-dimensional channel this catalogue can be extended by a novel inhomogeneous current state, which we term a multiple-momenta state or, in short, a multiple-q state, characterized by the coexistence of two different interpenetrating Cooper pair condensates with different total momenta. Within the Ginzburg-Landau formalism for a dirty two-band superconductor with sizable impurity scattering treated in the Born approximation, we reveal that under certain conditions the occurrence of multiple-q states can induce a cascade of transitions involving switching between them and the homogeneous BTRS (non-BTRS) states and vice versa, leading to a complex interplay of homogeneous and inhomogeneous current states. We find that hallmarks of such a multiple-q state within a thin wire or channel can be a saw-like dependence of the depairing current and the existence of two distinct stable branches of it (a bistable current state).
Yuriy Yerin, Stefan-Ludwig Drechsler, Mario Cuoco, Caterina Petrillo
2023-02-28T14:55:06Z
http://arxiv.org/abs/2302.14612v1
# Multiple-q current states in a multicomponent superconducting channel ###### Abstract It is well-established that multicomponent superconductors can host different nonstandard phenomena such as broken time-reversal symmetry (BTRS) states, exotic Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) phases, the fractional Josephson effect, as well as plenty of topological defects like phase solitons, domain walls and unusual vortex structures. We show that in the case of a two-component superconducting quasi-one-dimensional channel this catalogue can be extended by a novel inhomogeneous current state, which we term a multiple-momenta state or, in short, a multiple-q state, characterized by the coexistence of two different interpenetrating Cooper pair condensates with different total momenta. Within the Ginzburg-Landau formalism for a dirty two-band superconductor with sizable impurity scattering treated in the Born approximation, we reveal that under certain conditions the occurrence of multiple-q states can induce a cascade of transitions involving switching between them and the homogeneous BTRS (non-BTRS) states and vice versa, leading to a complex interplay of homogeneous and inhomogeneous current states. We find that hallmarks of such a multiple-q state within a thin wire or channel can be a saw-like dependence of the depairing current and the existence of two distinct stable branches of it (a bistable current state). ## I Introduction The study of multicomponent superconductivity has become one of the major research topics of condensed matter physics. The attention to this issue stems primarily from the fact that multicomponent superconductivity represents a field with remarkably rich physics and new interesting phenomena and unusual states not observed in conventional superconductors. The variety of multicomponent superconducting systems is represented, for instance, by strontium ruthenate [1; 2], iron-based [3; 4; 5], noncentrosymmetric [6] and heavy-fermion superconductors [7]. Loosely speaking, these materials can be considered as a kind of stage theater where the actors can play the role of various exotic states and phenomena. In this regard, it is evident that much effort and research is being made to discover and cast new promising "actors", viz. new phenomena and states, unknown until now in multicomponent superconductors. Among the effects that have already been discovered and those that are yet to be discovered, a special niche is occupied by the so-called phase coherent effects in multicomponent superconductors, connected with the emergence of nontrivial phase shifts between several distinctive order parameters. This can lead to the interesting phenomenon known as chiral superconductivity with \(s_{\pm}+is_{++}\) pairing symmetry and, as a consequence, to a state with broken time-reversal symmetry (BTRS), when the phases of the multicomponent order parameter exhibit frustration. The presence of non-zero phase shifts raises a reasonable question, namely, how are these phase shifts manifested, or how could they become visible in the observables?
At this stage, it has already been theoretically established that the occurrence of such phase differences should affect the Josephson effect with the appearance of \(\phi\) (\(\phi_{0}\)) and \(\pi\) junctions and the corresponding current-phase relations [8; 9; 10; 11; 12; 13; 14; 15; 16; 17], phase-sensitive structures like dc-SQUIDs with unusual Fraunhofer diffraction patterns [18], the Little-Parks effect with a non-parabolic dependence of the critical temperature shift [19; 20], and current states with anomalous characteristics of depairing curves [21]. Moreover, under certain circumstances an applied magnetic flux can drive the phase shift, converting a state with chiral \(s_{\pm}+is_{++}\) symmetry into a \(s_{\pm}\) configuration, when the intercomponent phase difference is stable and equal to \(\pi\), and vice versa [22]. Such a controlled switching between current states of different symmetries (different phase shifts) can produce an anomalous diamagnetic response inducing current density jumps and kinks in doubly-connected geometries [23]. Besides, the intercomponent phase difference itself can arise due to topological excitations inherent solely in multicomponent superconductors and known as phase solitons of the sine-Gordon type [25; 26; 27; 28; 29; 30] or double sine-Gordon type [31]. These inhomogeneous current states have been confirmed in a series of experiments [32; 33; 34; 35]. Another example of an inhomogeneous current state is the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state in a two-band superconductor, when, due to the competition of two different modulation length scales, the FFLO phase is transformed into two phases separated by a first order phase transition: the so-called \(Q_{1}\)- and \(Q_{2}\)-FFLO phases at the higher and lower fields [36; 37]. This similarity obviously suggests the possibility of the existence of a current state in a multicomponent superconductor in which different coexisting condensates have different superconducting momenta \(q_{i}\), where \(i\) is the number of the component. Strictly speaking, such a situation, when condensates can have different momenta, is not new and can be achieved theoretically by means of the additional contribution from the Andreev-Bashkin effect, when the intercomponent current-current coupling gives rise to a dissipationless drag (also known as entrainment) between the two components within a mixture of two superfluids [38; 39; 40], or a superconductor coupled to a superfluid [41], or between the neutron and proton condensates in the core of a neutron star [42; 43]. In this paper, we intend to demonstrate that an inhomogeneous current state with different superconducting condensate momenta can arise in a superconductor even without taking into account the Andreev-Bashkin current-current coupling. In the framework of the Ginzburg-Landau phenomenological theory, it will be shown that a two-band superconducting quasi-one-dimensional channel with a weak interaction between the bands and with the inclusion of the interband scattering effect is sufficient for the onset of such a state. Along with this, we find that its occurrence can start a cascade of transitions between it and homogeneous states, both with broken and with preserved time-reversal symmetry.
In the context of unconventional superconductivity with \(d\)-wave symmetry, the coexistence of strong impurity scattering and \(q\)-dependent inhomogeneities induced at high magnetic fields, manifested in the celebrated Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) phases, has been demonstrated recently [44; 45]. Here, we will show that a similar modulation of the order parameters is also possible at low or ambient magnetic fields, at least in thin wires or channels, but induced by an external current, for dirty two-band superconductors with chiral \(s_{\pm}+is_{++}\) symmetry. The outline of the paper is as follows. In Sec. II we describe the geometrical characteristics of a channel and introduce the Ginzburg-Landau (GL) formalism generalized for the case of a two-component order parameter with the interband scattering effect included. In Sec. III we study the phase diagram of a two-component superconductor, where domains with a nontrivial phase difference as a function of the temperature and the strength of the interband scattering rate are shown. In this phase diagram we select reference points from each domain, which are the basis for the presentation of our results and subsequent conclusions. Following this, in Sec. IV we derive general expressions for the GL free energy and investigate its behavior for the selected reference points. The results of our calculations are discussed in Sec. V. Finally, we present our conclusions in Sec. VI. ## II Model and formalism The subject of our consideration is given by the current states in a thin two-band superconducting wire with the diameter \(d\ll\xi_{1,2}(T),\lambda_{1,2}(T)\), where \(\xi_{1,2}(T)\) and \(\lambda_{1,2}(T)\) are the coherence lengths and London penetration depths for each non-interacting order parameter, respectively (Fig. 1). The research tool for the study of current states will be the GL theory for a dirty two-band superconductor. For this physical case, by means of the Usadel equations generalized for a two-band superconductor with interband scattering by impurities, one can reduce the free energy \(F\) [46; 47] to the form \[F=F_{1}+F_{2}+F_{12}+\int\frac{\left(\text{rot }\mathbf{A}-\mathbf{H}\right)^{2}}{8\pi}d^{3}\mathbf{r}, \tag{1}\] where \(F_{i}\) are the partial contributions of the \(i\)th band, \(F_{12}\) is the component arising from the interband interaction, which is also affected by the presence of interband impurity scattering. The last term describes the contribution of a magnetic field \(\mathbf{H}\) and the vector potential \(\mathbf{A}\).
The expressions for \(F_{i}\) and \(F_{12}\) have the form \[F_{1}=\int\Big{[}a_{11}|\Delta_{1}|^{2}+\frac{1}{2}b_{11}|\Delta_{1}|^{4}+\frac{1}{2}k_{11}\Big{|}\left(-i\hbar\nabla-\frac{2e}{c}\mathbf{A}\right)\Delta_{1}\Big{|}^{2}\Big{]}d^{3}\mathbf{r}, \tag{2}\] \[F_{2}=\int\Big{[}a_{22}|\Delta_{2}|^{2}+\frac{1}{2}b_{22}|\Delta_{2}|^{4}+\frac{1}{2}k_{22}\Big{|}\left(-i\hbar\nabla-\frac{2e}{c}\mathbf{A}\right)\Delta_{2}\Big{|}^{2}\Big{]}d^{3}\mathbf{r}, \tag{3}\] \[F_{12}=\int\Big{[}b_{12}|\Delta_{1}|^{2}|\Delta_{2}|^{2}+2\left(a_{12}|\Delta_{1}||\Delta_{2}|+c_{11}|\Delta_{1}|^{3}|\Delta_{2}|+c_{22}|\Delta_{1}||\Delta_{2}|^{3}\right)\cos\phi+c_{12}|\Delta_{1}|^{2}|\Delta_{2}|^{2}\cos 2\phi+\frac{1}{2}k_{12}\left(\left(-i\hbar\nabla-\frac{2e}{c}\mathbf{A}\right)\Delta_{1}\left(i\hbar\nabla-\frac{2e}{c}\mathbf{A}\right)\Delta_{2}^{*}+\left(i\hbar\nabla-\frac{2e}{c}\mathbf{A}\right)\Delta_{1}^{*}\left(-i\hbar\nabla-\frac{2e}{c}\mathbf{A}\right)\Delta_{2}\right)\Big{]}d^{3}\mathbf{r}. \tag{4}\] Here, \(\Delta_{i}=|\Delta_{i}|\exp\left(i\chi_{i}\right)\) are complex order parameters. Also, we introduce the phase difference between the order parameters \(\phi=\chi_{2}-\chi_{1}\), which will play an important role for the determination of the ground state of a dirty two-band superconductor and for the description of the current states. The functional derivative \(\frac{\partial F}{\partial\mathbf{A}(\mathbf{r})}\) yields the expression for the current \(\mathbf{j}\): \[\mathbf{j}=-ie\hbar k_{11}\left(\Delta_{1}^{*}\nabla\Delta_{1}-\Delta_{1}\nabla\Delta_{1}^{*}\right)-ie\hbar k_{22}\left(\Delta_{2}^{*}\nabla\Delta_{2}-\Delta_{2}\nabla\Delta_{2}^{*}\right)-ie\hbar k_{12}\left(\Delta_{1}^{*}\nabla\Delta_{2}-\Delta_{2}\nabla\Delta_{1}^{*}-\Delta_{1}\nabla\Delta_{2}^{*}+\Delta_{2}^{*}\nabla\Delta_{1}\right)-\frac{4e^{2}}{c}\left(k_{11}|\Delta_{1}|^{2}+k_{22}|\Delta_{2}|^{2}+k_{12}\left(\Delta_{1}^{*}\Delta_{2}+\Delta_{2}^{*}\Delta_{1}\right)\right)\mathbf{A}. \tag{5}\] The microscopic expressions for the coefficients of the GL free energy functional are given in Appendix A. Noteworthy, the coefficients \(b_{12}\), \(c_{ij}\) and \(k_{12}\) in Eq. (4) are absent in the case of a clean two-band superconductor. Their emergence is the result of the contribution of the interband impurities, whose strength is characterized by the interband scattering rate \(\Gamma\), which is proportional to the impurity concentration. The special geometry of the system under consideration allows us to reduce the analysis of the current states to a one-dimensional problem and to neglect the self-magnetic field of the wire. In the absence of external magnetic fields the gauge \(\mathbf{A}=0\) is chosen. From the physical point of view, the derivatives of the order parameter phases \(\frac{d\chi_{1}}{dx}\) and \(\frac{d\chi_{2}}{dx}\) determine the superfluid momenta of the Cooper pairs. For a conventional superconductor or the so-called Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) superconductor, the modulation of the order parameter is described, in the simplest cases, by a _single_ plane wave (FF state) or by a simple cos-term as its real part (LO state). Here, for the thin wire or channel, the depairing current, which is detrimental to superconductivity, plays a similar role as the strong magnetic field in the bulk FFLO states: it causes modulations of the order parameters that minimize its detrimental influence. This common effect rests on the special equivalence of voltage- and current-driven responses in the present experimental situation [48]. We should make an important remark about the present form of the GL free energy. Since our analysis is based on Eq. (1), this approach is applicable to systems in the so-called voltage-driven regime. For the current-driven regime, the study of current states should be performed by means of the Gibbs free energy with the additional contribution of the current \(I\), because the phase difference between the ends of the wire (i.e., the parameter \(q\)) becomes a dependent variable and is determined by the current \(I\) [48; 49]. The GL equations for the order parameter will be derived in the following sections for different states. Figure 1: The long thin wire with the thickness \(d\) and the length \(L\) is the proposed experimental system under consideration to reveal its multiple-\(q\) character by measuring the current. Thereby it is assumed that the length of the wire or channel much exceeds the coherence length of the dirty two-band superconductor, \(L\gg\xi(T)\), to guarantee the rare (usually _nonuniversal_) equivalence of voltage- and current-driven responses (see Ref. [48]). ### The GL-formalism for the BTRS state Assuming plane-wave solutions with a common superfluid momentum \(q=d\chi_{i}/dx\), the calculation of the functional derivatives \(\partial F/\partial\phi=0\), \(\partial F/\partial|\Delta_{1}|=0\) and \(\partial F/\partial|\Delta_{2}|=0\) leads to equations for \(|\Delta_{i}|\) and allows us to obtain solutions for their phase difference \(\phi\): \[\left(a_{11}+\frac{k_{11}\hbar^{2}q^{2}}{2}\right)|\Delta_{1}|+b_{11}|\Delta_{1}|^{3}+b_{12}\left|\Delta_{1}\right|\left|\Delta_{2}\right|^{2}+\left(a_{12}+\frac{k_{12}\hbar^{2}q^{2}}{2}+3c_{11}|\Delta_{1}|^{2}+c_{22}|\Delta_{2}|^{2}\right)|\Delta_{2}|\cos\phi=0, \tag{6}\] \[\left(a_{22}+\frac{k_{22}\hbar^{2}q^{2}}{2}\right)|\Delta_{2}|+b_{22}|\Delta_{2}|^{3}+b_{12}|\Delta_{1}|^{2}\left|\Delta_{2}\right|+\left(a_{12}+\frac{k_{12}\hbar^{2}q^{2}}{2}+c_{11}|\Delta_{1}|^{2}+3c_{22}|\Delta_{2}|^{2}\right)|\Delta_{1}|\cos\phi=0, \tag{7}\]
This common effect rests on the special equivalence of voltage-driven and current-driven responses in the present experimental situation [48]. We should make an important remark about the present form of the GL free energy. Since our analysis is based on Eq. (1), this approach is applicable to systems in the so-called voltage-driven regime. In the current-driven regime the study of current states should be performed by means of the Gibbs free energy with an additional contribution of the current \(I\), because the phase difference between the ends of the wire (i.e., the parameter \(q\)) becomes a dependent variable and is determined by the current \(I\)[48; 49]. The GL equations for the order parameter will be derived in the following sections for the different states. Figure 1: The long thin wire with the thickness \(d\) and the length \(L\) is the proposed experimental system under consideration to reveal its multiple-\(q\) character by measuring the current. It is thereby assumed that the length of the wire or channel greatly exceeds the coherence length of the dirty two-band superconductor, \(L\gg\xi(T)\), to guarantee the rare (usually _nonuniversal_) equivalence of voltage-driven and current-driven responses (see Ref. [48]). ### The GL formalism for the BTRS state The calculation of the functional derivatives \(\partial F/\partial\phi=0\), \(\partial F/\partial|\Delta_{1}|=0\) and \(\partial F/\partial|\Delta_{2}|=0\) leads to equations for \(|\Delta_{i}|\) and allows us to obtain solutions for their phase difference \(\phi\): \[\left(a_{11}+\frac{k_{11}\hbar^{2}q^{2}}{2}\right)|\Delta_{1}|+b_{11}|\Delta_{ 1}|^{3}+b_{12}\left|\Delta_{1}\right|\left|\Delta_{2}\right|^{2}+\left(a_{12}+ \frac{k_{12}\hbar^{2}q^{2}}{2}+3c_{11}|\Delta_{1}|^{2}+c_{22}|\Delta_{2}|^{2} \right)|\Delta_{2}|\cos\phi=0, \tag{6}\] \[\left(a_{22}+\frac{k_{22}\hbar^{2}q^{2}}{2}\right)|\Delta_{2}|+b_{22}|\Delta_{ 2}|^{3}+b_{12}|\Delta_{1}|^{2}\left|\Delta_{2}\right|+\left(a_{12}+\frac{k_{12 }\hbar^{2}q^{2}}{2}+c_{11}|\Delta_{1}|^{2}+3c_{22}|\Delta_{2}|^{2}\right)| \Delta_{1}|\cos\phi=0, \tag{7}\] \[\sin\phi=0\Rightarrow\phi=0,\,\phi=\pi, \tag{8}\] which correspond to \(s_{++}\) and \(s_{\pm}\) symmetry, respectively. The most interesting case is the BTRS solution with an arbitrary \(\phi\) and the accompanying chiral symmetry \(s_{\pm}+is_{++}\): \[\cos\phi=-\frac{k_{12}\hbar^{2}q^{2}+2\left(a_{12}+c_{11}|\Delta_{1}|^{2}+c_{2 2}|\Delta_{2}|^{2}\right)}{4c_{12}\left|\Delta_{1}\right|\left|\Delta_{2} \right|}, \tag{9}\] which gives rise to two solutions for the phase difference and consequently leads to a kind of frustration, with a two-fold degenerate ground state and a spontaneously broken \(\mathbb{Z}_{2}\) time-reversal symmetry. For \(q=0\) one can derive analytical solutions for the amplitudes of the superconducting order parameters. 
There are two solutions, which read \[\left|\Delta_{1}^{(0)}\right|^{2}=-\frac{a_{11}b_{22}c_{12}-a_{11}c_{22}^{2}+ a_{12}b_{12}c_{22}-a_{12}b_{22}c_{11}-a_{12}c_{12}c_{22}-a_{22}b_{12}c_{12}+a_{22}c _{11}c_{22}+a_{22}c_{12}^{2}}{b_{11}b_{22}c_{12}-b_{11}c_{22}^{2}-b_{12}^{2}c_ {12}+2b_{12}c_{11}c_{22}+2b_{12}c_{12}^{2}-b_{22}c_{11}^{2}-2c_{11}c_{12}c_{22} -c_{12}^{3}}, \tag{10}\] \[\left|\Delta_{2}^{(0)}\right|^{2}=\frac{a_{11}b_{12}c_{12}-a_{11}c_{11}c_{22 }-a_{11}c_{12}^{2}+a_{12}b_{11}c_{22}-a_{12}b_{12}c_{11}+a_{12}c_{11}c_{12}-a_ {22}b_{11}c_{12}+a_{22}c_{11}^{2}}{b_{11}b_{22}c_{12}-b_{11}c_{22}^{2}-b_{12}^{ 2}c_{12}+2b_{12}c_{11}c_{22}+2b_{12}c_{12}^{2}-b_{22}c_{11}^{2}-2c_{11}c_{12}c _{22}-c_{12}^{3}}, \tag{11}\] while for \(q\neq 0\) \[\begin{split}\left|\Delta_{1}\right|^{2}(q)&=\left|\Delta_{1}^{(0)}\right|^{2}\\ &-\frac{1}{2}\,\frac{-b_{12}c_{12}k_{22}+b_{12}c_{22}k_{12}-b_{22}c_{11}k_{12}+b_{22}c_{12}k_{11}+c_{11}c_{22}k_{22}+c_{12}^{2}k_{22}-c_{12}c_{22}k_{12}-c_{22}^{2}k_{11}}{b_{11}b_{22}c_{12}-b_{11}c_{22}^{2}-b_{12}^{2}c_{12}+2b_{12}c_{11}c_{22}+2b_{12}c_{12}^{2}-b_{22}c_{11}^{2}-2c_{11}c_{12}c_{22}-c_{12}^{3}}\,q^{2},\end{split} \tag{12}\] \[\begin{split}\left|\Delta_{2}\right|^{2}(q)&=\left|\Delta_{2}^{(0)}\right|^{2}\\ &-\frac{1}{2}\,\frac{-b_{11}c_{12}k_{22}+b_{11}c_{22}k_{12}-b_{12}c_{11}k_{12}+b_{12}c_{12}k_{11}+c_{11}^{2}k_{22}+c_{11}c_{12}k_{12}-c_{11}c_{22}k_{11}-c_{12}^{2}k_{11}}{b_{11}b_{22}c_{12}-b_{11}c_{22}^{2}-b_{12}^{2}c_{12}+2b_{12}c_{11}c_{22}+2b_{12}c_{12}^{2}-b_{22}c_{11}^{2}-2c_{11}c_{12}c_{22}-c_{12}^{3}}\,q^{2}.\end{split} \tag{13}\] The subsequent substitution of the expression for the phase difference in the BTRS state, given by Eq. (9), into the free energy yields a fourth-order polynomial in \(q\): \[\begin{split}\frac{F}{L}=F_{0}-\frac{1}{c_{12}}\left[\frac{k_{12}^{2} \hbar^{4}q^{4}}{8}+\left(\left(c_{11}k_{12}-c_{12}k_{11}\right)|\Delta_{1}|^{2 }+\left(c_{22}k_{12}-c_{12}k_{22}\right)|\Delta_{2}|^{2}+a_{12}k_{12}\right) \frac{\hbar^{2}q^{2}}{2}\right.\\ \left.+\left(\frac{1}{2}a_{12}+c_{11}|\Delta_{1}|^{2}+c_{22}|\Delta_{2}|^{2} \right)a_{12}+\frac{1}{2}\Big{(}c_{11}|\Delta_{1}|^{2}+c_{22}|\Delta_{2}|^{2} \Big{)}^{2}+c_{12}^{2}|\Delta_{1}|^{2}|\Delta_{2}|^{2}\right].\end{split} \tag{14}\] Eqs. (6), (7) and (9) must be supplemented by an expression for the total current: \[\frac{I}{L}=2e\hbar k_{11}|\Delta_{1}|^{2}q+2e\hbar k_{22}|\Delta_{2}|^{2}q-e \hbar\frac{k_{12}}{c_{12}}q\left(k_{12}\hbar^{2}q^{2}+2\left(a_{12}+c_{11}| \Delta_{1}|^{2}+c_{22}|\Delta_{2}|^{2}\right)\right). \tag{15}\] 
### The GL formalism for the homogeneous state For the homogeneous case we have \[\begin{split}\frac{F}{L}=F_{0}+\left(\frac{1}{2}k_{11}\left|\Delta_{1} \right|^{2}+\frac{1}{2}k_{22}\left|\Delta_{2}\right|^{2}+k_{12}\left|\Delta_ {1}\right|\left|\Delta_{2}\right|\cos\phi\right)\hbar^{2}q^{2}\\ +2\left(a_{12}\left|\Delta_{1}\right|\left|\Delta_{2}\right|+c_{11}\left| \Delta_{1}\right|^{3}\left|\Delta_{2}\right|+c_{22}\left|\Delta_{1}\right| \left|\Delta_{2}\right|^{3}\right)\cos\phi+c_{12}\left|\Delta_{1}\right|^{2} \left|\Delta_{2}\right|^{2}\cos 2\phi.\end{split} \tag{16}\] Correspondingly, the GL equations for the order parameters have the form \[\begin{split}\left(a_{11}+\frac{k_{11}\hbar^{2}q^{2}}{2}\right)\left| \Delta_{1}\right|+b_{11}\left|\Delta_{1}\right|^{3}+b_{12}\left|\Delta_{1} \right|\left|\Delta_{2}\right|^{2}+\left(a_{12}+\frac{k_{12}\hbar^{2}q^{2}}{2} +3c_{11}\left|\Delta_{1}\right|^{2}+c_{22}\left|\Delta_{2}\right|^{2} \right)\left|\Delta_{2}\right|\cos\phi\\ +c_{12}\left|\Delta_{1}\right|\left|\Delta_{2}\right|^{2}\cos 2\phi=0, \end{split} \tag{17}\] \[\begin{split}\left(a_{22}+\frac{k_{22}\hbar^{2}q^{2}}{2}\right)\left| \Delta_{2}\right|+b_{22}\left|\Delta_{2}\right|^{3}+b_{12}\left|\Delta_{1} \right|^{2}\left|\Delta_{2}\right|+\left(a_{12}+\frac{k_{12}\hbar^{2}q^{2}}{2 }+c_{11}\left|\Delta_{1}\right|^{2}+3c_{22}\left|\Delta_{2}\right|^{2} \right)\left|\Delta_{1}\right|\cos\phi\\ +c_{12}\left|\Delta_{1}\right|^{2}\left|\Delta_{2}\right|\cos 2\phi=0, \end{split} \tag{18}\] while for the total current \[\frac{I}{L}=2e\hbar k_{11}\left|\Delta_{1}\right|^{2}q+2e\hbar k_{22}\left| \Delta_{2}\right|^{2}q+4e\hbar k_{12}\left|\Delta_{1}\right|\left|\Delta_{2} \right|q. \tag{19}\] ### The GL formalism for the multiple-\(q\) state Strictly speaking, there are no convincing arguments against the assumption that the superconducting momenta of the two condensates in a two-component superconductor can have different values, rather than the single value introduced in the previous section for the homogeneous state. This implies that we can represent the order parameters as plane waves with different wave vectors \(q_{1}\) and \(q_{2}\): \[\Delta_{i}=\left|\Delta_{i}\right|\exp\left(iq_{i}x\right). \tag{20}\] A similar approach with the introduction of two competing wave vectors has been used for the study of the phase diagram of Pauli-limited two-band superconductors [36; 37]. There the emergence of the exotic FFLO state was predicted. The latter is divided into two states by a first-order transition: the \(Q_{1}\)- and \(Q_{2}\)-FFLO states at the higher and the lower magnetic field, respectively. Based on this similarity we term the inhomogeneous current state under consideration the "multiple-momenta state", which we abbreviate for convenience as the multiple-\(q\) state. The term "multiple" was deliberately chosen because the state with two momenta can be generalized to the case of a superconductor in which the order parameter has more than two components. For lack of space and for simplicity, the present analytical consideration rests, so far, on the ansatz given by Eq. (20); for the same reason we have not yet considered another possible _intrinsic_, closely related cosinusoidal modulation of the order parameter, as in the LO phase. 
Moreover, we have also not addressed the interplay with other modulations, _external_ to our proposed mechanism, such as pair density waves (PDW) [see, for instance, the comprehensive review by Agterberg _et al._[24]], the consideration of which represents a separate problem for future research. Anyhow, for all these interesting cases the problem of depairing currents in the corresponding one-band cases should be addressed first, which, to the best of our knowledge, has not yet been done. At first glance this inhomogeneous multiple-\(q\) state is reminiscent of phase soliton states in a multi-component superconductor [25; 26; 27; 28; 29; 30]. Indeed, for the case of thin-walled two-band superconducting cylinders with radius \(R\), the phase soliton is described by the sine-Gordon equation with the solution in terms of Jacobi elliptic functions [27] \[\phi_{n}\left(\varphi\right)=\frac{\left(1+\text{sgn }a_{12}\right)\pi}{2}+2\,\text{am}\left(\frac{nK\left(k_{n}\right)}{\pi}\left(\varphi-\varphi_{n0}\right),k_{n}\right), \tag{21}\] where \(\text{am}(u)\) denotes the elliptic amplitude, \(K\left(k\right)\) is the complete elliptic integral of the first kind, \(\varphi\) is the polar coordinate, \(\varphi_{n0}\) are arbitrary constants, and the \(k_{n}\) (\(n=\pm 1,\pm 2,\ldots\)) satisfy the equations \[\left|n\right|k_{n}K\left(k_{n}\right)=\frac{\pi R}{l}. \tag{22}\] Here the parameter \(l\) is defined by the internal properties of a two-band superconductor and is inversely proportional to the square root of the interband interaction coefficient \(a_{12}\), \(l\simeq 1/\sqrt{\left|a_{12}\right|}\). Based on Eq. (21) one can extract a particular example of a _non_-soliton topological solution corresponding to the physical case of a very weak interband coupling. This solution can be obtained by a series expansion for \(k_{n}\to 0\) (\(\frac{R}{l}\ll 1\)): \[\phi_{n}\left(\varphi\right)\approx\frac{\left(1+\text{sgn }a_{12}\right)\pi}{2}+n\left( \varphi-\varphi_{0}\right), \tag{23}\] which for \(a_{12}<0\) yields the dependence \[\phi_{n}\left(\varphi\right)\approx n\left(\varphi-\varphi_{0}\right), \tag{24}\] that is similar to the phase difference introduced above, \(\phi(x)=(q_{2}-q_{1})x\) (see Eq. (20)), where the discrete number \(n\) formally plays the role of the continuous variable \(q_{2}-q_{1}\) and the shifted polar coordinate \(\varphi-\varphi_{0}\) is replaced by the Cartesian coordinate \(x\) according to the geometry of the wire system under consideration (see Fig. 1). The substitution of Eq. (20) into Eq. (1) and subsequent integration over \(x\) gives the GL free energy: \[\begin{array}{c}\frac{F}{L}=F_{0}+\frac{1}{2}k_{11}\hbar^{2}|\Delta_{1}|^{2 }q_{1}^{2}+\frac{1}{2}k_{22}\hbar^{2}|\Delta_{2}|^{2}q_{2}^{2}+k_{12}\hbar^{2 }\left|\Delta_{1}\right|\left|\Delta_{2}\right|q_{1}q_{2}\frac{\sin\left(\left( q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L}\\ +2\left(a_{12}\left|\Delta_{1}\right|\left|\Delta_{2}\right|+c_{11}|\Delta_{1 }|^{3}\left|\Delta_{2}\right|+c_{22}\left|\Delta_{1}\right|\left|\Delta_{2} \right|^{3}\right)\frac{\sin\left(\left(q_{1}-q_{2}\right)L\right)}{\left(q_{ 1}-q_{2}\right)L}+\frac{1}{2}c_{12}|\Delta_{1}|^{2}|\Delta_{2}|^{2}\frac{\sin \left(2\left(q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L},\end{array} \tag{25}\] where \[F_{0}=a_{11}|\Delta_{1}|^{2}+a_{22}|\Delta_{2}|^{2}+\frac{1}{2}b_{11}|\Delta_ {1}|^{4}+\frac{1}{2}b_{22}|\Delta_{2}|^{4}+b_{12}|\Delta_{1}|^{2}|\Delta_{2}|^ {2}. \tag{26}\] 
After that we can perform the variation procedure and obtain the GL equations \[\begin{array}{c}\left(a_{11}+\frac{k_{11}\hbar^{2}q_{1}^{2}}{2}\right)| \Delta_{1}|+b_{11}|\Delta_{1}|^{3}+b_{12}\left|\Delta_{1}\right|\left|\Delta_{ 2}\right|^{2}+\left(a_{12}+\frac{k_{12}\hbar^{2}q_{1}q_{2}}{2}+3c_{11}|\Delta_ {1}|^{2}+c_{22}|\Delta_{2}|^{2}\right)|\Delta_{2}|\frac{\sin\left(\left(q_{1} -q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L}\\ +\frac{1}{2}c_{12}\left|\Delta_{1}\right|\left|\Delta_{2}\right|^{2}\frac{ \sin\left(2\left(q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L}=0, \end{array} \tag{27}\] \[\begin{array}{c}\left(a_{22}+\frac{k_{22}\hbar^{2}q_{2}^{2}}{2}\right)| \Delta_{2}|+b_{22}|\Delta_{2}|^{3}+b_{12}|\Delta_{1}|^{2}\left|\Delta_{2} \right|+\left(a_{12}+\frac{k_{12}\hbar^{2}q_{1}q_{2}}{2}+c_{11}|\Delta_{1}|^{ 2}+3c_{22}|\Delta_{2}|^{2}\right)|\Delta_{1}|\frac{\sin\left(\left(q_{1}-q_{2} \right)L\right)}{\left(q_{1}-q_{2}\right)L}\\ +\frac{1}{2}c_{12}|\Delta_{1}|^{2}\left|\Delta_{2}\right|\frac{\sin\left(2 \left(q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L}=0.\end{array} \tag{28}\] In the case of a very long channel, \(L\rightarrow\infty\), one can neglect the terms with the sine function and find an approximate analytical solution of Eqs. (27) and (28): \[\left|\Delta_{1}\right|^{2}\left(q_{1}\right)=\frac{a_{22}b_{12}-a_{11}b_{22} -\left(b_{22}k_{11}-b_{12}k_{22}\right)q_{1}^{2}}{b_{11}b_{22}-b_{12}^{2}}, \tag{29}\] \[\left|\Delta_{2}\right|^{2}\left(q_{2}\right)=\frac{a_{11}b_{12}-a_{22}b_{11} -\left(b_{11}k_{22}-b_{12}k_{11}\right)q_{2}^{2}}{b_{11}b_{22}-b_{12}^{2}}. \tag{30}\] For the characterization of the multiple-\(q\) state it is necessary to provide the expression for the total current \[\frac{I}{L}=2e\hbar k_{11}|\Delta_{1}|^{2}q_{1}+2e\hbar k_{22}|\Delta_{2}|^{2 }q_{2}+2e\hbar k_{12}\left|\Delta_{1}\right|\left|\Delta_{2}\right|\left(q_{1} +q_{2}\right)\frac{\sin\left(q_{2}-q_{1}\right)L}{\left(q_{2}-q_{1}\right)L}. \tag{31}\] ## III Reference points on the phase diagram Eqs. (8) and (9) for \(\phi\), together with the solutions for the order parameter from Eqs. (10) and (11) for the BTRS state, allow us to determine the phase difference as a function of the temperature and the interband scattering rate \(\Gamma\) in the equilibrium phase, when \(q=0\) and \(q_{1}=q_{2}=0\). Exploiting the microscopic expressions for the coefficients of the GL free energy provided in Appendix A, one can construct the phase diagram with BTRS and non-BTRS domains (a minimal numerical sketch of this classification step is given below). Figure 2 focuses on the phase diagram of a dirty two-band superconductor with the intraband constants \(\lambda_{11}=0.35\), \(\lambda_{22}=0.347\) and weak repulsive interband interaction constants \(\lambda_{12}=\lambda_{21}=-0.01\), where the small cone-like colorful part illustrates the BTRS state with \(\phi\neq 0\), while the large blue and red regions correspond to the non-BTRS states with \(\phi=\pi\) and \(\phi=0\), respectively. To demonstrate the variety of current states and the phase transitions between them in a superconducting quasi-one-dimensional wire, we choose three reference points on this phase diagram, corresponding to different symmetries of the order parameter, at the temperature \(T/T_{c0}=0.7\). These starting points are marked on the phase diagram by the filled black square, circle and diamond (see Fig. 2) and reflect three different types of symmetry of the dirty two-band superconductor. 
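The classification of a single point \((T,\Gamma)\) reduces to solving the amplitude equations at \(q=0\) and evaluating \(\cos\phi\) from Eq. (9): the BTRS domain is the region where \(|\cos\phi|<1\). A minimal Python sketch of this step is given below; the GL coefficient values are illustrative placeholders, not the ones behind Fig. 2, which follow from the Matsubara sums of Appendix A, and \(c_{12}>0\) is assumed so that clipping \(\cos\phi\) selects the correct non-BTRS branch.

```python
import numpy as np
from scipy.optimize import fsolve

hbar = 1.0  # dimensionless units throughout this sketch

# Placeholder GL coefficients at one (T, Gamma) point; in the actual
# calculation they come from the Matsubara sums of Appendix A.
a11, a22, a12 = -1.0, -0.95, 0.05
b11, b22, b12 = 1.0, 1.1, 0.10
c11, c22, c12 = 0.02, 0.02, 0.01  # c12 > 0 assumed

def cos_phi(d1, d2, q=0.0, k12=0.05):
    """Eq. (9): cos(phi) on the BTRS branch (unclipped)."""
    return -(k12 * hbar**2 * q**2 + 2.0 * (a12 + c11 * d1**2 + c22 * d2**2)) \
           / (4.0 * c12 * d1 * d2)

def amplitude_eqs(x):
    """Stationarity conditions at q = 0, Eqs. (17)-(18), with phi chosen
    self-consistently: Eq. (9) inside the BTRS window, phi = 0 or pi outside."""
    d1, d2 = x
    cp = np.clip(cos_phi(d1, d2), -1.0, 1.0)
    c2p = 2.0 * cp**2 - 1.0                      # cos(2 phi)
    e1 = a11*d1 + b11*d1**3 + b12*d1*d2**2 \
         + (a12 + 3*c11*d1**2 + c22*d2**2)*d2*cp + c12*d1*d2**2*c2p
    e2 = a22*d2 + b22*d2**3 + b12*d1**2*d2 \
         + (a12 + c11*d1**2 + 3*c22*d2**2)*d1*cp + c12*d1**2*d2*c2p
    return [e1, e2]

d1, d2 = fsolve(amplitude_eqs, x0=[1.0, 1.0])
cp = cos_phi(d1, d2)
if cp <= -1.0:
    label = "s_+-  (phi = pi, non-BTRS)"
elif cp >= 1.0:
    label = "s_++  (phi = 0, non-BTRS)"
else:
    label = f"BTRS, phi = {np.degrees(np.arccos(cp)):.1f} deg"
print(label)
```

Scanning such a classification over a grid of \((T,\Gamma)\) values reproduces the structure of Fig. 2, with the BTRS domain appearing wherever the unclipped \(\cos\phi\) falls strictly between \(-1\) and \(1\).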
For \(\Gamma=0.07T_{c0}\) (the filled black square) we have \(s_{\pm}\) pairing symmetry and a non-BTRS state (\(\phi=\pi\)). The point \(\Gamma=0.07982T_{c0}\) (the filled black circle) is located on the upper edge of the BTRS state with \(s_{\pm}+is_{++}\) chiral symmetry and \(\phi\approx 2\pi/23\). Finally, for \(\Gamma=0.09T_{c0}\) (the filled black diamond) a non-BTRS state with \(s_{++}\) symmetry and \(\phi=0\) is realized again. It is important to note that our choice of the temperature \(T/T_{c0}=0.7\), as well as the lower bound of the allowed temperature range specified in the phase diagram shown in Fig. 2, may be restricted by the range of applicability of the GL theory for a dirty two-band superconductor; outside this range the microscopic theory should be applied for the description of the phase diagram [50]. However, here we consider temperatures which are sufficiently close to the \(T_{c}\) values for the above selected values of the interband scattering rate (see Appendix B and Figure 5 therein). Therefore, we suggest that our model calculations remain within the validity of the phenomenological GL approach. Figure 2: The phase diagram for the phase difference \(\phi\) (in radian units) as a function of the interband scattering rate \(\Gamma\) and the temperature \(T\) with the set of intra- and interband constants \(\lambda_{11}=0.35\), \(\lambda_{22}=0.347\), \(\lambda_{12}=\lambda_{21}=-0.01\). The narrow colorful domain represents the BTRS state with \(s_{\pm}+is_{++}\) symmetry; blue and red domains stand for the non-BTRS state with \(s_{\pm}\) and \(s_{++}\) symmetry, respectively. The black filled square (\(\Gamma/T_{c0}=0.07\)), circle (\(\Gamma/T_{c0}=0.07982\)) and diamond (\(\Gamma/T_{c0}=0.09\)) illustrate the reference points for the consideration of transitions between current states at \(T/T_{c0}=0.7\). For the sake of clarity the inset shows a more extended view up to higher temperatures, where the BTRS domain narrows. \(T_{c0}\) denotes the critical temperature of the reference parent clean system. ## IV Phase transitions As noted in the introduction, the emergence of additional degrees of freedom of the order parameter gives rise to a wealth of current states in multicomponent superconducting systems. In the case of two components, we have already seen that the superconducting momenta of the Cooper pairs of each component admit both equal and different values, forming at least two homogeneous (BTRS and non-BTRS) states and one inhomogeneous (multiple-\(q\)) state. The cornerstone for understanding the mechanisms of possible switching and phase transitions between these states is the behavior of the GL free energy as a function of the superconducting momentum \(q\) or momenta \(q_{1}\), \(q_{2}\). Obviously, this can be done by solving the equations for the order parameters derived for each state and then substituting them into the expressions for the GL energy of the superconducting wire. To this end we would like to emphasize that for the BTRS state we use Eqs. (12), (13) and (14); for the non-BTRS state the governing equations are Eqs. (17), (18) and (16). Finally, the calculations of the multiple-\(q\) state exploit Eqs. (27), (28) and (25). For the latter case we need to choose and fix certain values of \(q_{2}\) and consider \(q_{1}\) as the analogue of the \(q\) variable of the BTRS and non-BTRS states. 
Such a trick allows us to compare the energies of the multiple-\(q\) state, characterized by the two superfluid momenta \(q_{1}\) and \(q_{2}\), with the energies of the homogeneous states with the unique superfluid momentum \(q\). In order to exclude the Josephson effect, we consider a very long channel with a length exceeding the coherence lengths and the London penetration depths of each component of the order parameter. From the numerical point of view, here and hereafter we set the channel length equal to \(L=50\xi_{10}\), where \(\xi_{10}\) is the coherence length of the first component in the absence of the interband interaction at \(T=0\). The results of our calculations are summarized and visualized in Figure 3. First of all, in these three figures, corresponding to the reference points selected earlier (see the black filled square, circle and diamond in Figure 2), we identify the energy curves of the homogeneous states, namely BTRS and non-BTRS. The black curves display the non-BTRS state energies as a function of the superconducting momentum \(q\) when the phase difference between the order parameters is \(\phi=\pi\) (\(s_{\pm}\) pairing symmetry), as in Fig. 3(a) and (b), or \(\phi=0\) (\(s_{++}\) pairing symmetry), as in Fig. 3(c). The GL energy behavior of the BTRS state, in which \(s_{\pm}+is_{++}\) chiral symmetry occurs, is depicted by the blue line in Fig. 3(b). The variety of energy dependences of the inhomogeneous multiple-\(q\) state, in which the superconducting momenta of the Cooper pairs of the two components may differ, is shown by the remaining colored curves, where we measure the GL energy as a function of \(q_{1}\) (this refinement is additionally shown on the horizontal axis of the graphs as the equality \(q=q_{1}\)). The selection of the momentum values \(q_{2}=0\) (magenta line), \(q_{2}=0.05\) (cyan line, in Fig. 3(c) only), \(q_{2}=0.1\) (green line) and \(q_{2}=0.15\) (red line) is arbitrary and serves solely to demonstrate the variety of transitions, which will be discussed below. In other words, without loss of generality we could choose other values of \(q_{2}\) to compare the energies of the multiple-\(q\) state with its homogeneous counterparts (a numerical sketch of such an energy scan is given after the figure caption). Figure 3: GL free energy of a quasi-one-dimensional wire for three reference points as a function of the superfluid momentum \(q\), or \(q_{1}\) for the case of the multiple-\(q\) state with a given value of \(q_{2}\). In figure (a) the curves are plotted for \(\Gamma=0.07T_{c0}\) and correspond to the non-BTRS state with \(s_{\pm}\) symmetry (black line) and the multiple-\(q\) state with \(q_{2}=0\) (magenta line), \(q_{2}=0.1\) (green line) and \(q_{2}=0.15\) (red line). In figure (b) a BTRS state with chiral \(s_{\pm}+is_{++}\) symmetry (blue line), a non-BTRS state with \(s_{\pm}\) symmetry (black line) and a multiple-\(q\) state with \(q_{2}=0\) (magenta line), \(q_{2}=0.1\) (green line) and \(q_{2}=0.15\) (red line) are depicted for \(\Gamma=0.07982T_{c0}\). Figure (c) represents a non-BTRS state with \(s_{++}\) symmetry (black line) and a multiple-\(q\) state with \(q_{2}=0\) (magenta line), \(q_{2}=0.05\) (cyan line), \(q_{2}=0.1\) (green line) and \(q_{2}=0.15\) (red line) when \(\Gamma=0.09T_{c0}\). Solid lines for all curves refer to stable regions of the above mentioned states, while dotted lines indicate saddle or unstable regions. The ratio of diffusion coefficients \(D_{2}/D_{1}=2\). 
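In the long-channel limit the energy scan of the multiple-\(q\) state can be sketched compactly, since the amplitudes are given in closed form by Eqs. (29)-(30) and the energy by Eq. (25). The Python sketch below assumes this limit; the coefficient values are again illustrative placeholders rather than the ones used for Fig. 3.

```python
import numpy as np

hbar, L = 1.0, 50.0   # channel length in units of xi_10, as in the text
# Placeholder GL coefficients (illustrative, not the paper's values).
a11, a22, a12 = -1.0, -0.95, 0.05
b11, b22, b12 = 1.0, 1.1, 0.10
c11, c22, c12 = 0.02, 0.02, 0.01
k11, k22, k12 = 1.0, 0.9, 0.05

def sinc(u):
    """sin(u)/u with the u -> 0 limit handled (np.sinc is sin(pi x)/(pi x))."""
    return np.sinc(u / np.pi)

def amplitudes(q1, q2):
    """Long-channel closed forms, Eqs. (29)-(30), floored at zero."""
    det = b11 * b22 - b12**2
    d1sq = (a22 * b12 - a11 * b22 - (b22 * k11 - b12 * k22) * q1**2) / det
    d2sq = (a11 * b12 - a22 * b11 - (b11 * k22 - b12 * k11) * q2**2) / det
    return np.sqrt(max(d1sq, 0.0)), np.sqrt(max(d2sq, 0.0))

def free_energy(q1, q2):
    """F/L of the multiple-q state, Eq. (25), with F0 from Eq. (26)."""
    d1, d2 = amplitudes(q1, q2)
    u = (q1 - q2) * L
    f0 = a11*d1**2 + a22*d2**2 + 0.5*b11*d1**4 + 0.5*b22*d2**4 + b12*d1**2*d2**2
    return (f0
            + 0.5*k11*hbar**2*d1**2*q1**2 + 0.5*k22*hbar**2*d2**2*q2**2
            + k12*hbar**2*d1*d2*q1*q2*sinc(u)
            + 2*(a12*d1*d2 + c11*d1**3*d2 + c22*d1*d2**3)*sinc(u)
            + c12*d1**2*d2**2*sinc(2*u))   # (1/2) c12 ... sin(2u)/u = c12 ... sinc(2u)

for q2 in (0.0, 0.1, 0.15):                # the fixed q2 values used in Fig. 3
    F = [free_energy(q1, q2) for q1 in np.linspace(0.0, 0.8, 9)]
    print(f"q2 = {q2}:", np.round(F, 4))
```

Tracing such curves for several fixed \(q_{2}\) against the homogeneous-state energy, obtained by the same procedure from Eqs. (16)-(18), yields the crossings discussed below.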
An important detail characterizing the energy behavior of the states is the presence of regions on the curves in Figure 3, marked with a dotted line, corresponding to an unstable superconducting state. In turn, the appearance of such regions is determined by the behavior of the minimal eigenvalue of the Hessian matrix formed by the second partial derivatives of the energy for given values of \(q\) (homogeneous state) or \(q_{1}\) and \(q_{2}\) (inhomogeneous state). It is well known that for a function of three or more variables a local minimum is attained when the Hessian is positive definite, namely when all of its eigenvalues are positive. Therefore, if the minimal eigenvalue is positive, then we can make an unambiguous statement about the minimum of the GL energy and, as a consequence, the stability of the given state. We study this problem in detail in Appendix C, on the basis of which the regions of instability are specified as dotted lines. From this additional elucidation stems the full picture of possible phase transitions between the homogeneous and inhomogeneous states of a dirty two-component superconductor. As a starting point, we consider how a system evolves when at \(q=q_{1}=0\) the ground state is the non-BTRS state with \(s_{\pm}\) pairing symmetry (see Fig. 3a). With increasing \(q\) the system moves on the energy scale along the black curve denoting the homogeneous state. Figure 3a shows that at a certain value of \(q=q_{1}\) it crosses the dotted magenta energy line, which, however, is unstable, so the system cannot transit to this inhomogeneous state with \(q_{2}=0\). As a consequence, with a further increase in \(q\), when the black curve crosses the green solid curve, a transition to the multiple-\(q\) state with \(q_{2}=0.1\) occurs. The system energy then evolves along the green curve until it attains the next unstable region (the green dotted curve). After that, one can say that the system either stabilizes here or descends to the lower energy level (magenta curve), where it continues its evolution, moving along this curve to the unstable area. The scenario described is obviously a probabilistic one, since the arbitrary character of the choice of the \(q_{2}\) values for our energy plots was already mentioned above. In this particular example we have only demonstrated how this inhomogeneous multiple-\(q\) state can emerge in a superconducting wire. To describe the evolution of current states and the phase transitions between them in Fig. 3(b), we should note first that even without any multiple-\(q\) state there is a direct possibility of a first-order phase transition between the BTRS and non-BTRS states with increasing \(q\), when the blue curve (BTRS state) meets the black curve (non-BTRS state). The existence of such a topological transition has already been predicted in the case of systems with the Euler characteristic equal to zero (doubly connected systems of the cylinder or ring type, etc.) [51]. Now it can be seen that this prediction can be extended to the case of a quasi-one-dimensional channel as well. The account of the multiple-\(q\) state adds essential features to the evolutionary processes of the system under consideration. 
The most remarkable feature in this case is the coincidence (within the numerical error of the calculations) of the energies at \(q=0\) of the homogeneous BTRS (blue line) and the inhomogeneous multiple-\(q\) (magenta line) states and, as a result, the possibility for the system to evolve with equal probability along two different paths with increasing \(q\). The first path is the choice and motion of the system within the multiple-\(q\) state with \(q_{2}=0\), as long as that state remains stable (solid line). The second one represents the evolution as the homogeneous BTRS state (blue line), with the subsequent transition to the inhomogeneous multiple-\(q\) state with \(q_{2}=0.1\) (green line), which in turn, as \(q=q_{1}\) increases, can exist within the stable region and then may relax into the already known inhomogeneous state with \(q_{2}=0\) (magenta line), which is favorable from an energetic point of view. As for Figure 3c and the probable scenario of the evolution of the current states, the picture looks even richer and more diverse in its phase transitions, owing to the chosen values of \(q_{2}\). First, as in the previous case of Figure 3b, the energies of the homogeneous non-BTRS state (black line) and the inhomogeneous multiple-\(q\) state with \(q_{2}=0\) (magenta line) coincide at \(q=q_{1}=0\) (within the accuracy of our numerical calculation). With increasing \(q=q_{1}\) this allows the system to start to evolve, with equal probability, in either of these states. Second, regardless of the initial state, with increasing \(q_{1}\) the system can undergo a cascade of transitions. For instance, let us consider the non-BTRS state (black line) as the starting stage of the current states. One can easily see that an increase in the momentum \(q=q_{1}\) is accompanied by a switch from the non-BTRS state to the multiple-\(q\) state with \(q_{2}=0.05\) (cyan line). Then the system comes back to the homogeneous non-BTRS state (black line). After that, the transition to another multiple-\(q\) state with \(q_{2}=0.1\) occurs (green line), followed by a fall again to the non-BTRS state. Finally, this cascade is completed by the transition from the non-BTRS state to the multiple-\(q\) state with \(q_{2}=0\) (magenta line), where further evolution is restricted by the condition of stability (the dotted magenta line corresponds to the unstable state). ## V Discussions The existence of such transitions obviously raises the question of how to record them experimentally, or how to outline possible experimental strategies for detecting them. One possibly suitable method for their observation is the study of the so-called depairing current (current-momentum) curves, related to transport properties. In other words, one should measure the dependences of the depairing current, at which the kinetic energy of the superconducting carriers equals the binding energy of the Cooper pairs, i.e., when the value of the current reaches a certain threshold above which superconductivity is suppressed. Such dependences can be calculated based on Eqs. (15), (19) and (31), taking into account the energy dependences (see Fig. 3) and the possible scenarios for the evolution of states described in the previous section (a numerical sketch is given below). Using such a probe one can plot the depairing currents to show the hallmarks of phase transitions between different states, in particular between a homogeneous BTRS or non-BTRS state and the inhomogeneous multiple-\(q\) state (Fig. 4). 
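A minimal sketch of such a current-momentum curve for the multiple-\(q\) state, based on Eq. (31) and the long-channel amplitudes of Eqs. (29)-(30), is given below. The coefficients are again illustrative placeholders, and the grid maximum only approximates the depairing current where the scanned branch is actually stable (stability must be checked separately via the Hessian, as in Appendix C).

```python
import numpy as np

e, hbar, L = 1.0, 1.0, 50.0   # dimensionless units; placeholder coefficients below
a11, a22 = -1.0, -0.95
b11, b22, b12 = 1.0, 1.1, 0.10
k11, k22, k12 = 1.0, 0.9, 0.05

def sinc(u):
    return np.sinc(u / np.pi)          # sin(u)/u

def d_sq(q1, q2):
    """Long-channel amplitudes squared, Eqs. (29)-(30), floored at zero."""
    det = b11 * b22 - b12**2
    d1 = (a22 * b12 - a11 * b22 - (b22 * k11 - b12 * k22) * q1**2) / det
    d2 = (a11 * b12 - a22 * b11 - (b11 * k22 - b12 * k11) * q2**2) / det
    return max(d1, 0.0), max(d2, 0.0)

def current(q1, q2):
    """Eq. (31): total current per unit length of the multiple-q state."""
    d1sq, d2sq = d_sq(q1, q2)
    d1, d2 = np.sqrt(d1sq), np.sqrt(d2sq)
    return (2 * e * hbar * k11 * d1sq * q1
            + 2 * e * hbar * k22 * d2sq * q2
            + 2 * e * hbar * k12 * d1 * d2 * (q1 + q2) * sinc((q2 - q1) * L))

q1 = np.linspace(0.0, 1.0, 201)
for q2 in (0.0, 0.1):
    I = np.array([current(q, q2) for q in q1])
    # Grid maximum as a proxy for the depairing current of this branch.
    print(f"q2 = {q2}: max I/L = {I.max():.4f} at q1 = {q1[I.argmax()]:.3f}")
```

The homogeneous-state curve of Eq. (19) is obtained in the same way, with the amplitudes taken from a numerical solution of Eqs. (17)-(18) instead of the closed forms.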
The interesting feature worth mentioning is the presence of two stable branches of the depairing curve that increase with \(q\), corresponding to the possibility of a bistable state (see the solid lines in Fig. 4a and c). The dotted regions of the depairing curves, as in the case of the energy dependences, display unstable regions that will not be observed in an experiment. They do not carry any physical meaning and thus cannot be measured. The same conclusion applies to the plateau of the depairing curve at large values of \(q=q_{1}\) and the non-zero value of \(q_{2}=0.1\) for the BTRS state case, shown by the dotted line in Figure 4b. This result is an artifact of our assumption of an initially fixed \(q_{2}\neq 0\) and does not reflect the real transport properties of the system, since it belongs to an instability of the superconducting phase. Moreover, for conventional superconductors it has long been established that the monotonically increasing part of the depairing curve corresponds to a stable superconducting state, while the monotonically decreasing part corresponds to an instability. A remarkable counterintuitive feature found here is that within the multiple-\(q\) state the depairing current curves exhibit an increasing segment which, unexpectedly, can be essentially unstable (see the dotted magenta lines in Fig. 4a and c and the dotted green line in Fig. 4b). And vice versa, there are decreasing segments corresponding to stable states (see the solid magenta lines in Fig. 4a and c). It should be noted that the plots are illustrative in nature and are intended to demonstrate the expected noteworthy qualitative characteristics of phase transitions between homogeneous and inhomogeneous states. From a measurement perspective, the experimental verification of the predicted results can be done by means of the so-called pulsed measurement technique, which has already proven itself in the study of superconducting transport properties and in the detection of the depairing current in particular. A technical description of this experimental approach and further details on the depairing current can be found elsewhere (see e.g. [52; 53; 54; 55]). Figure 4: Patterns of possible dependencies of the total current \(I\) in a quasi-one-dimensional wire for three reference points vs. the superfluid momentum \(q\), or \(q_{1}\) for the case of a multiple-\(q\) state with a given value of \(q_{2}\). (a) For the non-BTRS case with \(s_{\pm}\) symmetry (\(\Gamma=0.07T_{c0}\)) the current-momentum dependence consists of the contribution from the \(\phi=\pi\) state (black line) and multiple-\(q\) states with \(q_{2}=0.1\) (green line) and \(q_{2}=0\) (magenta line). The inset in (a) shows the current-momentum dependence for a non-BTRS state without phase transitions between different states. (b) For the BTRS case (\(\Gamma=0.07982T_{c0}\)) the current-momentum dependence can be formed by contributions of the chiral state with \(\phi\neq 0\) (blue line) and of the multiple-\(q\) state with \(q_{2}=0\) (magenta line). In the absence of a multiple-\(q\) state the current-momentum dependence has the form shown in the inset, where the black curve is for the non-BTRS state with \(\phi=\pi\) (\(s_{\pm}\) symmetry). (c) The phase transitions between the non-BTRS state with \(s_{++}\) symmetry (\(\phi=0\)) and the multiple-\(q\) state can be detected by the current-momentum dependence, which may be formed by contributions of the \(\phi=0\) state (black line) and multiple-\(q\) states with \(q_{2}=0.05\) (cyan line), \(q_{2}=0.1\) (green line) and \(q_{2}=0\) (magenta line). The inset in (c) shows the current-momentum dependence for a "pure" non-BTRS state without transitions between different states. In all figures the solid and dotted lines of the corresponding colors specify the stable and unstable states, respectively. The ratio of diffusion coefficients \(D_{2}/D_{1}=2\). ## VI Conclusions In this paper, using the Ginzburg-Landau theory for a two-band superconductor with the interband impurity scattering effect as the underlying model, we have extended the variety of exotic states in multicomponent superconductors. For a quasi-one-dimensional thin wire or channel we predict the emergence of an inhomogeneous multiple-\(q\) state, which is characterized by different superconducting condensate momenta. Based on a particular example of a two-band superconductor with weak repulsive interband interaction, we have revealed that the multiple-\(q\) state can trigger a peculiar cascade of phase transitions between this novel state and the homogeneous BTRS or non-BTRS states, and vice versa. A possibly suitable tool for the detection of this inhomogeneous state has been proposed to verify our theoretical predictions. According to our calculations, a saw-like dependence of the depairing current and the emergence of the bistable current state can be considered as a fingerprint of such a multiple-\(q\) state. A quantitative comparison with the two \(q\)-vectors expected in the FFLO state in the bulk is an interesting issue for future studies. Both phenomena are expected to shed light on the rich response of unconventional multiband superconductors and the complexity of their pair condensates. ###### Acknowledgements. Y.Y. acknowledges support by the CarESS project. ## Appendix A GL coefficients 
The coefficients of the GL free energy functional Eq. (1) are expressed as [46; 47]: \[a_{ii}=N_{i}\left(\frac{\lambda_{jj}}{\det\lambda_{ij}}-2\pi T\sum_{\omega>0}^{ \omega_{c}}\frac{\omega+\Gamma_{ij}}{\omega\left(\omega+\Gamma_{ij}+\Gamma_{ji }\right)}\right)=N_{i}\left(\frac{\lambda_{jj}}{\det\lambda_{ij}}-\frac{1}{ \lambda}+\ln\left(\frac{T}{T_{c}}\right)+\psi\left(\frac{1}{2}+\frac{\Gamma} {\pi T}\right)-\psi\left(\frac{1}{2}\right)\right), \tag{A1}\] \[a_{ij}=-N_{i}\left(\frac{\lambda_{ij}}{\det\lambda_{ij}}+2\pi T\sum_{\omega>0} ^{\omega_{c}}\frac{\Gamma_{ij}}{\omega\left(\omega+\Gamma_{ij}+\Gamma_{ji} \right)}\right), \tag{A2}\] \[b_{ii}=N_{i}\pi T\sum_{\omega>0}^{\omega_{c}}\frac{\left(\omega+\Gamma_{ji} \right)^{4}}{\omega^{3}(\omega+\Gamma_{ij}+\Gamma_{ji})^{4}}+N_{i}\pi T\sum_{ \omega>0}^{\omega_{c}}\frac{\Gamma_{ij}\left(\omega+\Gamma_{ji}\right)\left( \omega^{2}+3\omega\Gamma_{ji}+\Gamma_{ji}^{2}\right)}{\omega^{3}(\omega+\Gamma_{ij} +\Gamma_{ji})^{4}}, \tag{A3}\] \[b_{ij}=-N_{i}\pi T\sum_{\omega>0}^{\omega_{c}}\frac{\Gamma_{ij}\omega^{3}}{ \omega^{3}(\omega+\Gamma_{ij}+\Gamma_{ji})^{4}}+N_{i}\pi T\sum_{\omega>0}^{ \omega_{c}}\frac{\Gamma_{ij}\left(\Gamma_{ij}+\Gamma_{ji}\right)\left(\Gamma_{ ji}\left(\omega+2\Gamma_{ij}\right)+\omega\Gamma_{ij}\right)}{\omega^{3}(\omega+ \Gamma_{ij}+\Gamma_{ji})^{4}}, \tag{A4}\] \[c_{ii}=N_{i}\pi T\sum_{\omega>0}^{\omega_{c}}\frac{\Gamma_{ij}\left(\omega+ \Gamma_{ji}\right)\left(\omega^{2}+\left(\omega+\Gamma_{ji}\right)\left( \Gamma_{ij}+\Gamma_{ji}\right)\right)}{\omega^{3}(\omega+\Gamma_{ij}+\Gamma_{ ji})^{4}}, \tag{A5}\] \[k_{ii}=2N_{i}\pi T\sum_{\omega>0}^{\omega_{c}}\frac{D_{i}\left(\omega+\Gamma_{ ji}\right)^{2}+\Gamma_{ij}\Gamma_{ji}D_{j}}{\omega^{2}(\omega+\Gamma_{ij}+\Gamma_{ ji})^{2}}, \tag{A6}\] \[k_{ij}=2N_{i}\Gamma_{ij}\pi T\sum_{\omega>0}^{\omega_{c}}\frac{D_{i}\left( \omega+\Gamma_{ji}\right)+D_{j}\left(\omega+\Gamma_{ij}\right)}{\omega^{2}( \omega+\Gamma_{ij}+\Gamma_{ji})^{2}}, \tag{A7}\] where \(\omega=(2n+1)\pi T\) are the Matsubara frequencies, \(\omega_{c}\) is the cut-off frequency, \(N_{i}\) are the densities of states at the Fermi level, \(\lambda_{ij}\) and \(\Gamma_{ij}\) are the coupling constants and interband scattering rates that characterize the strength of the interband impurities, and \(D_{i}\) are the diffusion coefficients. For the sake of simplicity and without loss of generality we put \(\lambda_{12}=\lambda_{21}\), \(\Gamma_{12}=\Gamma_{21}\) and \(N_{1}=N_{2}\) in the main paper. Eqs. (A2)-(A7) can be expressed in terms of polygamma functions after the summation procedure. However, we do not provide these expressions due to their cumbersome forms. ## Appendix B The critical temperature as a function of impurities and the strength of the interband interaction The expression for the critical temperature as a function of the impurity scattering rate \(\Gamma\) can be obtained within the linearized Usadel equations generalized for a two-band superconductor and supplemented by the self-consistency equations for the energy gaps (see details in Ref. [56]). 
The final formula, showing the suppression of the critical temperature \(T_{c}\) with respect to the critical temperature \(T_{c0}\) of a clean two-band superconductor without impurities (\(\Gamma=0\)), is given by \[U\left(\frac{\Gamma}{\pi T_{c}}\right)=-\frac{2\left(w\lambda\ln t+\lambda \left(\lambda_{11}+\lambda_{22}\right)-2w\right)\ln t}{2w\lambda\ln t+\lambda \left(\lambda_{11}+\lambda_{22}-\lambda_{12}-\lambda_{21}\right)-2w}, \tag{B1}\] where \(U\left(x\right)=\psi\left(\frac{1}{2}+x\right)-\psi\left(\frac{1}{2}\right)\) is expressed via the digamma function \(\psi(x)\), \(t=T_{c}/T_{c0}\), \(\lambda\) is the largest eigenvalue of the matrix of intra- and interband coefficients, and \(w=\det\lambda_{ij}=\lambda_{11}\lambda_{22}-\lambda_{12}\lambda_{21}\). The numerical solution of Eq. (B1) is shown in Figure 5. It is interesting that, generally speaking, Eq. (B1) has two types of solutions, one of which (the lower curve in Figure 5b) is unphysical. To make this more evident we have marked the filled black square, blue circle and red diamond on the upper curve as the reference points presented earlier on the phase diagram in Figure 2. Figure 5: (a) The critical temperature \(T_{c}\) of a dirty two-band superconductor as a function of the interband scattering rate \(\Gamma\) and the interband interaction coefficient \(\lambda_{12}\) with \(\lambda_{11}=0.35\) and \(\lambda_{22}=0.347\). (b) \(T_{c}\) as a function of \(\Gamma\) with \(\lambda_{11}=0.35\), \(\lambda_{22}=0.347\), \(\lambda_{12}=\lambda_{21}=-0.01\). The values of \(T_{c}\) and \(\Gamma\) are calibrated to the critical temperature \(T_{c0}\) of a two-band superconductor without impurities (\(\Gamma=0\)). The filled black square, blue circle and red diamond correspond to the values \(\Gamma=0.07T_{c0}\), \(\Gamma=0.07982T_{c0}\) and \(\Gamma=0.09T_{c0}\), which are considered in the main paper as reference points for the \(s_{\pm}\) (non-BTRS state), \(s_{\pm}+is_{++}\) (BTRS state) and \(s_{++}\) (non-BTRS state) symmetries of the order parameter. The lower black curve in (b) represents the unphysical solution of Eq. (B1). ## Appendix C Stability conditions ### The BTRS state The problem of the current-state stability is equivalent to the problem of determining whether an extremum of the GL free energy is a minimum, a maximum or a saddle point. Since in the case of the BTRS state Eq. (14) is a function of the three variables \(|\Delta_{1}|\), \(|\Delta_{2}|\) and \(q\), the problem is reduced to the study of the eigenvalues of the Hessian matrix at the critical point. The Hessian matrix has the form \[H_{|\Delta_{1}||\Delta_{2}|q}=\left(\begin{array}{ccc}\frac{\partial^{2}F}{ \partial|\Delta_{1}|^{2}}&\frac{\partial^{2}F}{\partial|\Delta_{1}|\partial| \Delta_{2}|}&\frac{\partial^{2}F}{\partial|\Delta_{1}|\partial q}\\ \frac{\partial^{2}F}{\partial|\Delta_{2}|\partial|\Delta_{1}|}&\frac{\partial^{2}F}{ \partial|\Delta_{2}|^{2}}&\frac{\partial^{2}F}{\partial|\Delta_{2}|\partial q}\\ \frac{\partial^{2}F}{\partial q\partial|\Delta_{1}|}&\frac{\partial^{2}F}{\partial q \partial|\Delta_{2}|}&\frac{\partial^{2}F}{\partial q^{2}}\end{array}\right), \tag{C1}\] 
where \[\frac{\partial^{2}F}{\partial\left|\Delta_{1}\right|^{2}}=\frac{6\left|\Delta_{1} \right|^{2}\left(b_{11}c_{12}-c_{11}^{2}\right)+2\left|\Delta_{2}\right|^{2} \left(b_{12}c_{12}-c_{11}c_{22}-c_{12}^{2}\right)+\left(k_{11}c_{12}-k_{12}c _{11}\right)q^{2}+2\left(a_{11}c_{12}-a_{12}c_{11}\right)}{c_{12}}, \tag{C2}\] \[\frac{\partial^{2}F}{\partial\left|\Delta_{1}\right|\partial\left|\Delta_{2}\right|}= \frac{4\left|\Delta_{1}\right|\left|\Delta_{2}\right|\left(b_{12}c_{12}-c_{1 1}c_{22}-c_{12}^{2}\right)}{c_{12}}, \tag{C3}\] \[\frac{\partial^{2}F}{\partial\left|\Delta_{1}\right|\partial q}=\frac{2\left|\Delta_{1} \right|\left(c_{12}k_{11}-k_{12}c_{11}\right)q}{c_{12}}, \tag{C4}\] \[\frac{\partial^{2}F}{\partial\left|\Delta_{2}\right|^{2}}=\frac{2\left|\Delta _{1}\right|^{2}\left(b_{12}c_{12}-c_{11}c_{22}-c_{12}^{2}\right)+6\left| \Delta_{2}\right|^{2}\left(b_{22}c_{12}-c_{22}^{2}\right)+\left(k_{22}c_{12}- k_{12}c_{22}\right)q^{2}+2\left(a_{22}c_{12}-a_{12}c_{22}\right)}{c_{12}}, \tag{C5}\] \[\frac{\partial^{2}F}{\partial\left|\Delta_{2}\right|\partial q}=\frac{2\left|\Delta_{2 }\right|\left(k_{22}c_{12}-k_{12}c_{22}\right)q}{c_{12}}, \tag{C6}\] \[\frac{\partial^{2}F}{\partial q^{2}}=\frac{2\left|\Delta_{1}\right|^{2}\left( k_{11}c_{12}-k_{12}c_{11}\right)+2\left|\Delta_{2}\right|^{2}\left(k_{22}c_{12}- k_{12}c_{22}\right)-3k_{12}^{2}q^{2}-2a_{12}k_{12}}{2c_{12}}. \tag{C7}\] To classify the stability region of the BTRS state it is enough to determine the sign of the minimal eigenvalue \(l_{min}\) of the Hessian matrix Eq. (C1). The contour plot in Figure 6 shows \(\text{sgn}(l_{min})\) for different values of the interband scattering rate \(\Gamma\) and \(q\). ### The non-BTRS state As in the case of the BTRS state, for the homogeneous non-BTRS state the Hessian matrix is formed by the second partial derivatives of the GL free energy Eq. (16): \[H_{|\Delta_{1}||\Delta_{2}|q}=\left(\begin{array}{ccc}\frac{\partial^{2}F}{ \partial|\Delta_{1}|^{2}}&\frac{\partial^{2}F}{\partial|\Delta_{1}|\partial| \Delta_{2}|}&\frac{\partial^{2}F}{\partial|\Delta_{1}|\partial q}\\ \frac{\partial^{2}F}{\partial|\Delta_{2}|\partial|\Delta_{1}|}&\frac{\partial^{2}F}{ \partial|\Delta_{2}|^{2}}&\frac{\partial^{2}F}{\partial|\Delta_{2}|\partial q}\\ \frac{\partial^{2}F}{\partial q\partial|\Delta_{1}|}&\frac{\partial^{2}F}{\partial q \partial|\Delta_{2}|}&\frac{\partial^{2}F}{\partial q^{2}}\end{array}\right), \tag{C8}\] Figure 6: The contour plot of the minimal eigenvalue of the Hessian matrix as a function of \(q\) and \(\Gamma\) for the BTRS state. The minimum of the GL energy is found in the red region. The blue region may correspond to saddle points or maxima. 
with the following expressions for the second derivatives: \[\frac{\partial^{2}F}{\partial\left|\Delta_{1}\right|^{2}}=2a_{11}+6b_{11}| \Delta_{1}|^{2}+2b_{12}|\Delta_{2}|^{2}+k_{11}q^{2}+12c_{11}\left|\Delta_{1} \right|\left|\Delta_{2}\right|\cos\phi+2c_{12}|\Delta_{2}|^{2}\cos 2\phi, \tag{C9}\] \[\frac{\partial^{2}F}{\partial\left|\Delta_{1}\right|\partial\left|\Delta_{2}\right|}= 4b_{12}\left|\Delta_{1}\right|\left|\Delta_{2}\right|+k_{12}q^{2}+2\left(a_{1 2}+3c_{11}|\Delta_{1}|^{2}+3c_{22}|\Delta_{2}|^{2}\right)\cos\phi+4c_{12}\left| \Delta_{1}\right|\left|\Delta_{2}\right|\cos 2\phi, \tag{C10}\] \[\frac{\partial^{2}F}{\partial\left|\Delta_{1}\right|\partial q}=2q\left(k_{11}\left| \Delta_{1}\right|+k_{12}\left|\Delta_{2}\right|\cos\phi\right), \tag{C11}\] \[\frac{\partial^{2}F}{\partial\left|\Delta_{2}\right|^{2}}=2a_{22}+6b_{22}| \Delta_{2}|^{2}+2b_{12}|\Delta_{1}|^{2}+k_{22}q^{2}+12c_{22}\left|\Delta_{1} \right|\left|\Delta_{2}\right|\cos\phi+2c_{12}|\Delta_{1}|^{2}\cos 2\phi, \tag{C12}\] \[\frac{\partial^{2}F}{\partial\left|\Delta_{2}\right|\partial q}=2q\left(k_{12}\left| \Delta_{1}\right|\cos\phi+k_{22}\left|\Delta_{2}\right|\right), \tag{C13}\] \[\frac{\partial^{2}F}{\partial q^{2}}=k_{11}\left|\Delta_{1}\right|^{2}+2k_{ 12}\left|\Delta_{1}\right|\left|\Delta_{2}\right|\cos\phi+k_{22}|\Delta_{2}|^{2}. \tag{C14}\] As in the case of the BTRS state, we consider the minimal eigenvalue \(l_{min}\) of the Hessian matrix and plot it as a function of \(q\) for the given values \(\Gamma=0.07T_{c0}\) and \(\Gamma=0.09T_{c0}\), corresponding to the \(s_{\pm}\) and \(s_{++}\) pairing symmetries, respectively (Fig. 7). ### The multiple-\(q\) state A more complicated form of the Hessian matrix arises for the multiple-\(q\) state, because the GL free energy Eq. (25) is considered as a function of the four variables \(|\Delta_{1}|\), \(|\Delta_{2}|\), \(q_{1}\) and \(q_{2}\). 
The calculation of the second derivatives yields \[H_{|\Delta_{1}||\Delta_{2}|q_{1}q_{2}}=\left(\begin{array}{cccc}\frac{\partial^{2}F}{\partial|\Delta_{1}|^{2}}&\frac{\partial^{2}F}{\partial|\Delta_{1}|\partial|\Delta_{2}|}&\frac{\partial^{2}F}{\partial|\Delta_{1}|\partial q_{1}}&\frac{\partial^{2}F}{\partial|\Delta_{1}|\partial q_{2}}\\ \frac{\partial^{2}F}{\partial|\Delta_{2}|\partial|\Delta_{1}|}&\frac{\partial^{2}F}{\partial|\Delta_{2}|^{2}}&\frac{\partial^{2}F}{\partial|\Delta_{2}|\partial q_{1}}&\frac{\partial^{2}F}{\partial|\Delta_{2}|\partial q_{2}}\\ \frac{\partial^{2}F}{\partial q_{1}\partial|\Delta_{1}|}&\frac{\partial^{2}F}{\partial q_{1}\partial|\Delta_{2}|}&\frac{\partial^{2}F}{\partial q_{1}^{2}}&\frac{\partial^{2}F}{\partial q_{1}\partial q_{2}}\\ \frac{\partial^{2}F}{\partial q_{2}\partial|\Delta_{1}|}&\frac{\partial^{2}F}{\partial q_{2}\partial|\Delta_{2}|}&\frac{\partial^{2}F}{\partial q_{2}\partial q_{1}}&\frac{\partial^{2}F}{\partial q_{2}^{2}}\end{array}\right), \tag{C15}\] where \[\frac{1}{L}\frac{\partial^{2}F}{\partial|\Delta_{1}|^{2}}=2a_{11}+6b_{11}|\Delta_{1}|^{2}+2b_{12}|\Delta_{2}|^{2}+k_{11}\hbar^{2}q_{1}^{2}+12c_{11}\left|\Delta_{1}\right|\left|\Delta_{2}\right|\frac{\sin\left(\left(q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L}+c_{12}|\Delta_{2}|^{2}\frac{\sin\left(2\left(q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L}, \tag{C16}\] \[\begin{split}\frac{1}{L}\frac{\partial^{2}F}{\partial\left|\Delta_{1}\right|\partial\left|\Delta_{2}\right|}&=4b_{12}\left|\Delta_{1}\right|\left|\Delta_{2}\right|+k_{12}\hbar^{2}q_{1}q_{2}\frac{\sin\left(\left(q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L}\\ &+2\left(a_{12}+3c_{11}|\Delta_{1}|^{2}+3c_{22}|\Delta_{2}|^{2}\right)\frac{\sin\left(\left(q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L}+2c_{12}\left|\Delta_{1}\right|\left|\Delta_{2}\right|\frac{\sin\left(2\left(q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L},\end{split} \tag{C17}\] \[\begin{split}\frac{1}{L}\frac{\partial^{2}F}{\partial|\Delta_{1}|\partial q_{1}}&=2k_{11}\hbar^{2}\left|\Delta_{1}\right|q_{1}+\frac{k_{12}\hbar^{2}\left|\Delta_{2}\right|q_{2}\sin\left(\left(q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L}+\frac{2c_{12}\left|\Delta_{1}\right|\left|\Delta_{2}\right|^{2}}{q_{1}-q_{2}}\left(\cos\left(2\left(q_{1}-q_{2}\right)L\right)-\frac{\sin\left(2\left(q_{1}-q_{2}\right)L\right)}{2\left(q_{1}-q_{2}\right)L}\right)\\ &+\frac{k_{12}\hbar^{2}\left|\Delta_{2}\right|q_{1}q_{2}+2\left(a_{12}\left|\Delta_{2}\right|+3c_{11}\left|\Delta_{1}\right|^{2}\left|\Delta_{2}\right|+c_{22}\left|\Delta_{2}\right|^{3}\right)}{q_{1}-q_{2}}\left(\cos\left(\left(q_{1}-q_{2}\right)L\right)-\frac{\sin\left(\left(q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L}\right),\end{split} \tag{C18}\] \[\begin{split}\frac{1}{L}\frac{\partial^{2}F}{\partial|\Delta_{1}|\partial q_{2}}&=\frac{k_{12}\hbar^{2}\left|\Delta_{2}\right|q_{1}\sin\left(\left(q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L}-\frac{2c_{12}\left|\Delta_{1}\right|\left|\Delta_{2}\right|^{2}}{q_{1}-q_{2}}\left(\cos\left(2\left(q_{1}-q_{2}\right)L\right)-\frac{\sin\left(2\left(q_{1}-q_{2}\right)L\right)}{2\left(q_{1}-q_{2}\right)L}\right)\\ &-\frac{k_{12}\hbar^{2}\left|\Delta_{2}\right|q_{1}q_{2}+2\left(a_{12}\left|\Delta_{2}\right|+3c_{11}\left|\Delta_{1}\right|^{2}\left|\Delta_{2}\right|+c_{22}\left|\Delta_{2}\right|^{3}\right)}{q_{1}-q_{2}}\left(\cos\left(\left(q_{1}-q_{2}\right)L\right)-\frac{\sin\left(\left(q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L}\right),\end{split} \tag{C19}\] \[\begin{split}\frac{1}{L}\frac{\partial^{2}F}{\partial|\Delta_{2}|\partial q_{1}}&=\frac{k_{12}\hbar^{2}\left|\Delta_{1}\right|q_{2}\sin\left(\left(q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L}+\frac{2c_{12}\left|\Delta_{1}\right|^{2}\left|\Delta_{2}\right|}{q_{1}-q_{2}}\left(\cos\left(2\left(q_{1}-q_{2}\right)L\right)-\frac{\sin\left(2\left(q_{1}-q_{2}\right)L\right)}{2\left(q_{1}-q_{2}\right)L}\right)\\ &+\frac{k_{12}\hbar^{2}\left|\Delta_{1}\right|q_{1}q_{2}+2\left(a_{12}\left|\Delta_{1}\right|+3c_{22}\left|\Delta_{1}\right|\left|\Delta_{2}\right|^{2}+c_{11}\left|\Delta_{1}\right|^{3}\right)}{q_{1}-q_{2}}\left(\cos\left(\left(q_{1}-q_{2}\right)L\right)-\frac{\sin\left(\left(q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L}\right),\end{split} \tag{C20}\] \[\begin{split}\frac{1}{L}\frac{\partial^{2}F}{\partial|\Delta_{2}|\partial q_{2}}&=2k_{22}\hbar^{2}\left|\Delta_{2}\right|q_{2}+\frac{k_{12}\hbar^{2}\left|\Delta_{1}\right|q_{1}\sin\left(\left(q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L}-\frac{2c_{12}\left|\Delta_{1}\right|^{2}\left|\Delta_{2}\right|}{q_{1}-q_{2}}\left(\cos\left(2\left(q_{1}-q_{2}\right)L\right)-\frac{\sin\left(2\left(q_{1}-q_{2}\right)L\right)}{2\left(q_{1}-q_{2}\right)L}\right)\\ &-\frac{k_{12}\hbar^{2}\left|\Delta_{1}\right|q_{1}q_{2}+2\left(a_{12}\left|\Delta_{1}\right|+3c_{22}\left|\Delta_{1}\right|\left|\Delta_{2}\right|^{2}+c_{11}\left|\Delta_{1}\right|^{3}\right)}{q_{1}-q_{2}}\left(\cos\left(\left(q_{1}-q_{2}\right)L\right)-\frac{\sin\left(\left(q_{1}-q_{2}\right)L\right)}{\left(q_{1}-q_{2}\right)L}\right).\end{split} \tag{C21}\] 
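In practice, the stability classification of this appendix does not require transcribing the full set of second derivatives: a finite-difference Hessian of the free energy evaluated at a stationary point yields the minimal eigenvalue \(l_{min}\) directly. Below is a minimal Python sketch for the homogeneous state of Eq. (16) at fixed \(\phi=\pi\); the coefficients are placeholders, and the same approach extends to the four-variable Hessian of the multiple-\(q\) state.

```python
import numpy as np
from scipy.optimize import fsolve

hbar = 1.0
# Placeholder GL coefficients (illustrative only).
a11, a22, a12 = -1.0, -0.95, 0.05
b11, b22, b12 = 1.0, 1.1, 0.10
c11, c22, c12 = 0.02, 0.02, 0.01
k11, k22, k12 = 1.0, 0.9, 0.05
cp, c2p = -1.0, 1.0                 # phi = pi (s_+- branch): cos(phi), cos(2 phi)

def F(x):
    """Homogeneous F/L of Eq. (16) as a function of (|D1|, |D2|, q)."""
    d1, d2, q = x
    f0 = a11*d1**2 + a22*d2**2 + 0.5*b11*d1**4 + 0.5*b22*d2**4 + b12*d1**2*d2**2
    return (f0 + (0.5*k11*d1**2 + 0.5*k22*d2**2 + k12*d1*d2*cp)*hbar**2*q**2
            + 2*(a12*d1*d2 + c11*d1**3*d2 + c22*d1*d2**3)*cp
            + c12*d1**2*d2**2*c2p)

def hessian(f, x, h=1e-5):
    """Central-difference Hessian of a scalar function of n variables."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x.copy(); xpp[i] += h; xpp[j] += h
            xpm = x.copy(); xpm[i] += h; xpm[j] -= h
            xmp = x.copy(); xmp[i] -= h; xmp[j] += h
            xmm = x.copy(); xmm[i] -= h; xmm[j] -= h
            H[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

def grad_amp(d, q, h=1e-6):
    """Finite-difference gradient of F in the two amplitude directions."""
    d1, d2 = d
    g1 = (F([d1 + h, d2, q]) - F([d1 - h, d2, q])) / (2 * h)
    g2 = (F([d1, d2 + h, q]) - F([d1, d2 - h, q])) / (2 * h)
    return [g1, g2]

q = 0.2
d1, d2 = fsolve(grad_amp, x0=[1.0, 1.0], args=(q,))  # stationary amplitudes at this q
lmin = np.linalg.eigvalsh(hessian(F, [d1, d2, q])).min()
print("stable" if lmin > 0 else "saddle/unstable", f"(l_min = {lmin:.4g})")
```

Sweeping \(q\) (or \(q_{1}\), \(q_{2}\)) and recording \(\mathrm{sgn}(l_{min})\) reproduces the solid/dotted partition of the curves in Figs. 3 and 4.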
2309.06164
Identifying multiwavelength counterparts to astrophysical neutrino events
High-energy neutrinos originating in astrophysical sources should be accompanied by gamma-rays at production. Depending on the properties of the emission environment and the distance of the source to the Earth, these gamma-rays may be observed directly, or through the detection of lower energy photons that result from interactions with the intervening radiation fields. In this work, we present an automated tool that aims at using data from the Fermi-Large Area Telescope to identify multiwavelength counterparts to astrophysical neutrino events. The main goal of this tool is to enable prompt follow-up observations with ground-based and space-based observatories in order to help pinpoint the neutrino source.
Atreya Acharyya, Marcos Santander
2023-09-12T12:19:23Z
http://arxiv.org/abs/2309.06164v1
# Identifying multiwavelength counterparts to astrophysical neutrino events ###### Abstract: High-energy neutrinos originating in astrophysical sources should be accompanied by gamma-rays at production. Depending on the properties of the emission environment and the distance of the source to the Earth, these gamma-rays may be observed directly, or through the detection of lower energy photons that result from interactions with the intervening radiation fields. In this work, we present an automated tool that aims at using data from the _Fermi_-Large Area Telescope to identify multiwavelength counterparts to astrophysical neutrino events. The main goal of this tool is to enable prompt follow-up observations with ground-based and space-based observatories in order to help pinpoint the neutrino source. ## 1 Introduction The extragalactic gamma-ray sky is dominated by blazars [1], a subclass of radio-loud active galactic nuclei (AGN) powered by a central supermassive black hole (SMBH), with relativistic jets pointed close to our line-of-sight. The spectral energy distribution (SED) of a typical blazar comprises two distinct peaks. While the first peak, occurring in the radio to the X-ray regime, has been attributed to synchrotron emission from electrons and positrons within the jet, the physical mechanisms responsible for the second peak, produced in the X-ray to gamma-ray regime, are still a matter of debate. A key instrument for investigating this question is the _Fermi_-Large Area Telescope (LAT, [2]), a pair conversion telescope capable of detecting gamma-ray photons in the energy range from 20 MeV to above 500 GeV. Primarily operating in survey mode, it scans the entire sky every three hours. Leptonic models [3, 4] attribute the high-energy peak of the SED to the inverse Compton (IC) scattering of electrons off a source of seed photons, either the same photons emitted through synchrotron emission (synchrotron self-Compton [SSC] model) or photon populations external to the jet (external inverse Compton [EIC] model). On the other hand, lepto-hadronic models [5, 6] suggest that the second peak may be a result of either proton synchrotron emission or the decay of high-energy mesons produced in cosmic-ray interactions. A hadronic component of the gamma-ray emission [7] would potentially make AGN prime candidate sources of astrophysical neutrinos [8, 9]. Since 2016, IceCube has been broadcasting automatic real-time alerts for potential astrophysical neutrino candidate events in order to allow for prompt follow-up observations with ground-based and space-based observatories. The alerts are issued by the Astrophysical Multimessenger Observatory Network (AMON1, [10]) and circulated using the Gamma-ray Coordinates Network (GCN)2, an open-source platform created by NASA to receive and transmit alerts about astronomical transient phenomena. One of the main goals of these alerts is to allow for follow-up observations with other observatories, in the hope of observing multimessenger events. Footnote 1: [https://www.amon.psu.edu](https://www.amon.psu.edu) (accessed on 06/06/2023) Footnote 2: [https://gcn.gsfc.nasa.gov](https://gcn.gsfc.nasa.gov) (accessed on 06/06/2023) In this work, we present an automated tool that uses _Fermi_-LAT data to identify multiwavelength counterparts to astrophysical neutrino events and to enable prompt follow-up observations with ground-based and space-based observatories in order to help pinpoint the neutrino source. 
More specifically, we want to know what is in the region of interest (RoI) around the neutrino alert, whether the RoI is observable from a ground-based observatory, and whether anything interesting, for example a gamma-ray flare, is occurring at that particular location. In Section 2, we introduce the main components of the analysis and processing pipeline. In Section 3, we consider a typical example of a neutrino alert, IC230506A-gold-rev1, and discuss the primary outputs from the analysis tool. We summarise our conclusions and discuss some plans for future work in Section 4. ## 2 Methodology The automated pipeline, illustrated in Fig. 1, listens for AMON_ICECUBE_GOLD and AMON_ICECUBE_BRONZE event alerts coming in through the GCN. These are individual single-neutrino events in the IceCube detector with a neutrino energy in the sub-PeV to 1 PeV energy regime and are shown in Fig. 2 for the time interval between June 19, 2019 and June 6, 2023. More specifically, the analysis pipeline extracts information from the GCN alert related to the type of event (GOLD or BRONZE), the sky coordinates of the alert along with the associated confidence interval, the date and time of the alert in UTC, and also the corresponding _Run_ID_ and _Event_ID_. The main components of the tool include performing an automatic _Fermi_-LAT analysis of the neutrino RoI, calculating the visibilities for common follow-up instruments and collecting multiwavelength archival data for known sources in the RoI. The entire process is then repeated for each subsequent revision of a particular alert. The tool then runs an automatic analysis of the _Fermi_-LAT photons detected from within the RoI of the neutrino alert over a 30 day interval prior to each individual event. Each analysis uses the _Fermi_ Science Tools version \(11-05-03\)3, _FERMIPY_ version 1.0.1 4[11] in conjunction with the _PASS_ 8 instrument response functions [12]. The contributions from the isotropic and Galactic diffuse backgrounds are modeled using the most recent templates for isotropic and Galactic diffuse emission, iso_P8R3_SOURCE_V2_v1.txt and gll_iem_v07.fits respectively. Sources in the 4FGL-DR3 catalog [13] within a radius of 20\({}^{\circ}\) from the best-fit location of each alert are included in the model with their spectral parameters fixed to their catalog values. The _gtfindsrc_ routine is also applied to search for any additional point sources not accounted for in the model. Any source found to have a test statistic (TS, [14]) \(\geq\) 9 (roughly corresponding to a significance of \(\sim\) 3\(\sigma\)) is permanently added to the model at the position of its highest TS value. Figure 1: A flowchart of the automated pipeline illustrating how each IceCube GCN alert is processed and analyzed before the results are mirrored on to the web server ([https://multimessenger.ua.edu/fermi/](https://multimessenger.ua.edu/fermi/), accessed on 06/06/2023). Moreover, the normalization factors of both the isotropic and Galactic diffuse emission templates are left free to vary, along with the spectral normalization of all modeled sources within the RoI. Finally, the spectral shape parameters of all modeled sources within 3\({}^{\circ}\) of the alert are left free to vary while those of the remaining sources are fixed to the values reported in the 4FGL-DR3 catalog. 
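A condensed sketch of this ROI setup with _FERMIPY_ is given below. The configuration keys follow fermipy's documented YAML schema, but the file names, time cuts and catalog tag are illustrative stand-ins for the pipeline's actual, alert-dependent inputs, and `find_sources` is used here only as a fermipy-level analogue of the _gtfindsrc_ step described above.

```python
# sketch_roi.py -- illustrative only; file names and cuts are placeholders.
from fermipy.gtanalysis import GTAnalysis

config = {
    'data':      {'evfile': 'ft1_30day.lst', 'scfile': 'ft2.fits'},
    'binning':   {'roiwidth': 10.0, 'binsz': 0.1, 'binsperdec': 2},
    'selection': {'emin': 100, 'emax': 300000,
                  'ra': 50.19, 'dec': 21.06,           # IC230506A best-fit position
                  'zmax': 90, 'evclass': 128, 'evtype': 3,
                  'tmin': 705500000, 'tmax': 708100000},  # ~30 d pre-alert (placeholder MET)
    'gtlike':    {'edisp': True, 'irfs': 'P8R3_SOURCE_V2'},
    'model':     {'src_roiwidth': 20.0,
                  'galdiff': 'gll_iem_v07.fits',
                  'isodiff': 'iso_P8R3_SOURCE_V2_v1.txt',
                  'catalogs': ['4FGL-DR3']},  # catalog tag as in recent fermipy versions
}

gta = GTAnalysis(config, logging={'verbosity': 2})
gta.setup()

# Free the diffuse normalizations, all ROI normalizations, and the shape
# parameters of sources within 3 deg of the alert, as described in the text.
gta.free_source('galdiff')
gta.free_source('isodiff')
gta.free_sources(pars='norm')
gta.free_sources(distance=3.0, pars='shape')

gta.fit()                                    # binned likelihood fit
# Fermipy-level stand-in for the gtfindsrc step: look for additional
# point sources above TS = 9 (sqrt(TS) = 3).
gta.find_sources(sqrt_ts_threshold=3.0, min_separation=0.5)
gta.write_roi('ic230506a_fit')
```

In the actual pipeline this configuration is generated programmatically from the parsed GCN alert, so the ROI center, time window and output names change with every alert and revision.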
A binned likelihood analysis is then performed in order to obtain the spectral parameters best describing the model during the period of observation, using a spatial binning of 0.1\({}^{\circ}\) pixel\({}^{-1}\) and two energy bins per decade. The results, including HTML, FITS files, and images of plots, are mirrored onto the web server ([https://multimessenger.ua.edu/fermi/](https://multimessenger.ua.edu/fermi/), accessed on 06/06/2023).

Figure 2: A skymap, in celestial coordinates, showing AMON_ICECUBE_GOLD events (in yellow) and AMON_ICECUBE_BRONZE events (in red) between June 19, 2019 and June 6, 2023, from the web server ([https://multimessenger.ua.edu/fermi/](https://multimessenger.ua.edu/fermi/), accessed on 06/06/2023).

## 3 Discussion

In this Section, we consider a typical example of a neutrino alert, IC230506A-gold-rev1, received on 2023-05-06 at 15:53:45 UTC. This was a revised AMON_ICECUBE_GOLD type alert, with _Run_ID_ = 137910, _Event_ID_ = 29871391, and the associated sky coordinates are RA = 50.19\({}^{\circ}\), Dec = 21.06\({}^{\circ}\), with a corresponding error radius of 3.12\({}^{\circ}\). The plots produced by the automated tool are shown in Fig. 3 and Fig. 4.

Footnote 5: [https://gcn.gsfc.nasa.gov/notices_amon_g_b/137910_29871391.amon](https://gcn.gsfc.nasa.gov/notices_amon_g_b/137910_29871391.amon) (accessed on 06/06/2023)

The output results include a skymap of the neutrino alert, also containing all sources in the confidence region from the 12-year _Fermi_-LAT catalog (4FGL-DR3, [13]), as well as the third _Fermi_-LAT catalog of sources significantly detected in the 10 GeV - 2 TeV energy range (3FHL, [15]). These sources are also listed separately in a Table containing their 4FGL and 3FHL names, sky coordinates, distance from the neutrino alert and links to the source entry in _Simbad_ [16], as well as multiwavelength archival data repositories including the _Fermi_-LAT Light Curve Repository [17] and the _Fermi_ All-sky Variability Analysis (FAVA, [18]) Light Curve Repository. Also shown in Fig. 3 is the set of diagnostic plots produced in this _Fermi_-LAT data analysis. These are obtained for each individual alert and subsequent revision and include a skymap showing the modelled distribution of the gamma-ray photons obtained after removing all sources found to have a TS < 9 followed by applying the _gtfindsrc_ routine over the RoI, as well as a skymap of the residual significance and the excess photon counts obtained in the analysis over the RoI centred on the location of the alert. Fig. 4 shows more results obtained from the _Fermi_-LAT analysis, including both known 4FGL sources having TS \(\geq\) 9, as well as new sources obtained from the _gtfindsrc_ routine. This includes the TS obtained for each source, the measured flux values in the energy range 100 MeV - 300 GeV along with the corresponding 4FGL flux values for known sources. There is also a link to corresponding plots, such as the one shown in Fig. 4, of the _Fermi_-LAT spectrum for each source, with the 4FGL-DR3 best-fit spectrum [13] also shown for comparison for known sources. Furthermore, we also obtain a skymap and lightcurve of the GeV photons, emitted from within the RoI and having a \(\geq\) 99 % probability of originating from each individual source, over the one month observation period investigated. Moreover, the tool also produces an interactive skymap containing markers for nearby sources in three catalogs, namely the 4FGL, 3FHL, and 2WHSP [19].
Other catalogs, for example the 2RXS [20] and BZCat [21] plus a dynamic HiPS search in _Simbad_, can also be enabled.

Figure 3: **Left: Top panel:** A skymap of the neutrino alert, also containing all sources in the confidence region from the 4FGL-DR3 and 3FHL catalogs. **Bottom panel:** These sources, listed separately, with links to the source entry in _Simbad_, as well as multiwavelength archival data repositories. **Right:** The diagnostic plots produced in this _Fermi_-LAT data analysis. These include a skymap of the residual significance and the excess photon counts obtained in the analysis over the RoI centred on the location of the alert. The colour scales correspond to the excess significance of each pixel in the RoI in Gaussian \(\sigma\) and the number of excess photons at each pixel in the RoI respectively.

An additional _Simbad_ search can be made to show a selection of objects detected in radio, X-rays, gamma-rays and galaxies near the best fit position of the alert. The background sky can also be changed to optical (Mellinger survey, [22]) using the radio buttons below the image. Finally, we also produce visibility plots of the RoI for the four ground-based instruments, namely MAGIC, H.E.S.S., VERITAS and HAWC, over the time interval just after the alert, in order to help enable prompt follow-up observations.

## 4 Conclusions and Future Work

In this work, we introduce an automated tool that aims at using _Fermi_-LAT data to identify multiwavelength counterparts to astrophysical neutrino events and enable prompt follow-up observations with ground-based and space-based observatories in order to help pinpoint the neutrino source. After discussing the main components of the analysis and processing pipeline, we walk through the primary outputs from the analysis tool for a typical example of a neutrino alert, IC230506A-gold-rev1.

Figure 4: **Left: Top panel:** The results obtained from the _Fermi_-LAT analysis, including the TS, flux and spectrum for both known 4FGL sources, as well as potential new sources. **Middle panel:** A skymap and lightcurve of the GeV photons emitted from within the RoI. **Bottom panel:** An example of a _Fermi_-LAT spectrum for the source 4FGL J0325.7+2225 in red. The data are binned into two energy bins per decade, with individual bins having a TS \(<4\) (roughly corresponding to a significance of \(\sim 2\sigma\)) considered as upper limits. Also shown for comparison is the 4FGL-DR3 best-fit spectrum. **Right: Top panel:** An interactive skymap containing markers for nearby sources in the 4FGL, 3FHL, and 2WHSP catalogs. **Bottom panel:** A visibility plot of the RoI with VERITAS.

It should be noted that this is an early version of the automated tool with plenty of scope for even further improvement. This includes, for example, adding support for 1-year and full-mission _Fermi_-LAT analysis alongside the month long time period currently analyzed, gathering more multiwavelength data and resources to enable SED construction, and finally improving compatibility with the GCN Kafka Client setup.

Footnote 6: [https://gcn.nasa.gov/docs/client](https://gcn.nasa.gov/docs/client) (accessed on 06/06/2023)

## Acknowledgments

A.A. and M.S. acknowledge support through NASA grants 80NSSC22K1515, 80NSSC22K0950, 80NSSC20K1587, 80NSSC20K1494 and NSF grant PHY-1914579.
2309.11418
Communities detection in complex network and multilayer network systems: A flow approach
A flow approach to community detection in complex network and multilayer network systems is proposed. Two methods have been developed to search for communities in a network system (NS). The first of them is based on the calculation of flow influence parameters of the NS's subsystems, selected according to the principle of nesting hierarchy. The second method uses the concept of the flow core of a network system. Two methods are also proposed for community detection in a multilayer network system (MLNS). The first of them is based on the concept of the MLNS aggregate-network and the subsequent allocation of its flow core. The second method uses the concept of the flow core of the process of intersystem interactions in general. All developed methods are based on the use of a flow criterion confirming that the selected group of nodes really forms a community. The results of application of the developed approaches are illustrated by examples for which known methods are ineffective.
Olexandr Polishchuk
2023-09-20T15:50:04Z
http://arxiv.org/abs/2309.11418v1
## Communities detection in complex network and multilayer network systems: A flow approach ###### Abstract A flow approach to community detection in complex network and multilayer network systems is proposed. Two methods have been developed to search for communities in a network system (NS). The first of them is based on the calculation of flow influence parameters of the NS's subsystems, selected according to the principle of nesting hierarchy. The second method uses the concept of the flow core of a network system. Two methods are also proposed for community detection in a multilayer network system (MLNS). The first of them is based on the concept of the MLNS aggregate-network and the subsequent allocation of its flow core. The second method uses the concept of the flow core of the process of intersystem interactions in general. All developed methods are based on the use of a flow criterion confirming that the selected group of nodes really forms a community. The results of application of the developed approaches are illustrated by examples for which known methods are ineffective.

Keywords: Complex network, Network system, Intersystem interactions, Flow model, Hierarchy, Flow core, Aggregate-network, Influence, Community

## 1 Introduction

One of the important problems studied in the theory of complex networks (CN) is the search for groups of closely interconnected nodes, whose identification substantially clarifies the principles of structural organization and the processes of functioning of complex network systems. In CN theory, the most prominent examples of such groups are the so-called communities - subnetworks whose nodes are connected with each other more strongly than with the other nodes of the network [1, 2]. Communities exist in living nature, in the economy, in transport, in urban infrastructure, and so on [3, 4]. In human society, communities include circles of friends, political parties, religious confessions, national groups, groups in social networks [5, 6], etc. At present, the main attention is devoted to the development of community detection methods that are based on the structural characteristics of networks of network systems - minimum cuts, hierarchical clustering, estimates of modularity or entropy, spectral properties of the network, and random walks [7, 8]. No less important and difficult is the problem of community detection in multilayer networks, which describe the processes of intersystem interactions in large conglomerates of interacting systems [9, 10]; in this case, the methods listed above are usually used as well [11]. The main drawback of the known community detection methods, apart from their computational complexity, is the absence of a theoretically substantiated criterion that a group of nodes selected by any of these methods really forms a community: while the notion of the density of a network is sufficiently clear and is easily computed by known formulas, the notion of a strong or weak connection is, from the structural point of view, not sufficiently precise and unambiguous [8]. It is this circumstance that makes it necessary to use flow models of network systems in such studies [12]. An additional shortcoming of the existing methods is that they are usually aimed at finding already formed, sufficiently stable and comparatively large communities, but do not track the emergence of such communities in the network.
2309.04834
Particle-in-cell simulations of pulsar magnetospheres: transition between electrosphere and force-free regimes
Global particle-in-cell (PIC) simulations of pulsar magnetospheres are performed with volume, surface and pair-production-based plasma injection schemes to systematically investigate the transition between electrosphere and force-free pulsar magnetospheric regimes. A new extension of the PIC code OSIRIS to model pulsar magnetospheres using a two-dimensional axisymmetric spherical grid is presented. The sub-algorithms of the code and thorough benchmarks are presented in detail, including a new first-order current deposition scheme that conserves charge to machine precision. It is shown that all plasma injection schemes produce a range of magnetospheric regimes. Active solutions can be obtained with surface and volume injection schemes when using artificially large plasma injection rates, and with pair-production-based plasma injection for sufficiently large separation between kinematic and pair production energy scales.
Fábio Cruz, Thomas Grismayer, Alexander Y. Chen, Anatoly Spitkovsky, Ricardo A. Fonseca, Luis O. Silva
2023-09-09T16:16:48Z
http://arxiv.org/abs/2309.04834v1
Particle-in-cell simulations of pulsar magnetospheres: transition between electrosphere and force-free regimes ###### Abstract Aims: Global particle-in-cell (PIC) simulations of pulsar magnetospheres are performed with volume, surface and pair-production-based plasma injection schemes to systematically investigate the transition between electrosphere and force-free pulsar magnetospheric regimes. Methods: A new extension of the PIC code OSIRIS to model pulsar magnetospheres using a two-dimensional axisymmetric spherical grid is presented. The sub-algorithms of the code and thorough benchmarks are presented in detail, including a new first-order current deposition scheme that conserves charge to machine precision. Results: It is shown that all plasma injection schemes produce a range of magnetospheric regimes. Active solutions can be obtained with surface and volume injection schemes when using artificially large plasma injection rates, and with pair-production-based plasma injection for sufficiently large separation between kinematic and pair production energy scales.

## 1 Introduction

Over the last decade, global kinetic simulations have been essential tools to understand the electrodynamics of pulsar magnetospheres. They have been used to study the organization of plasma currents in the vicinity of the neutron star (Philippov et al., 2015; Chen, 2017; Kalapotharakos et al., 2018) and the acceleration of leptons (Chen & Beloborodov, 2014; Belyaev, 2015; Cerutti et al., 2015; Philippov & Spitkovsky, 2014; Philippov et al., 2015; Brambilla et al., 2018) and ions (Guepin et al., 2020) in the current sheets that develop beyond the light cylinder, leading to gamma-ray emission consistent with observations. Particle-in-cell (PIC) (Dawson, 1962, 1983; Hockney & Eastwood, 1988; Birdsall & Langdon, 1991) has been the main methodology used in global kinetic simulations of pulsar magnetospheres. PIC simulations reproduce with high fidelity the kinetic plasma phenomena relevant in pulsars, such as the evolution of highly non-thermal particle distributions or kinetic-scale fluctuations (Touati et al., 2022). Recent extensions of the PIC method have also allowed the inclusion of Quantum Electrodynamics effects such as pair production (Grismayer et al., 2016, 2017) or general relativity corrections (Philippov et al., 2015) relevant in pulsars. Due to the large disparity between kinetic and system scales in pulsars, PIC simulations typically employ a phenomenological description of the pair production processes responsible for filling the pulsar magnetosphere. Such a description can be as simple as injecting plasma in a significant fraction of the simulation domain (Philippov & Spitkovsky, 2014; Belyaev, 2015; Kalapotharakos et al., 2018; Brambilla et al., 2018), limiting this injection to occur close to the stellar surface (Cerutti et al., 2015; Hakobyan et al., 2023), or even considering heuristic pair production models (Chen & Beloborodov, 2014; Philippov et al., 2015; Philippov et al., 2015; Chen et al., 2020; Guepin et al., 2020; Bransgrove et al., 2022).
Depending on the details of the injection and/or pair production model, the global asymptotic magnetospheric topology varies quite significantly: in some cases, the system autoregulates to a fully charge-separated configuration (also called electrosphere) that does not produce a Poynting flux, whereas in other cases the magnetosphere converges to a force-free regime (Philippov & Spitkovsky, 2014; Chen & Beloborodov, 2014; Cerutti et al., 2015; Guepin et al., 2020; Hakobyan et al., 2023). While this range of solutions has been identified in several works, a systematic study has not been performed to compare volume, surface and pair production-based injection schemes. In this work, we perform two-dimensional axisymmetric global simulations of pulsar magnetospheres with three different pair injection schemes: over large volumes of the magnetosphere, from the stellar surface only and using a prescription model for pair production. We use these simulations to systematically characterize the obtained magnetospheric solutions as a function of the injection and/or pair production model parameters. We show that all plasma sources produce near force-free solutions in the regime of large plasma supply and inactive electrosphere solutions with small plasma supply. All plasma sources also allow a transitional regime with sub-force-free surface Poynting flux and wide equatorial current sheets. The simulations presented in this work are performed with a recent extension of the PIC code OSIRIS (Fonseca et al., 2002, 2008) developed for magnetospheric models of compact objects, presented also in this work for completeness. This paper is organized as follows. In Sect. 2, we describe the set of numerical techniques used to generalize the PIC method to perform two-dimensional axisymmetric global kinetic simulations of pulsar magnetospheres with OSIRIS: the adopted discretization of the spatial domain is presented in Sect. 2.1 and the numerical schemes used to advance the field and particle equations and the corresponding boundary conditions are detailed in Sects. 2.2 and 2.3. A new charge-conserving current deposition scheme is presented in Sect. 2.4, and the typical scales and normalizations adopted in the code are presented in Sect. 2.5. In Sect. 3, we present simulations with volume (Sect. 3.1), surface (Sect. 3.2) and pair production-based (Sect. 3.3) plasma injection. Our conclusions are presented in Sect. 4. ## 2 Numerical tool ### Discretization and spatial grid The numerical tool presented in this work aims to model the global plasma environment surrounding neutron stars, _i.e._, the spatial volume between the stellar surface and a few light cylinder radii above it. We describe this system in spherical coordinates, with the radial coordinate \(r\) measured from the center of the neutron star and the polar angle \(\theta\) measured from the star's rotation axis \(\mathbf{\Omega}\). We assume that \(\mathbf{\Omega}\) is either parallel or anti-parallel to the star's magnetic axis \(\boldsymbol{\mu}\), such that we can assume axisymmetry about \(\mathbf{\Omega}\), _i.e._, derivatives with respect to the azimuthal angle \(\phi\) can be dropped, \(\partial/\partial\phi=0\). Similarly to Chen & Beloborodov (2014); Cerutti et al. (2015), we discretize the simulation domain \(r\in[r_{\rm min},r_{\rm max}]\), \(\theta\in[0,\pi]\) in a grid with \(N_{r}\times N_{\theta}\) cells. We adopt a regular grid spacing in \(\theta\), \(\Delta\theta=\pi/(N_{\theta}+1)\), and in \(\log r\).
The latter choice allows for a grid spacing that monotonically increases with \(r\). In pulsar magnetosphere simulations, this choice favors the resolution of shorter spatial scales close to the stellar surface, where denser plasmas are expected, and relaxes it far from the neutron star, where it is less needed. The discretization in the radial direction can be formally written as \[\log r_{n}=\log r_{\rm min}+(n-1)\Delta\,\ \ n=1,2,...,N_{r}+1\, \tag{1}\] with \(\Delta\equiv\log(r_{\rm max}/r_{\rm min})/N_{r}\). Equation (1) can be manipulated to write the useful relation \(r_{n}=r_{\rm min}\delta^{n-1}\), where \(\delta\equiv(r_{\rm max}/r_{\rm min})^{1/N_{r}}\) is a parameter that combines all properties of the radial axis. A schematic representation of the grid used to discretize a typical simulation domain is illustrated in Fig. 1a. The edges of grid cells are shown in black lines, and domain boundaries are highlighted in blue and dark red. The lower radial boundary coincides with the stellar surface, \(r_{\rm min}=r_{*}\), whereas the upper radial boundary is at \(r_{\rm max}\sim\) tens of \(r_{*}\), and acts as an open boundary. The \(\theta=0,\pi\) boundaries enforce axisymmetry, effectively serving as reflecting boundaries. More details about these boundaries are provided in Sects. 2.2 and 2.3. In Fig. 1b, we show a schematic representation of a typical grid cell, that we label with indices \((i,j)\) in the radial and polar directions, respectively. Cell boundaries are drawn in solid black lines, and auxiliary lines are drawn in dashed black lines. The positions where the electric and magnetic field components are defined are indicated in dark red and blue. Half integer indices \(i+1/2\) and \(j+1/2\) indicate positions defined as \(r_{i+1/2}\equiv(r_{i}+r_{i+1})/2\) and \(\theta_{j+1/2}\equiv(\theta_{j}+\theta_{j+1})/2\), respectively. The grid illustrated in Fig. 1 presents two key differences with respect to a typical Cartesian grid: a) its cells have curvilinear boundaries and b) their shape and volume change across the grid. These conditions make each step of the PIC method in spherical coordinates more challenging, requiring conversions between coordinate systems in the particle pusher and adjustments in the current deposition scheme to accommodate particle shrinking/expansion in each time step. We explore these challenges and workarounds in Sects. 2.2, 2.3 and 2.4. ### Electromagnetic field solver Electric and magnetic field components are defined in the edges of the staggered grid cells indicated in Fig. 1b. This definition is analogous to that used in traditional Cartesian grids, and allows the use of the Yee algorithm (Yee, 1966) to advance the electric and magnetic field in time via Maxwell's equations, \[\mathbf{B}^{n+1/2}=\mathbf{B}^{n-1/2}-c\Delta t(\nabla\times\mathbf{E})^{n}\, \tag{2}\] \[\mathbf{E}^{n+1}=\mathbf{E}^{n}+c\Delta t(\nabla\times\mathbf{B})^{n+1/2}-4\pi\Delta t\mathbf{j}^{n+1/2}\, \tag{3}\] where quantities with integer/half integer superscripts are defined in integer/half integer times and \(\Delta t\) is the time step. Here we adopt the same methodology as Cerutti et al. (2015); Belyaev (2015) and use an integral form of Maxwell's equations that avoids divergences on the polar boundaries.
This integral form is obtained by using Stokes' theorem to evaluate the curl of electric and magnetic fields in a given cell as \[(\nabla\times\mathbf{E})_{\rm cell}=\left(\oint_{C_{\rm cell}}\mathbf{E}\cdot d\mathcal{C}_{\rm cell}\right)/\mathcal{S}_{\rm cell}\, \tag{4}\] where \(\mathcal{C}_{\rm cell}\) is the contour defining the edge of that cell, \(\mathcal{S}_{\rm cell}\) is the corresponding area, and the closed integral and dot product have the usual definition of Stokes' theorem. The cell label and corresponding integrations in Eq. (4) change according to the field component under consideration. For instance, we can write the radial component of \(\nabla\times\mathbf{E}\) as \[(\nabla\times\mathbf{E})_{r_{(i,j+1/2)}}=\frac{\sin\theta_{j+1}E_{\phi_{(i,j+1)}}-\sin\theta_{j}E_{\phi_{(i,j)}}}{r_{i}(\cos\theta_{j}-\cos\theta_{j+1})}. \tag{5}\] This expression is derived by noting that, according to Eq. (2), \((\nabla\times\mathbf{E})_{r}\) should be defined in the same position as \(\mathbf{B}_{r}\), _i.e._, at cell indices \((i,j+1/2)\). This defines the integration surface relevant to Stokes' theorem as \(r=r_{i}\), \(\theta\in[\theta_{j},\theta_{j+1}]\). The numerator and denominator in Eq. (4) then read respectively \(2\pi(r_{i}\sin\theta_{j+1}E_{\phi_{(i,j+1)}}-r_{i}\sin\theta_{j}E_{\phi_{(i,j)}})\) and \(2\pi r_{i}^{2}(\cos\theta_{j}-\cos\theta_{j+1})\), where the \(2\pi\) factor comes from the integration along \(\phi\). A similar calculation can be performed for all other components (Cerutti et al., 2015). We note that at the simulation boundaries (\(i=\{1,N_{r}+1\}\), \(j=\{1,N_{\theta}+1\}\)), the integration regions are adapted to fit within the domain. For example, the \(\theta\) integration is changed to \(\theta\in[0,\theta_{1+1/2}]\) and \(\theta\in[\theta_{N_{\theta}+1/2},\pi]\) at the \(\theta=0\) and \(\theta=\pi\) boundaries, respectively. We also apply special rules to the field components at the boundaries, e.g. in the polar boundaries we enforce the axisymmetry conditions \(\mathbf{E}_{\phi_{(i,1)}}=\mathbf{E}_{\phi_{(i,N_{\theta}+1)}}=0\) and \(\mathbf{B}_{\theta_{(i+1/2,1)}}=\mathbf{B}_{\theta_{(i+1/2,N_{\theta}+1)}}=0\). The inner radial boundary acts generally as a rotating conductor mimicking the stellar surface, whereas the outer boundary acts as a first-order standard Mur open boundary condition (Mur, 1981), _i.e._, a perfect absorber of perturbations propagating perpendicularly to the boundary. We have also implemented static conductor boundary conditions for both inner and outer radial boundaries, which enforce the tangential (normal) electric (magnetic) field components to be null, _i.e._, \(E_{\phi_{(1,j)}}=E_{\phi_{(N_{r}+1,j)}}=0\), \(E_{\theta_{(1,j+1/2)}}=E_{\theta_{(N_{r}+1,j+1/2)}}=0\) and \(B_{r_{(1,j+1/2)}}=B_{r_{(N_{r}+1,j+1/2)}}=0\). We have benchmarked our field solver implementation by studying stationary electromagnetic TM modes between two spherical static conductors (Jackson 1975). We have verified that the solution obtained numerically is in excellent agreement with the analytical solution of Maxwell's equations for these modes, as well as with the detailed discussion about a similar solver in Belyaev (2015b).
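To make the grid construction and the integral-form update concrete, the following is a minimal NumPy sketch of Eqs. (1) and (5); the grid sizes, time step and uniform polar spacing are illustrative assumptions, and only the single \(E_{\phi}/B_{r}\) pairing needed for Eq. (5) is shown.

```python
import numpy as np

# Illustrative grid parameters (not the resolutions used in the paper's runs)
r_min, r_max = 1.0, 20.0                  # radii in units of r_*
Nr, Ntheta = 128, 128

# Eq. (1): log-spaced radial cell edges, r_n = r_min * delta**(n-1)
delta = (r_max / r_min) ** (1.0 / Nr)
r = r_min * delta ** np.arange(Nr + 1)    # shape (Nr+1,)
theta = np.linspace(0.0, np.pi, Ntheta + 1)  # uniform polar edges (simplified)

def curl_E_r(E_phi):
    """Radial component of curl(E) at (i, j+1/2), following Eq. (5).

    E_phi is sampled at integer nodes (i, j), shape (Nr+1, Ntheta+1).
    The integral (Stokes) form stays finite on the polar axis: sin(theta)
    vanishes in the numerator while the polar-cap area remains finite.
    """
    num = (np.sin(theta[1:]) * E_phi[:, 1:]
           - np.sin(theta[:-1]) * E_phi[:, :-1])
    den = r[:, None] * (np.cos(theta[:-1]) - np.cos(theta[1:]))
    return num / den                      # shape (Nr+1, Ntheta)

# One leapfrog half-step of Eq. (2): B_r^{n+1/2} = B_r^{n-1/2} - c*dt*(curl E)_r
c, dt = 1.0, 1.0e-3                       # normalized units (assumption)
E_phi = np.zeros((Nr + 1, Ntheta + 1))
B_r = np.zeros((Nr + 1, Ntheta))
B_r -= c * dt * curl_E_r(E_phi)
```

At the \(\theta=0,\pi\) boundaries the same expression reduces to the adapted polar-cap integrals described above.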
### Particle pusher Particle position and momentum components are updated in Cartesian coordinates with either the Boris (Boris 1970; Birdsall & Langdon 1991) or Vay (Vay 2008) pushers, although other pushers are also compatible with the remaining modified sub-algorithms of PIC presented in this work. In each time step, a particle push is done as follows: first, the electric and magnetic fields are interpolated from the edges of the corresponding grid cell to the particle position \(\mathbf{x}^{n}_{p}\equiv(r_{p},\theta_{p})\), an operation that we write schematically as \((\mathbf{E}^{n}_{(i,j)},\mathbf{B}^{n}_{(i,j)})\rightarrow(\mathbf{E}^{n}_{p},\mathbf{B}^{n}_{p})\). This interpolation is done using an area/volume weighting scheme. For example, the toroidal component of the electric field can be written as \[E_{\phi_{p}}=\sum_{i^{\prime}=i,i+1}\ \sum_{j^{\prime}=j,j+1}f_{r\,i^{\prime}}\,f_{\theta\,j^{\prime}}\,E_{\phi_{(i^{\prime},j^{\prime})}}\;, \tag{6}\] with \[f_{r\,i+1}=1-f_{r\,i}=\frac{r_{p}^{3}-r_{i}^{3}}{r_{i+1}^{3}-r_{i}^{3}}\;,\] \[f_{\theta\,j+1}=1-f_{\theta\,j}=\frac{\cos\theta_{j}-\cos\theta_{p}}{\cos\theta_{j}-\cos\theta_{j+1}}\;.\] After the interpolation, the field components are converted from spherical to Cartesian coordinates, \((\mathbf{E}^{n}_{p},\mathbf{B}^{n}_{p})\rightarrow(\mathbf{E}^{n}_{p,C},\mathbf{B}^{n}_{p,C})\), a calculation that depends on the particle position at time \(t^{n}\), \(\mathbf{x}^{n}\). Finally, the particle momentum and position are updated in time, \(\mathbf{u}^{n-1/2}\equiv\mathbf{p}^{n-1/2}/m_{e}c\rightarrow\mathbf{u}^{n+1/2}\equiv\mathbf{p}^{n+1/2}/m_{e}c\) and \(\mathbf{x}^{n}\rightarrow\mathbf{x}^{n+1}\) respectively. Choosing to advance position and momentum components in Cartesian coordinates guarantees that we are solving the simplest possible equations of motion and also allows for an easy integration with other modules in OSIRIS, such as those accounting for classical radiation reaction losses (Vranic et al. 2016) and QED effects (Grismayer et al. 2016, 2017). We note that advancing the particle position in \((x,y,z)\) does not introduce any asymmetry in the azimuthal direction \(\phi\); in fact, each macro-particle in our simulation represents a charged ring with azimuthal symmetry and \(\phi\) is never used throughout the rest of the numerical scheme. We have tested our implementation of the particle pushers in a large set of background electric and/or magnetic field configurations. In Fig. 2, we show results from a relevant subset of these configurations, namely a particle moving in a) a uniform azimuthal magnetic field, b) crossed constant magnetic and electric fields and c) the time-varying electric and magnetic field components of the TM modes described in the electromagnetic field solver benchmark presented in Sect. 2.2. For all these cases, we show a comparison between the solutions obtained with the Boris pusher and analytical or other numerical solutions. We obtain an excellent agreement between the results of the Boris pusher and the reference analytical/numerical curves. Solutions obtained with the Vay pusher show a similar agreement with the reference curves. In Fig. 2a2, we represent the temporal evolution of the particle energy over \(\sim 1000\) periods, showing that it is conserved to machine precision.
We note that in all these benchmarks, the only electromagnetic fields were those either imposed externally or calculated with the field solver, _i.e._, they do not include the fields self-consistently created due to particle motion via plasma currents.

Figure 1: Schematic representation of spherical PIC grid: a) shows the grid layout and identifies the coordinate system and boundary types, b) shows the grid cell's edges where each field component is defined.

### Current deposition A current deposition algorithm computes the current density \(\mathbf{j}\) on the edges of grid cells as the positions and momenta of particles are updated. A trivial choice is to compute this current as the sum over the macro-particles of the product of their charge density and instantaneous velocity. However, such an algorithm in general does not satisfy the continuity equation (Villasenor & Buneman, 1992; Esirkepov, 2001), \[\frac{\partial\rho}{\partial t}+\nabla\cdot\mathbf{j}=0\;, \tag{7}\] where \(\rho\) is the total plasma density. Solving Eq. (7) ensures also that Gauss' law, written as \[\nabla\cdot\mathbf{E}=4\pi\rho\;, \tag{8}\] is satisfied. Finding a current deposition algorithm that satisfies Eq. (7), and consequently Eq. (8), _i.e._, a charge-conserving current deposition algorithm, is one of the key challenges in PIC codes. For Cartesian grids, there is a well established method for any interpolation order proposed in Esirkepov (2001). However, for non-uniform spherical grids, this challenge is more substantial, as grid cells (and particle shapes, that we shall define below) change across the grid. Other codes adopting such grids (Chen & Beloborodov, 2014; Cerutti et al., 2015; Belyaev, 2015; Chen, 2017) usually do not seem to include charge-conserving current deposition algorithms, and adopt instead numerical schemes to enforce the validity of Eq. (8), e.g. Poisson solvers. Here, we propose a new current deposition scheme that conserves charge to machine precision in the non-uniform grid defined in section 2.1. We start by defining the volume occupied by a macro-particle centered at \((r_{p},\theta_{p})\). The function that defines this volume is usually called the particle shape, \(S(r,\theta,r_{p},\theta_{p})\). Before writing the exact form of \(S\), let us define some of its important properties, that we illustrate schematically in Fig. 3. First, the particle shape should only coincide with the shape of the cell in which its center is located, labeled with indices \((i,j)\), when and only when \((r_{p},\theta_{p})=(r_{i+1/2},\theta_{j+1/2})\). Since the grid spacing in the radial direction is a function of \(r\), the particle width in this direction should also be a function of \(r_{p}\), _i.e._, \(\Delta r\equiv\Delta r(r_{p})\). Furthermore, the charge density associated with each macro-particle should also be a function of \(r_{p}\). More specifically, the charge density should decrease with \(r_{p}\) to compensate the corresponding increase in volume of the macro-particle, such that its total charge remains constant. Defining the number of real particles in a macro-particle as \(N_{p}\), we formally wish to find a waterbag-like particle number density \(n(r)\) such that \[\int_{V_{i}}n(r_{i+1/2})\;\mathrm{d}V_{i}=\int_{V_{i^{\prime}}}n(r_{i^{\prime}+1/2})\;\mathrm{d}V_{i^{\prime}}=N_{p}\;, \tag{9}\] where \(V_{i,i^{\prime}}\) are the volumes of cells with radial labels \(i,i^{\prime}\) (see Fig. 3b).
Figure 2: Particle pusher benchmarks corresponding to particle motions in a1-2) a uniform azimuthal magnetic field, b1-2) crossed constant magnetic and electric fields and c1-2) the time-varying electric and magnetic field components of TM modes.

For simplicity, we assume that the particle density is only a function of \(r\), and generalize it later to include the natural dependence in \(\theta\) as well. Assuming that \(n(r_{i+1/2})\) is constant within cell \(i\), we can solve Eq. (9) to obtain \[n(r_{i+1/2})=\frac{3N_{p}}{4\pi}\frac{1}{r_{i+1}^{3}-r_{i}^{3}}=\frac{3N_{p}}{32\pi}\frac{(\delta+1)^{3}}{\delta^{3}-1}\frac{1}{r_{i+1/2}^{3}}\;, \tag{10}\] where we have used the relation \(r_{i+1/2}=r_{i}(1+\delta)/2=r_{i+1}(1+\delta^{-1})/2\). We note that Eq. (10) defines \(n(r)\) for any \(r_{i+1/2}\), but not for \(r\neq r_{i+1/2}\). We choose to take the continuous limit of \(n(r_{i+1/2})\) for an arbitrary radius, replacing \(r_{i+1/2}\) for an arbitrary \(r_{p}\), _i.e._, \[n(r_{p})=\frac{3N_{p}}{32\pi}\frac{(\delta+1)^{3}}{\delta^{3}-1}\frac{1}{r_{p}^{3}}\;. \tag{11}\] Eq. (11) ensures that \(n(r)\) satisfies exactly Eq. (9) when \(r_{p}=r_{i+1/2}\) and that the particle shape is a smooth function of \(r_{p}\). The particle width \(\Delta r(r_{p})\) is determined in a similar manner; first, we express the grid spacing in terms of \(r_{i+1/2}\), \(\Delta r_{i}=r_{i+1}-r_{i}=2r_{i+1/2}(\delta-1)/(\delta+1)\), and we extend this definition to an arbitrary radius \(r_{p}\), \[\Delta r(r_{p})=2r_{p}\frac{\delta-1}{\delta+1}\;. \tag{12}\] This quantity is represented for a typical grid in Fig. 4a, together with the grid spacing \(\Delta r_{i}\). As expected, both quantities match exactly when \(r=r_{i+1/2}\), and \(\Delta r\) is a smooth function of \(r\). Equations (11) and (12) ensure that the conservation law expressed in Eq. (9) can be extended to any radius, which is shown in Fig. 4b. The general particle shape \(S\) can be inferred from this discussion, and in particular from Eq. (11). It reads \[S(r,\theta,r_{p},\theta_{p})=\frac{3}{16\pi}\frac{(\delta+1)^{3}}{\delta^{3}-1}\frac{1}{r_{p}^{3}}b_{0}\left(\frac{r-r_{p}}{\Delta r(r_{p})}\right)\times\frac{1}{\cos(\theta_{p}-\Delta\theta/2)-\cos(\theta_{p}+\Delta\theta/2)}b_{0}\left(\frac{\theta-\theta_{p}}{\Delta\theta}\right)\;, \tag{13}\] where \(b_{0}(x)\) is the zeroth order b-spline function, defined as \(b_{0}(x)=1\) if \(|x|<0.5\) and \(0\) otherwise. Note that Eq. (13) generalizes the particle shape to a two-dimensional \((r,\theta)\) grid, hence the \(\cos(\theta_{p}\pm\Delta\theta/2)\) terms resulting from the integral in Eq. (9). With the shape function in Eq. (13), we can compute the charge density at any point \((r,\theta)\) due to the presence of a macro-particle with \(N_{p}\) real particles of charge \(q_{p}\) and coordinates \((r_{p},\theta_{p})\) as \(\rho_{p}(r,\theta,r_{p},\theta_{p})=q_{p}N_{p}S(r,\theta,r_{p},\theta_{p})\). The charge density at cell edges is defined resorting to the area/volume weighting technique described in Sect. 2.3, and can be formally derived as
\[\rho_{(i,j)}(r_{p},\theta_{p})=\frac{\int_{V_{i,j}}\rho_{p}(r,\theta,r_{p},\theta_{p})\;\mathrm{d}V_{i,j}}{V_{i,j}}=\frac{q_{p}N_{p}\frac{3}{16\pi}\frac{(\delta+1)^{3}}{\delta^{3}-1}}{(r_{i+1/2}^{3}-r_{i-1/2}^{3})(\cos\theta_{j-1/2}-\cos\theta_{j+1/2})}\times\left[\frac{r_{>}^{3}-r_{<}^{3}}{r_{p}^{3}}\right]\left[\frac{\cos(\theta_{p}-\Delta\theta/2)-\cos\theta_{j+1/2}}{\cos(\theta_{p}-\Delta\theta/2)-\cos(\theta_{p}+\Delta\theta/2)}\right]\;. \tag{14}\] We note that the special integration limits \(r_{>}=\min(r_{p}+\Delta r(r_{p})/2,\,r_{i+1/2})\) and \(r_{<}=\max(r_{p}-\Delta r(r_{p})/2,\,r_{i-1/2})\) result from the subtlety that the particle radial width is a function of the particle radial coordinate, \(r_{p}\). The expressions in square brackets are often referred to as the weighting functions in PIC current deposition algorithms.

Figure 3: Schematic representation of a) the spherical particle shape and b) the variation of its flat-top density value with the radial coordinate. The blue shaded region in a) represents the particle shape and identifies its widths in the radial and polar directions.

Figure 4: Particle shape properties: a) radial width and b) density and real particle number.

The particle shape in Eq. (13) and the deposition rule in Eq. (14) are the key ingredients in our charge-conserving current deposition scheme. This scheme is inspired by the seminal work of Villasenor & Buneman (1992) (hereafter VB), that presented a scheme that preceded the widely used method of Esirkepov (2001) for PIC current deposition in Cartesian grids. The VB method is schematically represented in Fig. 5a. VB proposed that the current density \(\mathbf{j}\) should be computed directly by inverting the continuity equation, thus enforcing by construction that it is satisfied. In practice, when a particle is pushed in time from a position \(\mathbf{x}^{n}\) to a position \(\mathbf{x}^{n+1}\), part of its shape crosses the boundaries over which the current density is defined in the Cartesian PIC grid. These boundaries, and the exact locations where each of the components of \(\mathbf{j}\) are defined are shown in Fig. 5a in green and red lines and arrows, respectively. VB recognized that we can simply compute the different current density components by evaluating the fraction of charge density carried by each macro-particle that crosses the boundaries identified in green and red. For a Cartesian grid, this fraction can be computed geometrically as the ratio between the areas \(A_{\text{green}}\) and \(A_{\text{red}}\) and the total area corresponding to the particle shape, \(A_{\text{total}}\). This calculation is simple in Cartesian grids because the particle shape does not change across the grid, which allows us to label which parts of the colored area at \(x>x_{i+1/2}\) and \(y>y_{i+1/2}\) crossed each of the green or red lines. In a spherical grid, this condition is not met, and the calculation becomes more involved. A schematic representation of the method equivalent to VB in a spherical grid is shown in Fig. 5b, where the same rationale described above is easily applied except for the determination of the area identified with \(A_{\gamma}\). Because the particle expands during its motion from \(\mathbf{x}^{n}\) to \(\mathbf{x}^{n+1}\), it is not trivial to determine which fraction of \(A_{\gamma}\) should be combined with \(A_{\text{green}}\) (\(A_{\text{red}}\)) to compute \(j_{r(i+1/2,j)}\) (\(j_{\theta(i,j+1/2)}\)).
We circumvent this issue by generalizing the geometrical interpretation of \(\nabla\cdot\mathbf{j}\) proposed by VB. They suggested that the total current divergence can be split as \(\nabla\cdot\mathbf{j}=(\nabla\cdot\mathbf{j})_{x}+(\nabla\cdot\mathbf{j})_{y}\) in a Cartesian grid, with \((\nabla\cdot\mathbf{j})_{x}\propto A_{\text{green}}/A_{\text{total}}\) and \((\nabla\cdot\mathbf{j})_{y}\propto A_{\text{red}}/A_{\text{total}}\), and that these terms could be computed directly by evaluating \(-\partial\rho_{(i,j)}/\partial t\) assuming that the particle moves purely along the corresponding direction at an average position along the orthogonal direction. Formally, this is expressed as \[(\nabla\cdot\mathbf{j})_{x(i,j)}=-\left.\frac{\partial\rho_{(i,j)}}{\partial t}\right|_{(x^{n},\bar{y})}^{(x^{n+1},\bar{y})}=-\frac{\rho_{(i,j)}(x^{n+1},\bar{y})-\rho_{(i,j)}(x^{n},\bar{y})}{\Delta t}\, \tag{15}\] \[(\nabla\cdot\mathbf{j})_{y(i,j)}=-\left.\frac{\partial\rho_{(i,j)}}{\partial t}\right|_{(\bar{x},y^{n})}^{(\bar{x},y^{n+1})}=-\frac{\rho_{(i,j)}(\bar{x},y^{n+1})-\rho_{(i,j)}(\bar{x},y^{n})}{\Delta t}\, \tag{16}\] where \(\bar{x}=(x^{n+1}+x^{n})/2\) and \(\bar{y}=(y^{n+1}+y^{n})/2\). From Eqs. (15) and (16), we can express the divergence operators using finite differences and obtain \(j_{x(i+1/2,j)}\) and \(j_{y(i,j+1/2)}\). This approach can be generalized to spherical coordinates, _i.e._, we can write \(\nabla\cdot\mathbf{j}=(\nabla\cdot\mathbf{j})_{r}+(\nabla\cdot\mathbf{j})_{\theta}\). However, because the particle shape changes continuously in the radial direction, \((\nabla\cdot\mathbf{j})_{\theta}\) cannot be computed assuming that the particle moves purely along the polar direction with \(\bar{r}=(r^{n+1}+r^{n})/2\). Instead, we proceed as follows: first, we compute \(\nabla\cdot\mathbf{j}\) and \((\nabla\cdot\mathbf{j})_{r}\) using \[(\nabla\cdot\mathbf{j})_{(i,j)}=-\left.\frac{\partial\rho_{(i,j)}}{\partial t}\right|_{(r^{n},\theta^{n})}^{(r^{n+1},\theta^{n+1})}=-\frac{\rho_{(i,j)}(r^{n+1},\theta^{n+1})-\rho_{(i,j)}(r^{n},\theta^{n})}{\Delta t}\, \tag{17}\] \[(\nabla\cdot\mathbf{j})_{r(i,j)}=-\left.\frac{\partial\rho_{(i,j)}}{\partial t}\right|_{(r^{n},\bar{\theta})}^{(r^{n+1},\bar{\theta})}=-\frac{\rho_{(i,j)}(r^{n+1},\bar{\theta})-\rho_{(i,j)}(r^{n},\bar{\theta})}{\Delta t}\, \tag{18}\] where \(\bar{\theta}=(\theta^{n+1}+\theta^{n})/2\). Then, we compute \((\nabla\cdot\mathbf{j})_{\theta}=\nabla\cdot\mathbf{j}-(\nabla\cdot\mathbf{j})_{r}\). Finally, we invert the nabla operators, \[(\nabla\cdot\mathbf{j})_{r(i,j)}=3\left[\frac{r_{i+1/2}^{2}j_{r(i+1/2,j)}-r_{i-1/2}^{2}j_{r(i-1/2,j)}}{r_{i+1/2}^{3}-r_{i-1/2}^{3}}\right]\, \tag{19}\] \[(\nabla\cdot\mathbf{j})_{\theta(i,j)}=\frac{3}{2}\frac{r_{i+1/2}^{2}-r_{i-1/2}^{2}}{r_{i+1/2}^{3}-r_{i-1/2}^{3}}\left[\frac{\sin\theta_{j+1/2}\,j_{\theta(i,j+1/2)}-\sin\theta_{j-1/2}\,j_{\theta(i,j-1/2)}}{\cos\theta_{j-1/2}-\cos\theta_{j+1/2}}\right]\, \tag{20}\] to find the current components. The inversion of \((\nabla\cdot\mathbf{j})_{\theta(i,j)}\) is simple, because the second term in the square brackets of Eq. (20) is always zero given that the particle motion is restricted to cell \((i,j)\). The same is applicable to the inversion of \((\nabla\cdot\mathbf{j})_{r(i,j)}\) for most particle positions in cell \((i,j)\); however, due to the fact that the particle expands with \(r_{p}\), it can deposit current at the grid position \((i-1/2,j)\) when \(r_{p}\) is close to \(r_{i}\). When this happens, we determine \((\nabla\cdot\mathbf{j})_{r(i-1,j)}\) using Eq. (18),
invert the corresponding operator to obtain \(j_{r(i-1/2,j)}\) and use it to solve for \(j_{r(i+1/2,j)}\) in Eq. (19). When particles cross two cells from \(\mathbf{x}^{n}\) to \(\mathbf{x}^{n+1}\), we split their trajectory such that each split is within a single cell, and apply the method described before to each trajectory split. The same strategy is applied in the algorithms proposed in Villasenor & Buneman (1992) and Esirkepov (2001). This method does not impose any restriction on the azimuthal current component, which we take to be simply \(j_{\phi(i,j)}=\rho_{(i,j)}v_{\phi}\), where \(v_{\phi}\) is the macro-particle velocity in the azimuthal direction. Finally, we note that Eqs. (14) and (19)-(20) can also be derived by applying the algorithms in Villasenor & Buneman (1992) or Esirkepov (2001) (in first-order) in a Cartesian logical space with the spherical coordinates metric. However, the special radial integration rule described in this section to account for particle shrinking/expansion should be included to ensure that those algorithms conserve charge to machine precision. We have benchmarked the current deposition method presented here by initializing particles all over the simulation domain with a random velocity, depositing their current over a time step \(\Delta t\) and evaluating \[\Delta_{\text{Continuity}}=\frac{\Delta t}{\rho_{(i,j)}}\left(\frac{\partial\rho_{(i,j)}}{\partial t}+(\nabla\cdot\mathbf{j})_{(i,j)}\right)\, \tag{21}\] \[\Delta_{\text{Gauss}}=\frac{1}{\rho_{(i,j)}}\left((\nabla\cdot\mathbf{E})_{(i,j)}-4\pi\rho_{(i,j)}\right). \tag{22}\] Both \(\Delta_{\text{Continuity}}\) and \(\Delta_{\text{Gauss}}\) should be zero if the continuity equation and Gauss' law are satisfied. Figure 6 shows that these quantities are both of the order of \(10^{-15}-10^{-11}\), _i.e._, of the order of machine precision. The value of both \(\Delta_{\text{Continuity}}\) and \(\Delta_{\text{Gauss}}\) tends to be larger closer to the star, due to the larger number of operations subject to round-off errors in this region, caused by particles crossing more cell boundaries and depositing their current in more than one cell. We have verified that the accuracy of the method is maintained over multiple time steps by ensuring that the evolution of the grid integrals of \(\Delta_{\rm Continuity}\) and \(\Delta_{\rm Gauss}\) remain at machine precision level. This current deposition method thus accurately conserves charge, avoiding the need for other correcting algorithms. It is also inexpensive, since most factors in Eqs. (17)-(20) can be pre-computed and reused throughout a simulation. ### Typical scales and normalizations In the benchmarks presented above, the normalization units of distances, times, and fields varied according to what best suits the respective tests. However, for pulsar magnetosphere simulations, we adopt a common normalization that we introduce here. We choose to normalize distances to the stellar radius \(r_{*}\) and times to \(r_{*}/c\). Electric and magnetic fields are normalized to \(m_{e}c^{2}/er_{*}\), however we typically represent them in units of \(en_{\rm GJ}r_{*}\), where \(n_{\rm GJ}=\Omega B_{*}/2\pi ec\) is the surface Goldreich-Julian (GJ) (Goldreich & Julian 1969) particle number density. The GJ density also defines a typical frequency \(\omega_{p,\rm GJ}=\sqrt{4\pi e^{2}n_{\rm GJ}/m_{e}}\) and an electron skin depth \(d_{e,\rm GJ}=c/\omega_{p,\rm GJ}\).
The time step and grid spacing are chosen to resolve these temporal and spatial scales, respectively. In pulsar magnetosphere simulations, the main parameter responsible for setting the typical temporal, spatial and energy scales is the normalized value of the surface magnetic field, \(B_{*}(er_{*}/m_{e}c^{2})\). For realistic parameters, \(B_{*}\simeq 10^{12}\) G and \(r_{*}\simeq 10\) km, we have \(B_{*}(er_{*}/m_{e}c^{2})\sim 10^{15}\). Global simulations are not feasible with such values, since they would have to resolve scales of the order of \(\sim\) tens of \(r_{*}\) down to \(d_{e,\rm GJ}\sim 10^{-7}\,r_{*}\). For this reason, we use more modest values of \(B_{*}(er_{*}/m_{e}c^{2})\sim 10^{3}-10^{6}\), such that we respect the ordering in these objects, \(\Omega\ll\omega_{p,\rm GJ}\ll\omega_{c}\), where \(\omega_{c}=eB_{*}/m_{e}c\) is the cyclotron frequency associated with a field magnitude \(B_{*}\).

Figure 5: Schematic representation of the current deposition algorithm in a) Cartesian and b) spherical coordinates (see text for details).

Figure 6: Current deposition benchmarks, showing that both a) the continuity equation and b) Gauss' law are satisfied to machine precision.

## 3 Global simulations of pulsar magnetospheres

In this Section, we present global PIC simulations of pulsar magnetospheres obtained with the OSIRIS framework (Fonseca et al. 2002, 2008). We start by allowing electron-positron pairs to be artificially and abundantly injected in our simulations, and then make increasingly realistic assumptions about the plasma supply processes, in particular regarding the regions of space where pair cascades operate, and the separation between kinetic and system scales. All simulations presented here have a similar initial configuration: the system starts in vacuum and with an initial dipolar magnetic field of polar surface magnitude \(B_{s}\), _i.e._, \(B_{r}(r,\theta)=B_{s}(r_{*}/r)^{3}\cos\theta\) and \(B_{\theta}(r,\theta)=(1/2)B_{s}(r_{*}/r)^{3}\sin\theta\). The inner radial boundary is treated as a rotating conductor of angular velocity \(\mathbf{\Omega}=\Omega\,\hat{\mathbf{z}}\); at the surface of the neutron star, we impose the co-rotation electric field \(\mathbf{E}=-(\mathbf{v}_{\mathrm{rot}}\times\mathbf{B})/c\), with \(\mathbf{v}_{\mathrm{rot}}=\mathbf{\Omega}\times(r_{*}\hat{\mathbf{r}})\). In all simulations, we consider the stellar rotation frequency to be initially zero and increase it linearly over a time \(t_{\mathrm{rise}}c/r_{*}=1.5\) to \(\Omega r_{*}/c=0.125\). For times \(t>t_{\mathrm{rise}}\), the stellar frequency is kept constant. The stellar period is \(T=2\pi/\Omega=50~{}r_{*}/c\) and the light-cylinder radius is \(R_{\mathrm{LC}}/r_{*}=8\). All simulations use also \(r_{\mathrm{min}}/r_{*}=1\) and \(r_{\mathrm{max}}/r_{*}=20\), such that the plasma dynamics can be captured up to \(r/R_{\mathrm{LC}}>2\). The value of \(B_{s}\) is chosen to satisfy the ordering \(\Omega\ll\omega_{p,\mathrm{GJ}}\ll\omega_{c}\) described in Sect. 2.5 while maintaining simulations numerically feasible. This choice and others regarding, e.g., grid resolution vary according to the injection scheme and parameter regime under study, and are detailed alongside the corresponding simulations.
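As a sanity check on these scalings, the following is a small back-of-the-envelope script (Gaussian-cgs units) that evaluates the GJ density, plasma frequency, skin depth and cyclotron frequency for a given surface field. The scaled-down value of \(B_{*}\) used here is an illustrative assumption in the spirit of the reduced \(B_{*}(er_{*}/m_{e}c^{2})\) values quoted above, not a parameter of any of this paper's runs.

```python
import numpy as np

# Gaussian-cgs constants
c = 2.998e10        # speed of light [cm/s]
e = 4.803e-10       # elementary charge [statC]
me = 9.109e-28      # electron mass [g]

# Illustrative pulsar parameters (B_star deliberately scaled down; assumption)
r_star = 1.0e6                    # stellar radius [cm] (10 km)
B_star = 1.0e8                    # surface field [G], instead of ~1e12 G
Omega = 0.125 * c / r_star        # rotation rate matching Omega r_*/c = 0.125

n_GJ = Omega * B_star / (2.0 * np.pi * c * e)     # surface GJ number density
w_pGJ = np.sqrt(4.0 * np.pi * e**2 * n_GJ / me)   # GJ plasma frequency
d_eGJ = c / w_pGJ                                 # GJ electron skin depth
w_c = e * B_star / (me * c)                       # cyclotron frequency

# The runs must respect the ordering Omega << w_pGJ << w_c
print(f"d_e,GJ / r_*  = {d_eGJ / r_star:.3e}")
print(f"ordering holds: {Omega < w_pGJ < w_c}")
```

Scripts of this kind make it easy to verify, before launching a run, that the chosen time step and minimum grid spacing resolve \(1/\omega_{p,\rm GJ}\) and \(d_{e,\rm GJ}\).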
### Volume injection In this section, we inject plasma everywhere in the simulation domain where the local electric field component parallel to the magnetic field satisfies the condition \(E_{\parallel}c/r_{*}\Omega B_{s}>k_{\mathrm{lim}}\), where \(k_{\mathrm{lim}}\) is a constant. Similar injection criteria have been used in Belyaev (2015a), whereas in Philippov & Spitkovsky (2014); Kalapotharakos et al. (2018); Brambilla et al. (2018) plasma is only injected if the local magnetization is also above a given threshold. Physically, this injection scheme is equivalent to assuming that electron-positron pair cascades may develop wherever \(E_{\parallel}\) is sufficiently large, _i.e._, it neglects any role of the local magnetic field magnitude or curvature. Since all fields (and in particular \(E_{\parallel}\)) decay with \(r\), the choice of \(k_{\mathrm{lim}}\) can also be interpreted as a spatial limitation to the plasma supply: infinitely small values of \(k_{\mathrm{lim}}\) allow plasma to be injected up to \(r\gg r_{*}\), whereas \(k_{\mathrm{lim}}\sim 1\) restricts the plasma supply to radii \(r\sim r_{*}\). A macro-electron-positron pair carrying a number density \(n_{\mathrm{vol}}=k_{\mathrm{vol}}E_{\parallel}/er_{*}\), with \(k_{\mathrm{vol}}=0.2\), is injected at rest in each cell and time step in which the injection condition is met. The choice of \(k_{\mathrm{vol}}\) is such that a few macro-particles are required to supply the charge density that screens \(E_{\parallel}\) and stops the injection. We can also interpret \(k_{\mathrm{vol}}\) as a parameter proportional to the local GJ density, since \(E_{\parallel}/er_{*}\sim n_{\mathrm{GJ}}\). In all the simulations presented in this section, \(B_{s}er_{*}/m_{e}c^{2}=8\times 10^{3}\), \(N_{r}\times N_{\theta}=1000^{2}\) and \(\Delta tc/r_{*}=10^{-3}\). In these conditions, \(c/\omega_{p,\mathrm{GJ}}r_{*}\simeq 0.022\), whereas the minimum grid spacing is \(\min(\Delta r)/r_{*}=0.003\). In Fig. 7, we present an overview of the quasi-steady-state solution obtained with \(k_{\mathrm{lim}}=0.005\). This solution is achieved after a time \(\sim 25~{}r_{*}/c\sim T/2\) (see Footnote 1). In the first half stellar period, the simulation undergoes a transient stage in which the vacuum co-rotation fields are established and plasma is created. The solution presented in Fig. 7 resembles the canonical force-free regime of pulsar magnetospheres: the magnetosphere is divided in two regions permeated by closed and open magnetic field lines (shown in white/black solid lines in all panels), with the last closed field line crossing the equatorial plane at the light-cylinder radius (shown in a white/black dashed vertical line in all panels). The open and closed field line regions are respectively negatively and positively charged, even if electrons and positrons exist in both regions -- see Fig. 7a-c, showing the electron and positron number density and the total charge density, respectively. As shown in Fig. 7d, a negative radial current density \(j_{r}\) (blue) is conducted from the polar regions and along the open field lines, which is compensated by return current layers (red) established on the last closed field line. The return current layers are connected with each other at a distance \(r\simeq R_{\mathrm{LC}}\) on the equatorial plane, where the poloidal magnetic field lines resemble a Y shape.
A radial current density layer extends along the equatorial plane to large distances, supporting a strong gradient in the toroidal magnetic field component \(B_{\phi}\), illustrated in Fig. 7e. The poloidal magnetic field lines have also opposite polarity on opposite sides of this equatorial current layer, and reconnect sporadically, leading to the formation of outflowing plasmoids -- see the large density structures at \(r/r_{*}\simeq 12\) in Fig. 7a-b. The plasma supply in this simulation is large enough such that \(E_{\parallel}\) is effectively screened in the whole simulation domain, as shown in Fig. 7f, and thus lies well within the assumptions of the force-free regime for pulsar magnetospheres.

Footnote 1: This is not a universal result. In fact, the time required by the system to achieve a steady-state (or quasi steady-state) solution varies with the injection scheme, the stellar ramp-up time \(t_{\mathrm{rise}}\) and other initial and/or boundary conditions.

The quasi-steady-state shown in Fig. 7 is sustained via intermittent injection, mainly along the return current layers. In these regions, \(E_{\parallel}\) is less efficiently screened, leading to the injection of plasma which, in turn, screens the field as it flows along the return current layers. As we shall demonstrate, this intermittency has a period of \(\simeq 0.3-0.5~{}T\), and it may play a significant role in the temporal evolution of the magnetospheric state. However, for \(k_{\mathrm{lim}}=0.005\) the solution never deviates significantly from the force-free regime. In order to demonstrate how the magnetospheric solution changes with \(k_{\mathrm{lim}}\), in Fig. 8 we compare the total charge density of the solutions obtained with \(k_{\mathrm{lim}}=\{0.005,0.01,0.1\}\). We recall that \(k_{\mathrm{lim}}\) is the minimum value of \(E_{\parallel}c/r_{*}\Omega B_{s}\) for which we inject plasma. It is clear that the force-free regime is only observed for \(k_{\mathrm{lim}}=0.005\). For \(k_{\mathrm{lim}}=0.01\), the equatorial current sheet (positively charged region at \(r\gtrsim R_{\mathrm{LC}}\)) is wide and the return current layers are not positively charged everywhere, and for \(k_{\mathrm{lim}}=0.1\) the solution does not even produce an outflow. In fact, by increasing \(k_{\mathrm{lim}}\), we are limiting the plasma supply to regions closer and closer to the stellar surface. This can be understood by noting that this parameter compares the local \(E_{\parallel}\) with the reference value \(\Omega B_{s}r_{*}/c\) (_i.e._, the surface magnitude of the electric field in vacuum). Since the typical magnitude of \(E_{\parallel}\) decreases with \(r\), increasing \(k_{\mathrm{lim}}\) limits plasma injection to smaller radii. In the \(k_{\mathrm{lim}}=0.01\) run, this supply occurs only up to radii \(r/r_{*}\simeq 3\), and the solution shows the same intermittency observed for \(k_{\mathrm{lim}}=0.005\). However, the injection stage is not as efficient in this case, and the equatorial outflow is not dense enough to produce a thin current sheet. For \(k_{\mathrm{lim}}=0.1\), only regions close to the surface can initially fulfill the injection criteria, and no plasma is supplied to large radii. The system relaxes in this case to a fully charge-separated configuration, with only electrons (positrons) in the poles (equatorial region). This solution is often denominated as the disk-dome or electrosphere solution (Jackson 1976; Krause-Polstorff & Michel 1985).
In the charged regions, the electric field is screened, injection ceases and no plasma outflows are formed. An important property of the magnetospheric solution is the integrated Poynting flux \(L(r)\), defined as \[L(r)=\frac{c}{2}\int_{0}^{\pi}(\mathbf{E}\times\mathbf{B})_{r}\,r^{2}\sin\theta\,\mathrm{d}\theta\,. \tag{23}\] Figure 9 shows \(L(r)\) as a function of time for the three simulations described before. This quantity is normalized to the theoretical value of the spindown luminosity, \(L_{0}=\mu^{2}\Omega^{4}/c^{3}\), with \(\mu=B_{s}r_{*}^{3}\). We observe a large spindown at early times for all simulations, which is a consequence of the initial transient stage. After this transient, the \(k_{\mathrm{lim}}=0.1\) simulation converges to a surface Poynting flux \(L_{*}/L_{0}\ll 1\), which is a consequence of the inactivity of the disk-dome solution. On the contrary, the simulations with lower \(k_{\mathrm{lim}}\) have \(L_{*}/L_{0}\sim 1\). The Poynting flux remains approximately constant within the light-cylinder for these runs, and decays with \(r\) for \(r>R_{\mathrm{LC}}\), which is a signature of the conversion from magnetic to kinetic energy due to magnetic reconnection in the equatorial plane. The surface Poynting flux shows variations of periodicity \(0.3-0.5~{}T\), which are correlated with the intermittency of the solution identified above in this section. The time-averaged radial dependence of the luminosity \(\langle L\rangle\) after a stellar period and the temporal dependence of \(L_{*}\) are shown in Fig. 10.

Figure 7: Force-free magnetosphere obtained with volume injection. Panels a-f show the electron and positron density, total charge density, radial current density, azimuthal magnetic field and electric field component parallel to the local magnetic field, respectively. Quantities are multiplied by powers of \(r\) to enhance large radii features. White/black solid lines represent magnetic field lines, and vertical dashed lines show the location of the light-cylinder.

The simulations presented in this section show that the efficiency of the plasma supply critically determines the global structure of the pulsar magnetosphere. It is expected that the pulsar magnetosphere is in a regime close to the force-free configuration identified with \(k_{\rm lim}=0.005\) or lower. However, pair production cannot operate in all regions of the magnetosphere, in particular at radii comparable to the light-cylinder radius. It is then important to assess if more realistic injection and/or pair production schemes can provide the plasma supply required for the magnetosphere to be in the force-free regime. In the next sections, we address this question by considering plasma supply schemes limited to regions close to the stellar surface.

Figure 8: Magnetospheric solutions obtained with volume injection. The panels show the total charge density after a stellar rotation period.

Figure 9: Poynting flux in simulations with volume injection. Values are normalized to the theoretical value \(L_{0}=\mu^{2}\Omega^{4}/c^{3}\).

### Surface injection In this section, we limit injection to occur only at the stellar surface. In doing so, we phenomenologically introduce the important role of the magnetic field amplitude in our treatment of the magnetospheric plasma supply. As in Sect. 3.1, we do not allow particles to emit photons and/or pairs. We adopt two different criteria for the injection and vary the density and velocity of the surface-injected plasma.
The parametrization of the plasma flow injected from the stellar surface is similar to that presented in Cerutti et al. (2015). However, our criteria for injection differ slightly from that work, which also assumes a minimum threshold for the local plasma magnetization. In all simulations presented in this section, we use \(B_{s}er_{*}/m_{e}c^{2}=8\times 10^{3}\), \(N_{r}\times N_{\theta}=500^{2}\) and \(\Delta tc/r_{*}=3\times 10^{-3}\). The first injection criterion is based on the local value of \(E_{\parallel}\). We inject a macro-electron-positron pair in each cell just above the stellar surface (\(r=r_{*}\)) that satisfies \(E_{\parallel}c/r_{*}\Omega B_{*}>k_{\rm lim}\). In this case, we consider a fixed \(k_{\rm lim}=0.002\) and vary the properties of the injected pairs, namely their density \(n_{\rm s}=k_{\rm s}n_{\rm GJ}\) and poloidal velocity \(v_{\rm s}\). These pairs are also injected with a toroidal velocity that matches the local linear velocity of the stellar surface, \(v_{\phi}=\Omega r\sin\theta\). Despite the large range of injection parameters considered, \(k_{\rm s}=n_{\rm s}/n_{\rm GJ}=\{0.2,0.5,1\}\) and \(v_{\rm s}/c=\{0,0.1,0.5,0.99\}\), the solutions obtained for long times, \(t/T\gtrsim 2\), always converge to the disk-dome solution identified in Sect. 3.1. Figure 11 shows the charge density \(\rho\) and \(E_{\parallel}\) of two runs with \(k_{\rm s}=n_{\rm s}/n_{\rm GJ}=1\) and \(v_{\rm s}/c=\{0,0.99\}\) after a time \(t/T\simeq 4\). After an initial transient, the system settles to a charge-separated solution and effectively screens \(E_{\parallel}\) at the stellar surface, precluding further injection. The second injection criterion does not depend on the local surface field conditions. Instead, injection is allowed in all cells above the stellar surface in which the combined local number density of positrons and electrons satisfies \(n_{+}+n_{-}<5\ n_{\rm GJ}\), to ensure that enough plasma exists everywhere to screen the local electric field parallel to the magnetic field. We emphasize that \(n_{\rm GJ}=\Omega B_{*}/2\pi ec\) is the GJ density at the pole and not its local value. This criterion allows injection to occur even if \(E_{\parallel}\sim 0\), and is thus harder to motivate from first-principles arguments. Here, we shall interpret it as a means of producing a set plasma density over a layer near the stellar surface of width smaller than the local resolution of the simulation grid. In pulsars, such a layer can be as small as \(\sim 100\) (Ruderman & Sutherland, 1975). We consider that the injected electron-positron pairs carry a number density \(n_{\rm s}=k_{\rm s}n_{\rm GJ}\) and poloidal velocity \(v_{\rm s}\). In Fig. 12, we show the charge density distribution of the solutions obtained for a fixed \(k_{\rm s}=n_{\rm s}/n_{\rm GJ}=0.2\) and varying \(v_{\rm s}\) for a time \(t/T=1\). With \(v_{\rm s}=0\), the system converges to the electrosphere solution.

Figure 10: Radial and temporal dependencies of Poynting flux in simulations with volume injection. a) shows the time-averaged luminosity \(\langle L\rangle\) as a function of \(r\) after a stellar rotation period, and b) shows the temporal evolution of the surface Poynting flux \(L_{*}\). The dashed lines in a) and b) identify the light-cylinder radius and the theoretical surface Poynting flux \(L_{0}=\mu^{2}\Omega^{4}/c^{3}\), respectively.

Particles injected at early times
develop a space-charge limited flow, driving \(E_{\parallel}\) to zero near the stellar surface and thus preventing freshly injected particles from being pulled away from or towards the star. For \(v_{\rm s}>0\), we observe that the system develops a positively charged outflow along the equatorial plane. This outflow occurs in a narrower current sheet for larger values of \(v_{\rm s}\), which can be understood as a mechanism to support the stronger toroidal magnetic field driven by the stronger poloidal currents of these regimes. However, we do not observe a current sheet as thin as that characteristic of the force-free regime. Instead, the current sheet remains wide even for \(v_{\rm s}/c=0.99\). This may indicate that the plasma launched into this region is not dense enough, a question that we address below in this section. Figure 13 shows the time-averaged Poynting flux produced by the simulations described above with surface injection as a function of the radial coordinate \(r\) and its surface value as a function of time. We see once again that an electrosphere solution (\(v_{\rm s}/c=0\)) produces no spindown luminosity, and that the luminosity increases overall with increasing \(v_{\rm s}\). The same decrease for \(r>R_{\rm LC}\) observed in Sect. 3.1 is observed here. We note that the \(v_{\rm s}/c=0.99\) run shows a surface Poynting flux larger than \(L_{0}\), which is a consequence of the smaller size of the co-rotation region (and thus a smaller effective light-cylinder radius and larger effective \(L_{0}\)). We have also performed a set of simulations with fixed \(v_{\rm s}/c=0.5\) and varying \(k_{\rm s}=n_{\rm s}/n_{\rm GJ}=\{0.1,0.2,0.5\}\). The charge density obtained in the steady-state (or quasi-steady-state) of these simulations is shown in Fig. 14. These results confirm that the denser the injected plasma is, the more the solution approaches the force-free regime (see in particular the solution obtained for \(k_{\rm s}=0.5\)). This injection density requirement seems to be critical in the launching of large density plasma to large radii, in particular along the return current layers, that connect the surface to the equatorial current sheet. In summary, some of the parameters used in simulations presented in this section yield active magnetospheric solutions, with \(L_{*}/L_{0}\sim 1\) and a global configuration similar to the force-free regime. This is consistent with the results presented in Cerutti et al. (2015). However, it is hard to motivate the injection criteria and the choice of numerical parameters required to observe such a regime.

### Pair production

The results presented in Sects. 3.1 and 3.2 are in good agreement with similar previous works. In particular, both Philippov & Spitkovsky (2014) and Cerutti et al. (2015) observe a transition from electrosphere to active solutions with more abundant plasma supply. While in Philippov & Spitkovsky (2014) pairs are injected up to large radii, in Cerutti et al. (2015) only surface injection is considered, showing trends with \(k_{\rm s}\) and \(v_{\rm s}\) very similar to our results.

Figure 11: Magnetospheric solutions obtained with surface injection proportional to \(E_{\parallel}\). a1-2) show the total charge density and \(E_{\parallel}\), respectively, for a simulation with \(v_{\rm s}/c=0\), and b1-2) show the same for a simulation with \(v_{\rm s}/c=0.99\). Solid lines represent magnetic field lines, and vertical dashed lines show the location of the light-cylinder.
The convergence to a force-free regime in the asymptotic limit of large plasma supply with both volume and surface injection is reassuring. However, an important question remains open when translating global simulations with volume and surface injection schemes to realistic systems: how is this plasma supplied, if strong field pair production operates efficiently only near the stellar surface? Is this pair production channel enough to supply the plasma to fill the whole magnetosphere? In young and rapidly rotating pulsars (e.g., the Crab pulsar and other gamma-ray pulsars), pairs can also be created via the \(\gamma\)-\(\gamma\) channel. In this process, for which the cross-section peaks at around a center-of-mass energy \(\sim 2\,m_{e}c^{2}\), gamma-rays produced via synchrotron emission and/or inverse Compton scattering in the equatorial current sheet collide with photons from a low energy bath, producing pairs. However, slower pulsars are not expected to have a sufficiently dense low-energy photon bath for this process to be relevant, and strong field pair production remains the main plasma supply channel. In this section, we use global simulations that include pair production only near the stellar surface to understand whether it can provide enough plasma to maintain an active magnetospheric solution. We use the heuristic pair production model described in Cruz et al. (2021, 2022), in which a lepton emits a pair of combined energy \(\gamma_{\rm pair}m_{e}c^{2}\) whenever it achieves a threshold Lorentz factor \(\gamma_{\rm thr}\). We keep the ratio \(\gamma_{\rm thr}/\gamma_{\rm pair}\) constant, and vary the ratio \(\eta\equiv\gamma_{\rm max}/\gamma_{\rm thr}\), where \(\gamma_{\rm max}=e\Phi_{\rm pc}/m_{e}c^{2}\) is the maximum energy achievable by the particles in the voltage \(\Phi_{\rm pc}=B_{*}r_{*}^{3}\Omega^{2}/c^{2}\) induced by the rotating star across the polar cap. In general, \(\gamma_{\rm pair}\ll\gamma_{\rm thr}\ll\gamma_{\rm max}\) in real systems; however, it is very hard to achieve a large separation between these scales in global PIC simulations. For instance, previous works considering a similar pair production model (Chen, 2017; Philippov et al., 2015) have used \(\eta\sim 10\) and \(\gamma_{\rm thr}/\gamma_{\rm pair}\sim 2\), which severely limits the efficiency of the pair cascades and the plasma multiplicity. In this section, we present simulations with fixed \(\gamma_{\rm pair}=16\) and \(\gamma_{\rm thr}=25\) and a range of large values of \(\eta\). We achieve this by controlling the surface magnetic field amplitude \(B_{*}\). In doing this, besides increasing the scale separation between pair production and the dynamical scales, we also decrease the plasma kinetic scales. For this reason, we adopt a varying number of grid cells and time steps in our simulations to be able to resolve these scales. For \(\eta=5\) we use \(N_{r}\times N_{\theta}=500^{2}\) and \(\Delta tc/r_{*}=3\times 10^{-3}\), for \(\eta=\{25,50\}\) we use \(N_{r}\times N_{\theta}=1000^{2}\) and \(\Delta tc/r_{*}=10^{-3}\) and for \(\eta=\{100,150\}\) we use \(N_{r}\times N_{\theta}=2000^{2}\) and \(\Delta tc/r_{*}=5\times 10^{-4}\). In order to mimic the relevance of the large magnetic field required for pair production to occur, we limit pair production to only occur at radii \(r/r_{*}<3\).

Figure 12: Magnetospheric solutions obtained with surface injection proportional to \(n_{\rm GJ}\) with fixed \(k_{\rm s}=n_{\rm s}/n_{\rm GJ}=0.2\) and varying \(v_{\rm s}/c\).
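As a concrete illustration of this heuristic model, the following minimal Python sketch (a toy under our own assumptions, not the actual simulation implementation; all names are invented) applies one pair-production step to an array of lepton Lorentz factors, including the restriction to \(r/r_{*}<3\) and the polar-axis exclusion described next:

```python
import numpy as np

def pair_production_step(gamma, r, theta,
                         gamma_thr=25.0, gamma_pair=16.0, r_max=3.0):
    """One step of the heuristic model: every lepton with gamma >= gamma_thr
    inside the strong-field region r/r_* < r_max (and away from the polar
    axis) emits a pair of combined energy gamma_pair * m_e c^2, which is
    removed from the parent. Returns updated energies and emitter indices."""
    emit = (gamma >= gamma_thr) & (r < r_max) & (theta >= 0.01)
    gamma_out = np.where(emit, gamma - gamma_pair, gamma)
    return gamma_out, np.nonzero(emit)[0]

# Toy usage: 10^5 leptons with random energies and positions (r in r_* units).
rng = np.random.default_rng(1)
gamma = rng.uniform(1.0, 100.0, 100_000)
r = rng.uniform(1.0, 10.0, 100_000)
theta = rng.uniform(0.0, np.pi, 100_000)
gamma, emitted = pair_production_step(gamma, r, theta)
print("pairs produced this step:", emitted.size)
```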
We also forbid pair production for \(\theta<0.01\), to reproduce the suppression of the corresponding QED cross-section in this region (Cruz et al., 2021). Seed electron-positron pairs are provided at the stellar surface whenever \(E_{\parallel}c/r_{*}\Omega B_{*}>k_{\rm lim}\), with \(k_{\rm lim}=0.1\). Each pair is injected at rest and carries a density \(n_{\rm s}=k_{\rm s}E_{\parallel}/er_{*}\), with \(k_{\rm s}=n_{\rm s}/n_{\rm GJ}=0.2\). We stress that under these conditions, we obtained an electrosphere configuration in simulations without pair production (see Sect. 3.2). In Figure 15, we show the charge density obtained at a time \(t/T\simeq 2\) for a relevant subset of the simulations performed. We observe a transition from electrosphere to force-free-like configurations by increasing \(\eta\). Physically, this corresponds to allowing more pairs per particle to be created, hence increasing the plasma supply of the system. For \(\eta=5\), pair production is not efficient enough, and after an initial transient with some pair production, the accelerating electric field is screened and the system settles to an inactive solution. For \(\eta\sim 10-50\), the system is able to launch plasma towards the light-cylinder and produce a positively charged equatorial outflow. This plasma is launched along the return current layers due to pair production at \(r/r_{*}<3\); however, because of the limited effectiveness of the pair production in this range of \(\eta\), the plasma produced is not dense enough to confine the equatorial current sheet to a thin region, and it becomes wide at large distances from the stellar surface. For \(\eta\gtrsim 100\), the system converges to a near force-free regime, with magnetic field lines open to infinity and a thin equatorial current sheet. In these simulations, pair production is very effective, and launches a high-density (\(n\sim\) a few \(n_{\rm GJ}\)), quasi-neutral plasma to the light-cylinder. In this region, part of the plasma escapes along the equatorial field lines; however, a fraction of the particles flows back to the star. The majority of these particles are electrons, such that the return current layers are negatively charged. The time-averaged radial dependence of the Poynting flux and its surface value as a function of time for the simulations described above are presented in Figure 16. The observed radial dependence is similar to the regimes previously observed, with the \(\eta\gtrsim 100\) simulations approaching the force-free spindown luminosity \(L_{0}\) within the light-cylinder. In the equatorial current sheet, a fraction of \(0.3-0.4~L_{*}\) is dissipated between \(r\sim R_{\rm LC}\) and \(r\sim 2~R_{\rm LC}\) and converted into particle kinetic energy. For all \(\eta<100\) runs, the surface luminosity decreases over time, and we expect them to eventually converge to the electrosphere solution for \(t/T\gg 1\). However, for \(\eta\gtrsim 100\), the surface Poynting flux remains stable over time.

Figure 13: Radial and temporal dependencies of Poynting flux in simulations with surface injection proportional to \(n_{\rm GJ}\) with fixed \(k_{\rm s}=n_{\rm s}/n_{\rm GJ}=0.2\) and varying \(v_{\rm s}/c\). a) shows the time-averaged luminosity \(\langle L\rangle\) as a function of \(r\) after a stellar rotation period, and b) shows the temporal evolution of the surface Poynting flux \(L_{*}\). The dashed lines in a) and b) identify the light-cylinder radius and the theoretical surface Poynting flux \(L_{0}=\mu^{2}\Omega^{4}/c^{3}\), respectively.
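The luminosities quoted throughout follow from Eq. (23); a minimal numerical sketch of that integral on an axisymmetric \((r,\theta)\) grid (the array names and data layout are our own assumptions, not the code's actual data structures) is:

```python
import numpy as np

def poynting_luminosity(E_theta, E_phi, B_theta, B_phi, r, theta, c=1.0):
    """Evaluate L(r) = (c/2) * int_0^pi (E x B)_r r^2 sin(theta) dtheta,
    using (E x B)_r = E_theta * B_phi - E_phi * B_theta.
    Field arrays have shape (Nr, Ntheta); r and theta are 1D axes."""
    ExB_r = E_theta * B_phi - E_phi * B_theta
    integrand = ExB_r * (r[:, None] ** 2) * np.sin(theta)[None, :]
    return 0.5 * c * np.trapz(integrand, theta, axis=1)   # shape (Nr,)

# Example with placeholder fields on a 500 x 500 grid.
rng = np.random.default_rng(2)
r = np.linspace(1.0, 20.0, 500)
theta = np.linspace(0.0, np.pi, 500)
fields = [rng.standard_normal((500, 500)) for _ in range(4)]
L = poynting_luminosity(*fields, r, theta)
print(L.shape)  # (500,)
```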
All simulations present some temporal variability. We see small-scale fluctuations in the charge and current densities in the open field line outflows, due to the \(E_{\parallel}\) screening process resulting from pair cascades. These fluctuations occur on a temporal scale \(\sim r_{*}/\eta c\). We also observe a quasi-periodic launch of plasma towards the light-cylinder region along the return current layers with a temporal scale \(\sim 0.3-0.5~{}T\). We show one of these events in Figure 17 for a simulation with \(\eta=100\). As plasma is injected along the last closed field lines, most of it escapes along the equatorial current sheet. As this happens, the return current density drops close to \(r\sim R_{\rm LC}\), allowing \(E_{\parallel}\) to grow. Electrons flowing back to the star are thus accelerated along these field lines and produce a large number of pairs when they enter the pair-producing region \(r/r_{*}<3\) -- see e.g., Figure 17 a1) and b1). The secondary particles then advect to large radii along the return current layers, reestablishing \(j_{r}\) and effectively screening the \(E_{\parallel}\) responsible for triggering the process -- see Figure 17 d1-3). This process produces a larger fraction of the total pair production events for \(10\lesssim\eta\lesssim 50\). The solutions obtained in this range resemble that of _weak pulsars_ (Gruzinov, 2015), with screened surface \(E_{\parallel}\) but with wide equatorial current sheets as a result of inefficient pair production. The process presented here is similar to that described in Chen et al. (2020); Bransgrove et al. (2022). The periodicity of the cyclic behaviour driven by pair production along the return current layers is \(\sim 0.3-0.5~{}T\). We believe that this periodicity can depend on the multiplicity from the pair cascade near \(r/r_{*}\sim 3\), since if more pairs outflow during the active phase, more electrons can be stored in the Y-point charge cloud, which takes longer to deplete. If this is true, a larger multiplicity should translate to a longer duty cycle. A detailed study of the importance of the cascade multiplicity on the cyclic behaviour is deferred to future work. Finally, we note that apart from the effective pair discharges along the return current layers, we also observe abundant pair production within the polar cap region for all simulations with \(\eta>5\) -- see Figure 18 for an illustrative example. This occurs because the density supplied from the stellar surface is insufficient to screen \(E_{\parallel}\) in this region. With stronger surface injection, we expect this pair production to be less significant. However, we do not expect the overall structure of the magnetosphere to be meaningfully modified. Interestingly, the polar cap pair production observed in this regime resembles that expected when general relativity effects are taken into account. When corrections due to the strong gravitational field of the neutron star are considered, we expect pair creation activity within the polar cap even if the surface can supply a charge density \(\pm en_{\rm GJ}\) (Philippov et al., 2015; Chen et al., 2020; Bransgrove et al., 2022), since general relativity requires a current in this region \(|j_{r}|>en_{\rm GJ}c\) (Beloborodov, 2008; Belyaev & Parfrey, 2016; Gralla et al., 2016; Torres et al., 2023). Apart from driving this difference in the time-dependent nature of the polar cap, general relativity is not expected to play a significant role in the overall magnetospheric organization.
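As a hedged analysis sketch (our own illustration, not part of the paper's pipeline), the \(\sim 0.3-0.5~T\) quasi-periodicity discussed above can be extracted from a surface Poynting flux time series with a discrete Fourier transform; all names here are invented:

```python
import numpy as np

def dominant_period(L_star, dt):
    """Period of the strongest nonzero-frequency component of the
    (mean-subtracted) time series L_star, sampled with uniform spacing dt."""
    x = L_star - np.mean(L_star)
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=dt)
    k = 1 + np.argmax(power[1:])          # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Toy series: period 0.4 T plus noise, sampled 1000 times over 5 T.
T = 1.0
t = np.linspace(0.0, 5 * T, 1000, endpoint=False)
rng = np.random.default_rng(3)
L_star = np.sin(2 * np.pi * t / (0.4 * T)) + 0.3 * rng.standard_normal(t.size)
print(f"recovered period: {dominant_period(L_star, t[1] - t[0]):.2f} T")
```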
Figure 14: Magnetospheric solutions obtained with surface injection proportional to \(n_{\rm GJ}\) with fixed \(v_{\rm s}\) and varying \(k_{\rm s}=n_{\rm s}/n_{\rm GJ}\).

## 4 Conclusions

In this work, we have presented a systematic study of the different global regimes of pulsar magnetospheres. Namely, we have performed simulations with three distinct plasma sources: in volume, from the stellar surface, and via pair production. Our results, presented in Sect. 3, show that all plasma sources produce near force-free solutions in the regime of large plasma supply. In the opposite regime, we obtain inactive electrosphere solutions with all sources. These results are in overall good agreement with other works considering independently volume (Philippov & Spitkovsky, 2014; Belyaev, 2015; Kalapotharakos et al., 2018; Brambilla et al., 2018) or surface injection schemes (Cerutti et al., 2015; Hakobyan et al., 2023) or with heuristic pair production models (Chen & Beloborodov, 2014; Philippov et al., 2015a,b; Chen et al., 2020; Guepin et al., 2020; Bransgrove et al., 2022). While volume and surface plasma injection serve as a means to efficiently fill the pulsar magnetosphere and produce a near force-free configuration, as shown in Sects. 3.1 and 3.2, respectively, these are hard to motivate from first-principle arguments. On one hand, the pair cascades that these injection schemes aim to mimic develop only when the local magnetic field is close to the Schwinger field, and as such they should only operate near the stellar surface. On the other hand, these cascades produce plasma with a complex energy distribution, that depends on, e.g., the local electric and magnetic field geometry. Thus, any volume or surface injection scheme is a substantial simplification of the highly nonlinear plasma supply from pair cascades in pulsars. Understanding if and how pair production alone can fill the whole pulsar magnetosphere is thus crucial, namely to reliably determine observational signatures. The simulations including pair production presented in Sect. 3.3 show that pair discharges operating close to the stellar surface produce a range of solutions of the pulsar magnetosphere. The character of the solution depends critically on the ratio between the maximum attainable particle energy and the energy at which leptons emit pair producing photons, \(\eta=\gamma_{\rm max}/\gamma_{\rm thr}\), which quantifies the efficiency of the pair discharges. Our results show that when \(\eta\gtrsim 100\), enough pairs are created to fill the magnetosphere and reach a near force-free surface Poynting flux, with dissipation occurring in an equatorial current sheet beyond the light-cylinder. In the opposite limit, \(\eta\lesssim 10\), the magnetosphere settles to a fully charge-separated, static solution, with \(E_{\parallel}=0\) near the surface, that produces a negligible Poynting flux. For \(\eta\sim 10-50\), we observe an intermediate solution (Gruzinov, 2015), with a wide equatorial current sheet and with a surface Poynting flux \(50-80\%\) below that expected in the force-free regime. Our simulations show that the pair production along the return current layers is key to feeding plasma to the light-cylinder region and beyond in near force-free regimes, in line with the results reported in other works, e.g. Chen & Beloborodov (2014). We have also identified a time-dependent mechanism similar to that presented in Chen et al. (2020); Bransgrove et al.
(2022), which results from periodic openings of an outer gap in which particles flowing back to the star are able to accelerate, producing pairs when they get close to the stellar surface. The simulations presented here used a very simple heuristic model to describe pair production in strong magnetic fields. In this work, we have only explored the role of the parameter \(\eta\) on the magnetospheric structure and left the ratio \(\gamma_{\rm thr}/\gamma_{\rm pair}\) unchanged. This ratio plays an important role in the multiplicity of pair cascades, and was kept low to make simulations feasible. Larger values of \(\gamma_{\rm thr}/\gamma_{\rm pair}\) will likely provide even more abundant pairs to large radii, such that smaller values of \(\eta\) may be enough to set the magnetosphere in a force-free regime. Such a study is left for future work. The pair production model considered here provides an adequate description of pair cascades when the curvature photon mean free path is negligible, _i.e._, when pair production is local. In global models, however, it is easy to conceive that photons emitted in some regions of the magnetosphere may decay into pairs in others. For instance, photons emitted by electrons travelling towards the star along the return current layer may decay in the polar cap region. It would thus be interesting to include more sophisticated pair production models in these simulations to assess if nonlocal pair production may play a significant role in, e.g., coherent emission processes. In this work, we have also described a spherical grid suitable to perform global PIC simulations of pulsar magnetospheres. We have detailed a) an electromagnetic field solver based on the Yee solver that uses an integral form of Maxwell's equations (Sect. 2.2), b) a particle pusher that solves the particles' equations of motion in Cartesian coordinates (Sect. 2.3) and c) a charge-conserving current deposition scheme (Sect. 2.4) for a non-uniform, curvilinear spherical grid. While the field solver and particle pusher techniques are also implemented in other similar codes, the current deposition scheme presented here is a novel development. By ensuring that the continuity equation (and, consequently, Gauss' law) is satisfied in the current deposition, this method does not require that other numerical algorithms are used to correct for artificial charges in the grid. For each of the numerical schemes presented here, we have provided comprehensive benchmarks for a variety of test scenarios. All numerical schemes presented here have been implemented in the PIC code OSIRIS.

Figure 15: Magnetospheric solutions obtained with pair production. Panels a-d) show the total charge density for simulations with \(\eta=\{5,25,50,150\}\). Solid lines represent magnetic field lines, and vertical dashed lines show the location of the light-cylinder.

## 5 Acknowledgments

FC, TG, RAF and LOS acknowledge support from the European Research Council (ERC-2015-AdG Grant 695088) and FCT (Portugal) -- Foundation for Science and Technology (grant PD/BD/114307/2016, in the framework of the Advanced Program in Plasma Science and Engineering APPLAuSE, grant PD/00505/2012, and project no. 2022.02230.PTDC). AC acknowledges support from NSF grants DMS-2235457 and AST-2308111. AS is supported in part by NSF grant PHY-2206607. We acknowledge PRACE for granting access to MareNostrum, Barcelona Supercomputing Center (Spain), where the simulations presented in this work were performed.
2310.20639
The two-variable hypergraph Tutte polynomial via embedding activities
We prove that the two-variable Tutte polynomial of hypergraphs can be defined via embedding activities. We also prove that embedding activities of hypergraphs yield a Crapo-style decomposition of $\mathbb{Z}^E$, thus generalizing Bernardi's results from graphs to hypergraphs. We also show that hypergraph embedding activities do not fit into the $\Delta$-activity framework of Courtiel. Based on this observation, we construct a graph with an activity notion that yields a Crapo decomposition, but cannot be obtained as a $\Delta$-activity.
Lilla Tóthmérész
2023-10-31T17:07:41Z
http://arxiv.org/abs/2310.20639v2
# The two-variable hypergraph Tutte polynomial via embedding activities

###### Abstract.

We prove that the two-variable Tutte polynomial of hypergraphs can be defined via embedding activities. We also prove that embedding activities of hypergraphs yield a Crapo-style decomposition of \(\mathbb{Z}^{E}\), thus generalizing Bernardi's results from graphs to hypergraphs.

## 1. Introduction

The Tutte polynomial is one of the most important and well-studied polynomials associated to graphs, and more generally, to matroids. One of the nice properties of the Tutte polynomial is that it has many equivalent definitions. A class of these definitions gives the Tutte polynomial as a generating function of various types of activities of spanning trees. The original definition of Tutte used activities with respect to an arbitrary ordering of the edges [9]. A remarkable fact about that definition is that although the activities of spanning trees depend on the chosen edge ordering, the actual polynomial does not. A different definition, due to Bernardi [1], defines activities via an embedding of the graph into an orientable surface, and in this sense, replaces the edge ordering with a more natural auxiliary structure. Kalman [5] generalized Tutte's activity definition of \(T(x,1)\) and \(T(1,y)\) to hypergraphs, and more generally, to polymatroids. Later, Bernardi, Kalman and Postnikov [3] gave a two-variable polynomial \(\mathcal{T}_{P}(x,y)\) for a polymatroid \(P\) using activities (again, with respect to a fixed ordering of the ground set). This polynomial generalizes the Tutte polynomial \(T_{M}(x,y)\) of matroids (subject to a change of variables). A different two-variable polymatroid Tutte polynomial was defined by Fink and Cameron [4] using lattice point counts instead of activities. Their polynomial is also a transformation of \(\mathcal{T}_{P}(x,y)\). It is not a priori obvious whether Bernardi's embedding activity definition can be generalized further than the class of graphs. However, in [6], Kalman and the author of the current paper showed that embedding activities can be generalized to hypergraphs (which can be thought of as a class of polymatroids). They proved that a formula can be given for \(\mathcal{T}_{\mathcal{H}}(x,1)\) of a hypergraph using embedding activities. They also conjectured an analogous formula for \(\mathcal{T}_{\mathcal{H}}(1,y)\) using embedding activities; however, their method of proof could not handle external embedding activities. As a consequence, it also remained open whether a formula can be given for a two-variable hypergraph Tutte polynomial using embedding activities. In this paper, we resolve this question, and give a Bernardi-style definition for the two-variable hypergraph Tutte polynomial. Along the way, we confirm that the conjectured Bernardi-style definition for \(\mathcal{T}_{\mathcal{H}}(1,y)\) from [6] indeed gives the exterior polynomial. We note that [6] also proposed an alternative way to define embedding activities. We conjecture that these alternative embedding activities also yield the polynomial \(\mathcal{T}_{\mathcal{H}}(x,y)\), but this remains a conjecture (see Section 5). Let us briefly describe our method, and compare it to [3]. Bernardi, Kalman and Postnikov proved the well-definedness of \(\mathcal{T}_{P}(x,y)\) by first generalizing the corank-nullity definition of the Tutte polynomial to polymatroids. The well-definedness of this corank-nullity polynomial is immediate.
Then, they established a Crapo-type decomposition of \(\mathbb{Z}^{E}\) for activities with respect to a fixed edge ordering. Finally, they used their Crapo-type decomposition to show that for any fixed ordering, the polynomial defined via activities has the same relationship to the corank-nullity polynomial, hence \(\mathcal{T}_{P}(x,y)\) does not depend on the ordering of the ground set. In this paper we follow the same path. Hence the main result of the paper is that a Crapo-type decomposition of \(\mathbb{Z}^{E}\) exists with respect to embedding activities of a hypergraph (see Theorem 3.6). Once we prove Theorem 3.6, we can prove that the polynomial defined via embedding activities has the same relationship to the corank-nullity polynomial as \(\mathcal{T}_{P}(x,y)\). Hence for any embedding, the polynomial defined via embedding activities agrees with \(\mathcal{T}_{P}(x,y)\). For graphs, Bernardi proved in [2, Theorem 13] the existence of a Crapo-decomposition for embedding activities. However, the hypergraph case does not follow easily from his arguments, and needs new ideas. Outline of the paper: In Section 2, we introduce the necessary background on activities and hypergraphs. We state our main results in Section 3, where we also show how the well-definedness of the embedding activity definition of the two-variable hypergraph Tutte polynomial follows from a Crapo decomposition of \(\mathbb{Z}^{E}\) for embedding activities. Section 4 is dedicated to the main technical result: the existence of the Crapo-decomposition for embedding activities of hypergraphs. Finally, in Section 5, we recall an alternative definition for embedding activities suggested in [6], and pose it as an open problem if those activities also yield \(\mathcal{T}_{\mathcal{H}}\). ### Acknowledgement I am grateful to Tamas Kalman for helpful discussions. This work was supported by the National Research, Development and Innovation Office of Hungary - NKFIH, grant no. 132488, by the Janos Bolyai Research Scholarship of the Hungarian Academy of Sciences, and by the UNKP-22-5 New National Excellence Program of the Ministry for Innovation and Technology, Hungary. This work was also partially supported by the Counting in Sparse Graphs Lendulet Research Group of the Alfred Renyi Institute of Mathematics. ## 2. Preliminaries In this section we introduce the necessary background on graphs, hypergraphs and activities. ### Graphs and their activities We briefly recall the definitions of the Tutte polynomial of a graph by Tutte and by Bernardi. Let \(G\) be a graph and \(T\) a spanning tree of \(G\). If \(e\notin T\), then \(T\cup e\) has a unique cycle, which is called the _fundamental cycle_ of \(e\) in \(T\), and is denoted by \(C_{G}(T,e)\). If \(G\) is clear from the context, we drop the subscript. If \(e\in T\), then \(T-e\) has two connected components, and the edges of \(G\) connecting the two components form a cut, which is called the _fundamental cut_ of \(e\) in \(T\), and is denoted by \(C_{G}^{*}(T,e)\). If \(G\) is clear from the context, we drop the subscript. #### 2.1.1. Activities with respect to an edge ordering **Definition 2.1** (activity).: Let \(G\) be a graph, and \(<\) an arbitrary ordering of the edges of \(G\). Let \(T\) be a spanning tree of \(G\). An edge \(e\in T\) is internally active if there is no edge \(f<e\) such that \(T-e\cup f\) is also a spanning tree. \(i(T)\) denotes the number of internally active edges for \(T\). 
An edge \(e\notin T\) is externally active if there is no edge \(f<e\) such that \(T\cup e-f\) is also a spanning tree. \(e(T)\) denotes the number of externally active edges for \(T\). **Definition 2.2** (Tutte polynomial [9]).: Let \(G\) be a graph. \[T_{G}(x,y)=\sum_{T\text{ spanning tree}}x^{i(T)}y^{e(T)}.\] This is well-defined, that is, \(T_{G}(x,y)\) does not depend on the ordering \(<\) used to define the activities. #### 2.1.2. Embedding activities Let \(G\) be a graph. A _ribbon structure_ of \(G\) is a family of cyclic permutations: for each vertex \(x\) of \(G\), a cyclic permutation of the edges incident to \(x\) is given. For an edge \(xy\) of \(G\), we use the following notations: * \(yx_{G}^{+}\): the edge following \(yx\) at \(y\) * \(xy_{G}^{+}\): the edge following \(xy\) at \(x\). If \(G\) is clear from the context, we omit the subscript. In addition to a ribbon structure, we also need to fix a _basis_ \((b_{0},b_{0}b_{1})\), where \(b_{0}\) is an arbitrary node of the graph and \(b_{0}b_{1}\) is an arbitrary edge incident to \(b_{0}\). Suppose that a ribbon structure and a basis are fixed. Then any spanning tree \(T\) of \(G\) gives a "walk" in the graph. This was defined by Bernardi [1], and following him we call it the _tour_ of \(T\). **Definition 2.3** (Tour of a spanning tree).: Let \(G\) be a ribbon graph with a basis \((b_{0},b_{0}b_{1})\), and let \(T\) be a spanning tree of \(G\). The _tour_ of \(T\) is a sequence of node-edge pairs, starting with \((b_{0},b_{0}b_{1})\). If the current node-edge pair is \((x,xy)\) and \(xy\notin T\), then the current node-edge pair of the next step is \((x,xy^{+})\). If the current node-edge pair is \((x,xy)\) and \(xy\in T\), then the current node-edge pair of the next step is \((y,yx^{+})\). In the first case we say that the tour _skips_ \(xy\) and in the second case we say that the tour _traverses_ \(xy\). The tour stops right before when \((b_{0},b_{0}b_{1})\) would once again become the current node-edge pair. See Figure 1 for an example. Bernardi proved [1, Lemma 4.2 (Lemma 5 in the arxiv version)] that in the tour of a spanning tree \(T\), each edge \(xy\) of \(G\) becomes current edge twice, in one case with \(x\) as current vertex, and in the other case with \(y\) as current vertex. This naturally orders the edges of the graph in the order that they first become current. Bernardi defined the internal and external embedding activities of a tree \(T\) as the internal and external activities with respect to the order induced by the tour of \(T\). He then showed that the two-variable generating function of internal and external activities also gives the Tutte polynomial (in particular, the generating function does not depend on the chosen ribbon structure and basis). Embedding activities were generalized for hypergraphs in [6], where also their connections with geometry were explored. (They are related to dissections of root polytopes of bipartite graphs.) In [6], a formula for \(\mathcal{T}_{\mathcal{H}}(x,1)\) was given using embedding activities. In this paper, we give a formula also for \(\mathcal{T}_{\mathcal{H}}(x,y)\). ### Hypergraphs and their activities A _hypergraph_ is an ordered pair \(\mathcal{H}=(V,E)\), where \(V\) is a finite set and \(E\) is a finite multiset of subsets of \(V\). We refer to elements of \(V\) as _vertices_ and to elements of \(E\) as _hyperedges_. It is convenient to represent a hypergraph by a bipartite graph.
For a hypergraph \(\mathcal{H}=(V,E)\), let the _underlying bipartite graph_ \(\operatorname{Bip}\mathcal{H}\) be the bipartite graph with vertex classes \(V\) and \(E\), where \(v\in V\) is connected to \(e\in E\) if \(v\in e\) in \(\mathcal{H}\). We will mostly think of hypergraphs as bipartite graphs. When talking about \(\operatorname{Bip}\mathcal{H}\), we will call the elements of \(V\cup E\) _nodes_. Specifically, the elements of \(V\) will be called _violet_ nodes and elements of \(E\) _emerald_ nodes. We use the words hyperedge and emerald node interchangeably. We use Greek letters to denote the edges of \(\operatorname{Bip}\mathcal{H}\). Throughout the paper, we assume that \(\mathcal{H}\) is _connected_, which means that \(\operatorname{Bip}\mathcal{H}\) is connected. Spanning trees of graphs are the bases of a matroid: the graphic matroid. For hypergraphs, Kalman [5] defined hypertrees, which generalize (characteristic vectors of) spanning trees from graphs to hypergraphs. Very importantly, hypertrees form a polymatroid. See more about this in [5]. **Definition 2.4**.: Let \(\mathcal{H}=(V,E)\) be a hypergraph with underlying bipartite graph \(\operatorname{Bip}\mathcal{H}\). We say that the vector \(h\in\mathbb{Z}^{E}\) is a _hypertree_ if there exists a spanning tree \(T\) of \(\operatorname{Bip}\mathcal{H}\) that has degree \(d_{T}(e)=h(e)+1\) at each node \(e\in E\). In this case we say that \(T\) represents \(h\). We denote the set of all hypertrees of \(\mathcal{H}\) by \(H(\mathcal{H})\). If \(\mathcal{H}\) is clear from the context, we simply write \(H\). Note that a hypertree might have many different representing spanning trees. It is an easy exercise to check that if \(G\) is a connected graph, then the hypertrees of \(\operatorname{Bip}G\) are exactly the characteristic vectors of spanning trees, hence hypertrees indeed generalize spanning trees. Hypertrees form bases of a polymatroid (or in a different terminology, they form an M-convex set) since a characterization for them can be given by a submodular set-function, see [5, Proposition 4.9] and [3, Remark 14.6].

Figure 1. The tour of a spanning tree. Let the ribbon structure be the one induced by the positive orientation of the plane. The edges of the tree are drawn by thick lines, the non-edges by dashed lines. With \(b_{0}=v_{1},b_{1}=v_{2}\), we get the tour \((v_{1},\varepsilon_{1})\), \((v_{2},\varepsilon_{2})\), \((v_{2},\varepsilon_{5})\), \((v_{4},\varepsilon_{3})\), \((v_{3},\varepsilon_{2})\), \((v_{3},\varepsilon_{3})\), \((v_{4},\varepsilon_{4})\), \((v_{4},\varepsilon_{5})\), \((v_{2},\varepsilon_{1})\), \((v_{1},\varepsilon_{4})\). This is a Jaeger tree because \((v_{2},\varepsilon_{2})\) precedes \((v_{3},\varepsilon_{2})\) and \((v_{4},\varepsilon_{4})\) precedes \((v_{1},\varepsilon_{4})\).

This implies the following exchange property. **Proposition 2.5**.: _[_3_]_ _Let \(h\) and \(h^{\prime}\) be hypertrees and \(e\) be a hyperedge with \(h(e)<h^{\prime}(e)\). Then there exists a hyperedge \(f\) with \(h(f)>h^{\prime}(f)\) such that \(h+\mathbf{1}_{e}-\mathbf{1}_{f}\) and \(h^{\prime}-\mathbf{1}_{e}+\mathbf{1}_{f}\) are both hypertrees._ #### 2.2.1. Activities with respect to a hyperedge ordering Let us recall how [5] and [3] associate activities to hypertrees. We note that these constructions make sense (and were investigated) in the broader case of polymatroids, but as the current paper focuses on embedding activities for hypergraphs, we only talk about the hypergraph case here.
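As a brute-force illustration of Definition 2.4 (our own sketch, practical only for very small hypergraphs; the function names and data conventions are invented), one can list the hypertrees by enumerating the spanning trees of \(\operatorname{Bip}\mathcal{H}\) and recording their degrees at the emerald nodes:

```python
from itertools import combinations

def hypertrees(V, E_nodes, edges):
    """All hypertrees of the hypergraph whose underlying bipartite graph is
    given by `edges` (a list of (emerald, violet) pairs), as tuples indexed
    in the order of E_nodes (Definition 2.4, by brute force)."""
    nodes = list(V) + list(E_nodes)
    n = len(nodes)

    def is_spanning_tree(T):
        parent = {x: x for x in nodes}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for e, v in T:
            re_, rv = find(e), find(v)
            if re_ == rv:
                return False           # this edge would close a cycle
            parent[re_] = rv
        return True                    # acyclic with n - 1 edges => tree

    result = set()
    for T in combinations(edges, n - 1):
        if is_spanning_tree(T):
            deg = {e: 0 for e in E_nodes}
            for e, _ in T:
                deg[e] += 1
            result.add(tuple(deg[e] - 1 for e in E_nodes))
    return result

# Two parallel hyperedges on two vertices: Bip(H) is a 4-cycle.
V = ["v1", "v2"]
E_nodes = ["e1", "e2"]
edges = [("e1", "v1"), ("e1", "v2"), ("e2", "v1"), ("e2", "v2")]
print(hypertrees(V, E_nodes, edges))   # {(1, 0), (0, 1)}
```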
Let \(<\) be an ordering of the hyperedges \(E\). **Definition 2.6**.: Let \((V,E)\) be a hypergraph with an order \(<\) on the set \(E\). A hyperedge \(e\in E\) is _internally active_ for the hypertree \(h\), with respect to \(<\), if \(h-\mathbf{1}_{e}+\mathbf{1}_{f}\) is not a hypertree for any \(f<e\). A hyperedge \(e\in E\) is _externally active_ for the hypertree \(h\), with respect to \(<\), if \(h+\mathbf{1}_{e}-\mathbf{1}_{f}\) is not a hypertree for any \(f<e\). Let \(Int_{<}(h)\) denote the set of internally active elements of \(h\) with respect to the order \(<\), and let \(Ext_{<}(h)\) denote the set of externally active elements of \(h\) with respect to \(<\). Let \(i_{<}(h)=|Int_{<}(h)|\) denote the number of internally active hyperedges in \(h\) and call this value the _internal activity_ of \(h\). Let \(e_{<}(h)=|Ext_{<}(h)|\) denote the number of externally active hyperedges in \(h\) and call this value the _external activity_ of \(h\). Moreover, let \(oi_{<}(h)=|Int_{<}(h)-Ext_{<}(h)|\), \(oe_{<}(h)=|Ext_{<}(h)-Int_{<}(h)|\) and \(ie_{<}(h)=|Int_{<}(h)\cap Ext_{<}(h)|\). The hypergraph Tutte polynomial of Bernardi, Kalman and Postnikov is defined the following way: **Definition 2.7**.: \[\mathcal{T}_{\mathcal{H}}(x,y)=\sum_{h\in H}x^{oi_{<}(h)}y^{oe_{<}(h)}(x+y-1)^{ie_{<}(h)}.\] They show that this polynomial is well-defined, that is, it does not depend on the chosen hyperedge-ordering. Moreover, it has many nice properties. (In fact, they prove all this for the case of polymatroids.) The hypergraph Tutte polynomial does not agree with the usual graphic Tutte polynomial, but it can be obtained as a simple transformation. Indeed, for graphs, one typically only defines internal activity for edges that are in the spanning tree, and external activity for edges that are not in the spanning tree. However, for hypergraphs it is not possible to make such a distinction, as the value of a hypertree on a hyperedge can potentially be more than \(1\). Hence in Definition 2.6, any hyperedge can potentially be active (both internally and externally). If we apply Definition 2.6 to a graph, then each edge not in the spanning tree will be internally active, and each edge in the spanning tree will be externally active. Taking these differences into account, one obtains that for a graph \(G=(V,E)\) we have \(T_{G}(x,y)=x^{|E|-|V|+1}y^{|V|-1}\mathcal{T}_{G}(\frac{x+y-1}{y},\frac{x+y-1}{y})\), see [3, Theorem 5.2]. Two interesting specializations of \(\mathcal{T}_{\mathcal{H}}(x,y)\) are \[\mathcal{T}_{\mathcal{H}}(x,1)=\sum_{h\in H}x^{i_{<}(h)}\quad\text{ and }\quad\mathcal{T}_{\mathcal{H}}(1,y)=\sum_{h\in H}y^{e_{<}(h)},\] that we call the _interior_ and _exterior_ polynomial, respectively. We note that in [5], \(x^{|E|}\mathcal{T}_{\mathcal{H}}(\frac{1}{x},1)\) is called the interior polynomial, and \(y^{|E|}\mathcal{T}_{\mathcal{H}}(1,\frac{1}{y})\) is called the exterior polynomial. #### 2.2.2. Embedding activities for hypergraphs To define embedding activities for hypergraphs, we assume that \(\operatorname{Bip}\mathcal{H}\) has a ribbon graph structure, and a basis \((b_{0},b_{0}b_{1})\) is fixed (where \(b_{0}\) is a node of \(\operatorname{Bip}\mathcal{H}\) (either from \(V\) or from \(E\)) and \(b_{0}b_{1}\) is an edge of \(\operatorname{Bip}\mathcal{H}\) incident to \(b_{0}\)). To generalize Bernardi's embedding activities, one needs to generalize the tour of a spanning tree to hypertrees. To do this, we will make use of a nice representation of hypertrees, recalled right after the following sketch.
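Since the tour of Definition 2.3 drives all constructions below, here is a minimal Python sketch of it (our own illustration, not code from [1] or [6]; it assumes a simple graph, with edges encoded as frozensets and the ribbon structure as a cyclic neighbour list per node):

```python
def tour(ribbon, tree, b0, b1):
    """The tour of a spanning tree (Definition 2.3). `ribbon` maps each
    node to the cyclic list of its neighbours; `tree` is a set of
    frozenset({x, y}) edges; b1 is the neighbour of b0 along the basis
    edge. Terminates for genuine spanning trees."""
    def next_nb(x, y):                 # neighbour following y around x
        nbs = ribbon[x]
        return nbs[(nbs.index(y) + 1) % len(nbs)]
    steps, (x, y) = [], (b0, b1)
    while True:
        steps.append((x, frozenset((x, y))))
        if frozenset((x, y)) in tree:  # traverse the tree edge
            x, y = y, next_nb(y, x)
        else:                          # skip the non-tree edge
            y = next_nb(x, y)
        if (x, y) == (b0, b1):
            break
    return steps

# Toy usage: triangle with the planar ribbon structure, tree {ab, bc}.
ribbon = {"a": ["b", "c"], "b": ["c", "a"], "c": ["a", "b"]}
tree = {frozenset("ab"), frozenset("bc")}
print(len(tour(ribbon, tree, "a", "b")))   # 6: each edge is seen twice
```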
Recall that hypertrees are exactly those vectors \(h\in\mathbb{Z}^{E}\) such that \(h+\mathbf{1}_{E}\) is the degree sequence on \(E\) of some spanning tree of \(\operatorname{Bip}\mathcal{H}\). It is very useful to have a system of representing spanning trees at hand. Fortunately, ribbon structures provide us with such trees. **Definition 2.8** (Jaeger tree [6]).: In a bipartite graph with a ribbon structure and basis, we call a spanning tree \(T\) a _Jaeger tree_ if for each edge \(ev\notin T\) with \(e\in E\) and \(v\in V\), the tour of \(T\) has \((e,ev)\) as a current node-edge pair before \((v,ev)\). In other words, in the tour of a Jaeger tree, each non-edge is first seen at its emerald endpoint. **Theorem 2.9**.: _[_6_]_ _Let \(\mathcal{H}\) be a connected hypergraph, and fix an arbitrary ribbon structure and basis. Then for each hypertree \(h\), there is exactly one Jaeger tree representing \(h\)._ **Remark 2.10**.: Of course one could also define Jaeger trees with the rule that each non-tree edge is first seen at its violet endpoint. In [6] both types of trees are investigated, and are called emerald, and violet Jaeger trees, respectively. In this paper, we only consider emerald Jaeger trees, and hence we drop the word "emerald". We note that Jaeger trees also have a nice geometric interpretation as a dissection of the root polytope of \(\operatorname{Bip}\mathcal{H}\) (see [6, Section 6]). While this interpretation is often very useful, it will not play a role in this paper. Given a ribbon structure and basis, we now have a well-defined set of representing spanning trees for the hypertrees, so we can associate to hypertrees the tours of their Jaeger trees. Then, we can use the tour to define an ordering of the hyperedges, as we now explain. **Definition 2.11** (ordering corresponding to a hypertree, \(<_{h}\)).: Let \(\mathcal{H}=(V,E)\) be a hypergraph and fix a ribbon structure on \(\operatorname{Bip}\mathcal{H}\) and a basis \((b_{0},b_{0}b_{1})\). Let \(h\) be a hypertree. Then \(<_{h}\) is a complete ordering of \(E\) defined the following way. Let \(T\) be the unique Jaeger tree realizing \(h\). For \(e,f\in E\), we define \(e<_{h}f\), if in the tour of \(T\), the first time that \(e\) is reached is before the first time that \(f\) is reached. We denote \(e\leq_{h}f\) if either \(e=f\) or \(e<_{h}f\). **Definition 2.12** (Embedding activities).: Let \(\mathcal{H}=(V,E)\) be a hypergraph and fix a ribbon structure on \(\operatorname{Bip}\mathcal{H}\) and a basis \((b_{0},b_{0}b_{1})\). A hyperedge \(e\in E\) is _internally embedding active_ for the hypertree \(h\) (with respect to the ribbon structure and basis), if \(h-\mathbf{1}_{e}+\mathbf{1}_{f}\) is not a hypertree for any \(f<_{h}e\). A hyperedge \(e\in E\) is _externally embedding active_ for the hypertree \(h\) (with respect to the ribbon structure and basis), if \(h+\mathbf{1}_{e}-\mathbf{1}_{f}\) is not a hypertree for any \(f<_{h}e\). We denote by \(Int_{em}(h)\) the set of internally embedding active elements of \(h\), and by \(Ext_{em}(h)\) the set of externally embedding active elements of \(h\). We denote \(oi_{em}(h)=|Int_{em}(h)-Ext_{em}(h)|\), \(oe_{em}(h)=|Ext_{em}(h)-Int_{em}(h)|\), \(ie_{em}(h)=|Int_{em}(h)\cap Ext_{em}(h)|\). **Example 2.13**.: Take the graph \(\operatorname{Bip}\mathcal{H}\) of Figure 2 with the indicated ribbon structure and basis. (For the node names, see the last panel.) The \(6^{th}\) panel shows the Jaeger tree of the hypertree \(h\) with \(h(e_{0})=h(e_{1})=1\), \(h(e_{2})=h(e_{3})=0\).
We have \(e_{0}<_{h}e_{2}<_{h}e_{1}<_{h}e_{3}\). For the hypertree \(h\), hyperedge \(e_{0}\) is both internally and externally active (since it is the smallest element with respect to \(<_{h}\)). Hyperedge \(e_{2}\) is internally active: as \(h(e_{2})=0\) and hypertrees are nonnegative, \(h-\mathbf{1}_{e_{2}}+\mathbf{1}_{f}\) is not a hypertree for any \(f\). However, it is externally passive since \(h+\mathbf{1}_{e_{2}}-\mathbf{1}_{e_{0}}\) is a hypertree. Hyperedge \(e_{1}\) is both internally and externally passive, since \(h+\mathbf{1}_{e_{1}}-\mathbf{1}_{e_{0}}\) and \(h-\mathbf{1}_{e_{1}}+\mathbf{1}_{e_{2}}\) are both hypertrees. Finally, \(e_{3}\) is internally active since \(h(e_{3})=0\), but externally passive since \(h+\mathbf{1}_{e_{3}}-\mathbf{1}_{e_{0}}\) is a hypertree.

## 3. Well-definedness of the two-variable embedding Tutte polynomial

Let us define the two-variable hypergraph Tutte polynomial via embedding activities, analogously to the formula using activities with respect to an ordering of the hyperedges.

Figure 2. A bipartite graph corresponding to a hypergraph with 3 vertices (violet nodes) and 4 hyperedges (emerald nodes). Take the ribbon structure induced by the positive orientation of the plane, and basis \((v_{0},v_{0}e_{0})\) (see the last panel for the notations). The first 7 panels show the 7 hypertrees of the hypergraph, together with their representing Jaeger trees, and the corresponding terms of \(\mathcal{T}_{\mathcal{H}}^{em}\).
The proof of the embedding Crapo decomposition is the main result of this paper, and it is deferred to Section 4. Let us introduce the necessary notions. **Definition 3.5** (Embedding Crapo interval).: The Crapo interval of a hypertree \(h\) with respect to a ribbon structure and basis is \[C_{em}(h)=\{y\in\mathbb{Z}^{E}:(y(e)>h(e)\Rightarrow e\in Ext_{em}(h)),(y(e)< h(e)\Rightarrow e\in Int_{em}(h))\}\] For \(c\in\mathbb{Z}^{E}\) and \(S\subseteq\mathbb{Z}^{E}\) let us denote \(d_{1}(S,c)=\min\{\sum_{e\in E}|s(e)-c(e)|\mid s\in S\}\). This is commonly called the Manhattan distance. The following is the main technical theorem of the paper. **Theorem 3.6**.: 1. _The Crapo intervals_ \(\{C_{em}(h)\mid h\in H(\mathcal{H})\}\) _partition_ \(\mathbb{Z}^{E}\)_._ 2. _For each_ \(c\in\mathbb{Z}^{E}\)_, if_ \(c\in C_{em}(h)\)_, then_ \(d_{1}(H,c)=d_{1}(h,c)\)_._ We give the proof in Section 4. We also need the generalizations of corank and nullity from [3]: **Definition 3.7**.: [3] Let \(S\subseteq\mathbb{Z}^{E}\) and \(c\in\mathbb{Z}^{E}\). We denote \(d_{1}^{<}(S,c)=\min\{\sum_{e\in E}\max\{0,c(e)-s(e)\}\mid s\in S\}\), \(d_{1}^{>}(S,c)=\min\{\sum_{e\in E}\max\{0,s(e)-c(e)\}\mid s\in S\}\). Here \(d_{1}^{<}(H,c)\) is a generalization of corank. Indeed, if we take a graph, that is, the set of hypertrees \(H\) is the set of characteristic vectors of spanning trees, then for a \(c\in\{0,1\}^{E}\), \(d_{1}(H,c)\) is exactly the corank of \(c\). Similarly, \(d_{1}^{>}(H,c)\) is the generalization of nullity. Using these notions, Bernardi, Kalman and Postnikov define the following corank-nullity polynomial: \[\tilde{\mathcal{T}}_{\mathcal{H}}(u,v)=\sum_{c\in\mathbb{Z}^{E}}u^{d_{1}^{ >}(H(\mathcal{H}),c)}v^{d_{1}^{<}(H(\mathcal{H}),c)}\] It is clear that \(\tilde{\mathcal{T}}_{\mathcal{H}}\) is well-defined. Bernardi, Kalman and Postnikov [3] shows the well-definedness of \(\mathcal{T}_{\mathcal{H}}\) by showing that \[\tilde{\mathcal{T}}_{\mathcal{H}}(u,v)=\mathcal{T}_{\mathcal{H}}\left(\frac{ 1}{1-u},\frac{1}{1-v}\right)\] which is understood as an identity of formal power series, where \(\frac{1}{1-u}=\sum_{i=0}^{\infty}u^{i}\). We will show the same for \(\mathcal{T}_{\mathcal{H}}^{em}\). A key Lemma for [3] (and also for us) is the following. **Lemma 3.8**.: _[_3_, Lemma 10.3]_ _For any \(c\in\mathbb{Z}^{E}\) there is at least one \(h\in H\) such that_ \[d_{1}^{<}(H,c)=d_{1}^{<}(h,c)\text{ and }d_{1}^{>}(H,c)=d_{1}^{>}(h,c).\] _Hence \(d_{1}(H,c)=d_{1}(h,c)=d_{1}^{<}(h,c)+d_{1}^{>}(h,c)\), and for any \(h^{\prime}\) with \(d_{1}(H,c)=d_{1}(h^{\prime},c)\), we have \(d_{1}^{<}(H,c)=d_{1}^{<}(h^{\prime},c)\) and \(d_{1}^{>}(H,c)=d_{1}^{>}(h^{\prime},c)\)._ Proof of Theorem 3.3.: We show that \(\tilde{\mathcal{T}}_{\mathcal{H}}(u,v)=\mathcal{T}_{\mathcal{H}}^{em}\left( \frac{1}{1-u},\frac{1}{1-v}\right)\) as formal power series, where \(\frac{1}{1-u}=\sum_{i=0}^{\infty}u^{i}\). As by [3], \(\tilde{\mathcal{T}}_{\mathcal{H}}(u,v)=\mathcal{T}_{\mathcal{H}}\left(\frac{ 1}{1-u},\frac{1}{1-v}\right)\), this means that \(\mathcal{T}_{\mathcal{H}}^{em}\) does not depend on the chosen ribbon structure and basis, and it agrees with \(\mathcal{T}_{\mathcal{H}}\). 
By the definition of \(\tilde{\mathcal{T}}_{\mathcal{H}}\) and Theorem 3.6 (1), we have \[\tilde{\mathcal{T}}_{\mathcal{H}}(u,v)=\sum_{c\in\mathbb{Z}^{E}}u^{d_{1}^{>}(H,c)}v^{d_{1}^{<}(H,c)}=\sum_{h\in H(\mathcal{H})}\sum_{c\in C_{em}(h)}u^{d_{1}^{>}(H,c)}v^{d_{1}^{<}(H,c)}.\] By Theorem 3.6 (2) and Lemma 3.8 this further equals \[\sum_{h\in H(\mathcal{H})}\sum_{c\in C_{em}(h)}u^{d_{1}^{>}(h,c)}v^{d_{1}^{<}(h,c)}.\] Now, by the definition of \(C_{em}(h)\), we have \[\sum_{h\in H(\mathcal{H})}\sum_{c\in C_{em}(h)}u^{d_{1}^{>}(h,c)}v^{d_{1}^{<}(h,c)}=\sum_{h\in H(\mathcal{H})}\left(\frac{1}{1-u}\right)^{oi_{em}(h)}\left(\frac{1}{1-v}\right)^{oe_{em}(h)}\left(\frac{1}{1-u}+\frac{1}{1-v}-1\right)^{ie_{em}(h)}=\mathcal{T}_{\mathcal{H}}^{em}\left(\frac{1}{1-u},\frac{1}{1-v}\right),\] where the equalities are understood as equalities of formal power series, with \(\frac{1}{1-u}=\sum_{i=0}^{\infty}u^{i}\). ## 4. Proving the existence of the embedding Crapo decomposition This section is dedicated to proving Theorem 3.6, the existence of a Crapo decomposition for embedding activities of hypergraphs. ### Lemmas on Jaeger trees and embedding orders We collect some properties of Jaeger trees that we will need. Throughout this section, fix a ribbon structure and basis for \(\operatorname{Bip}\mathcal{H}\). We first recall an alternative characterization of Jaeger trees from [7] that uses an ordering of all spanning trees of \(\operatorname{Bip}\mathcal{H}\). For any two trees \(T\) and \(T^{\prime}\), their tours (Definition 2.3) have some initial segment that coincides (this segment might be empty), and in this initial segment the next node-edge pair is chosen the same way for the two trees. Hence, the first difference between the tours of \(T\) and \(T^{\prime}\) needs to be that some node-edge pair \((x,xy)\) is treated differently for the two trees: \(xy\in T-T^{\prime}\) or \(xy\in T^{\prime}-T\). This enables us to define an ordering \(\prec\) of spanning trees of \(\operatorname{Bip}\mathcal{H}\). **Definition 4.1** (\(\prec\)).: For two spanning trees \(T\) and \(T^{\prime}\) of \(\operatorname{Bip}\mathcal{H}\), we set \(T\prec T^{\prime}\) if for the first difference \((x,xy)\) of their respective tours, either \(x\) is an emerald node and \(xy\in T^{\prime}-T\) or \(x\) is a violet node and \(xy\in T-T^{\prime}\). The following is the key property of Jaeger trees. **Theorem 4.2**.: _[_7_, Theorems 7.1 and 7.6]_ _For each hypertree \(h\), there is a unique realizing Jaeger tree \(T\). Moreover, \(T\) is the minimal tree according to \(\prec\) among all trees representing \(h\)._ We note that the minimality in the above theorem implies that for a given hypertree, the unique realizing Jaeger tree can be computed greedily. This is explained in more detail in [7, Section 7]. **Example 4.3**.: Take Figure 3. Let the ribbon structure be the one induced by the positive orientation of the plane, and take the basis \((b_{0},b_{0}b_{1})\) according to the figure. \(T\) (on the left panel) and \(T^{\prime}\) (on the right panel) are two spanning trees realizing the same hypertree. \(T\) is Jaeger, while \(T^{\prime}\) is not, since \(b_{0}e\notin T^{\prime}\) is such that \((b_{0},b_{0}e)\) precedes \((e,b_{0}e)\) in the tour of \(T^{\prime}\). The first difference between the tours of \(T\) and \(T^{\prime}\) is at \((b_{0},b_{0}e)\), where \(b_{0}\) is violet and \(b_{0}e\in T-T^{\prime}\). Hence indeed \(T\prec T^{\prime}\), cf. Theorem 4.2.
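Assuming the `tour` sketch given in Section 2, the comparison of Definition 4.1 can be carried out by scanning for the first difference of two tours; again, this is our own illustrative code with invented conventions (for instance, the set `emerald` of hyperedge nodes):

```python
def precedes(ribbon, T1, T2, b0, b1, emerald):
    """Decide whether T1 precedes T2 in the order of Definition 4.1.
    T1, T2: sets of frozenset edges; returns None if the trees are equal.
    The two tours stay index-aligned up to their first difference."""
    s1, s2 = tour(ribbon, T1, b0, b1), tour(ribbon, T2, b0, b1)
    for (x, edge), _ in zip(s1, s2):
        in1, in2 = edge in T1, edge in T2
        if in1 != in2:                 # first difference found
            if x in emerald:
                return in2             # emerald node: the smaller tree skips
            return in1                 # violet node: the smaller tree traverses
    return None

# Usage on the 4-cycle Bip(H) from the earlier sketch, basis (v1, v1e1).
ribbon = {"e1": ["v1", "v2"], "e2": ["v1", "v2"],
          "v1": ["e1", "e2"], "v2": ["e1", "e2"]}
T1 = {frozenset(p) for p in [("e1", "v1"), ("e1", "v2"), ("e2", "v1")]}
T2 = {frozenset(p) for p in [("e1", "v1"), ("e1", "v2"), ("e2", "v2")]}
print(precedes(ribbon, T2, T1, "v1", "e1", emerald={"e1", "e2"}))  # True
```

Both trees here represent the hypertree \(h=(1,0)\), and the output says that `T2` is the \(\prec\)-smaller of the two; one can check directly that `T2` is the Jaeger tree of \(h\), consistent with Theorem 4.2.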
For a spanning tree \(T\) of \(\operatorname{Bip}\mathcal{H}\) and edge \(\varepsilon\in T\), we call the component of \(T-\varepsilon\) containing \(b_{0}\) the _base component_. The following property of fundamental cuts of Jaeger trees (an updated version of [6, Lemma 6.13]) makes them very useful to us.

Figure 3. Illustration for Example 4.3.

**Lemma 4.4**.: _[_6_, Lemma 6.13]_ _Let \(T\) be a Jaeger tree of \(\operatorname{Bip}\mathcal{H}\), and \(ev\in T\). If \(e^{\prime}v^{\prime}\in C^{*}(T,ev)-T\) is an edge such that its emerald endpoint \(e^{\prime}\) is in the base component, then \((e^{\prime},e^{\prime}v^{\prime})\) comes earlier in the tour of \(T\) than \((e,ev)\)._ **Lemma 4.5**.: _Let \(T\) be a Jaeger tree of \(\operatorname{Bip}\mathcal{H}\), realizing hypertree \(h\), and let \(ev\in T\) such that \(e\) is in the base component of \(T-ev\). Let \(E_{1}\) be the set of emerald nodes not in the base component of \(T-ev\). Then for any \(f\in E_{1}\), we have \(f>_{h}e\)._ Proof.: The tour of \(T\) only reaches the nodes of \(E_{1}\) after traversing \(ev\). ### Lemmas on exchanges The following lemma is a special case of a well-known lemma in matroid theory, see for example [8, Theorem 39.13]. We sketch its proof for completeness. **Lemma 4.6**.: _[_8_, Theorem 39.13]_ _Let \(\eta_{1},\ldots,\eta_{t},\varepsilon_{1},\ldots,\varepsilon_{t}\) be edges in the graph \(G\), and let \(T\) be a spanning tree such that \(\eta_{1},\ldots,\eta_{t}\in T\), \(\varepsilon_{1},\ldots,\varepsilon_{t}\notin T\), \(\eta_{i}\in C(T,\varepsilon_{i})\) for all \(i\), and \(\eta_{i}\notin C(T,\varepsilon_{j})\) for \(j<i\). Then \(T-\{\eta_{1},\ldots,\eta_{t}\}+\{\varepsilon_{1},\ldots,\varepsilon_{t}\}\) is also a spanning tree._ Proof.: Let \(T_{t}=T-\eta_{t}+\varepsilon_{t}\). Then \(T_{t}\) is a spanning tree since \(\eta_{t}\in C(T,\varepsilon_{t})\). Moreover, since \(\eta_{t}\notin C(T,\varepsilon_{j})\) for \(j<t\), we have \(C(T_{t},\varepsilon_{j})=C(T,\varepsilon_{j})\) for all \(j<t\). This implies that \(T_{t-1}=T_{t}-\eta_{t-1}+\varepsilon_{t-1}\) is also a spanning tree, moreover, since \(\eta_{t},\eta_{t-1}\notin C(T,\varepsilon_{j})\) for \(j<t-1\), \(C(T_{t-1},\varepsilon_{j})=C(T,\varepsilon_{j})\) for all \(j<t-1\). We may continue switching edges like this until we arrive at \(T-\{\eta_{1},\ldots,\eta_{t}\}+\{\varepsilon_{1},\ldots,\varepsilon_{t}\}\). Let us state two variations of Lemma 4.6, since these will be the formulations convenient to us. **Lemma 4.7**.: _Let \(e_{0}v_{0},\ldots,e_{t-1}v_{t-1}\) and \(e_{1}u_{1},\ldots,e_{t}u_{t}\) be edges in the graph \(G\), and let \(T\) be a spanning tree such that \(e_{0}v_{0},\ldots,e_{t-1}v_{t-1}\in T\), \(e_{1}u_{1},\ldots,e_{t}u_{t}\notin T\) and \(e_{i}v_{i}\in C(T,e_{i+1}u_{i+1})\) for \(i=0,\ldots t-1\). Then there exist indices \(i_{0}=0<i_{1}<\cdots<i_{s}=t\) such that_ \[T-\{e_{i_{0}}v_{i_{0}},\ldots,e_{i_{s-1}}v_{i_{s-1}}\}+\{e_{i_{1}}u_{i_{1}},\ldots,e_{i_{s}}u_{i_{s}}\}\] _is a spanning tree._ Proof.: Take the following auxiliary digraph \(D\): Let the vertex set be \(\{0,\ldots,t\}\). Draw an edge from \(i\) to \(j\) if \(e_{i}v_{i}\in C(T,e_{j}u_{j})\). Then, by our assumptions, an edge points from \(i\) to \(i+1\) for each \(0\leq i\leq t-1\), and there might also be other edges in the digraph. In any case, there is a directed path in \(D\) from \(0\) to \(t\). Choose a shortest directed path from \(0\) to \(t\) in \(D\), and let the vertices of this path be \(i_{0}=0,i_{1},\ldots,i_{s}=t\).
Then \(e_{i_{j}}v_{i_{j}}\in C(T,e_{i_{j+1}}u_{i_{j+1}})\) for each \(0\leq j<s\) by construction, and \(e_{i_{j}}v_{i_{j}}\notin C(T,e_{i_{j+1+r}}u_{i_{j+1+r}})\) for \(r>0\), because otherwise we could pick a shorter path from \(0\) to \(t\) in \(D\). Now we can apply Lemma 4.6 for \(T\) with \(\eta_{j}=e_{i_{s-j}}v_{i_{s-j}}\) and \(\varepsilon_{j}=e_{i_{s-j+1}}u_{i_{s-j+1}}\). We get that \(T-\{e_{i_{0}}v_{i_{0}},\ldots,e_{i_{s-1}}v_{i_{s-1}}\}+\{e_{i_{1}}u_{i_{1}},\ldots,e_{i_{s}}u_{i_{s}}\}\) is indeed a spanning tree.

**Lemma 4.8**.: _Let \(e_{0}v_{0},\ldots,e_{t-1}v_{t-1}\) and \(e_{1}u_{1},\ldots,e_{t}u_{t}\) be edges in the graph \(G\), and let \(T\) be a spanning tree such that \(e_{0}v_{0},\ldots,e_{t-1}v_{t-1}\notin T\), \(e_{1}u_{1},\ldots,e_{t}u_{t}\in T\) and \(e_{i+1}u_{i+1}\in C(T,e_{i}v_{i})\) for \(i=0,\ldots,t-1\). Then there exist indices \(i_{0}=0<i_{1}<\cdots<i_{s}=t\) such that_ \[T+\{e_{i_{0}}v_{i_{0}},\ldots,e_{i_{s-1}}v_{i_{s-1}}\}-\{e_{i_{1}}u_{i_{1}},\ldots,e_{i_{s}}u_{i_{s}}\}\] _is a spanning tree._

Proof.: Let us again define an auxiliary digraph \(D\) on the vertex set \(\{0,\ldots,t\}\). Draw an edge from \(i\) to \(j\) if \(e_{j}u_{j}\in C(T,e_{i}v_{i})\). Then, by our assumptions, an edge points from \(i\) to \(i+1\) for each \(0\leq i\leq t-1\), and there might also be other edges in the digraph. In any case, there is a directed path in \(D\) from \(0\) to \(t\). Choose a shortest directed path from \(0\) to \(t\) in \(D\), and let the vertices of this path be \(i_{0}=0,i_{1},\ldots,i_{s}=t\). Then \(e_{i_{j+1}}u_{i_{j+1}}\in C(T,e_{i_{j}}v_{i_{j}})\) for each \(0\leq j<s\), and \(e_{i_{j+1+r}}u_{i_{j+1+r}}\notin C(T,e_{i_{j}}v_{i_{j}})\) for \(r>0\), because otherwise we could pick a shorter path from \(0\) to \(t\) in \(D\). Now we can apply Lemma 4.6 for \(T\) with \(\eta_{j}=e_{i_{j}}u_{i_{j}}\) and \(\varepsilon_{j}=e_{i_{j-1}}v_{i_{j-1}}\). We get that \(T+\{e_{i_{0}}v_{i_{0}},\ldots,e_{i_{s-1}}v_{i_{s-1}}\}-\{e_{i_{1}}u_{i_{1}},\ldots,e_{i_{s}}u_{i_{s}}\}\) is indeed a spanning tree.

### Three technical lemmas on first differences of Jaeger trees

In this section, we collect the three main technical lemmas needed to prove Theorem 3.6.

**Lemma 4.9**.: _Let \(h\) and \(h^{\prime}\) be hypertrees with respective Jaeger trees \(T\) and \(T^{\prime}\). Suppose that the first difference between the tours of \(T\) and \(T^{\prime}\) is at \((e,\varepsilon)\), where \(\varepsilon\in T^{\prime}-T\)._

_Then there exists a hyperedge \(f\) such that \(h(f)>h^{\prime}(f)\); moreover, \(h-\mathbf{1}_{f}+\mathbf{1}_{e}\) and \(h^{\prime}+\mathbf{1}_{f}-\mathbf{1}_{e}\) are both hypertrees, and \(e<_{h}f\) and \(e<_{h^{\prime}}f\)._

Proof.: Recall that \(<_{h}\) is defined as the order in which the emerald tour of \(T\) reaches the nodes in \(E\). Take the fundamental cut \(C^{*}(T^{\prime},\varepsilon)\). We call the shore containing \(b_{0}\) the _base component_ of \(T^{\prime}-\varepsilon\); moreover, we denote its set of violet nodes by \(V_{0}\) and its set of emerald nodes by \(E_{0}\). Also, we denote \(V_{1}=V-V_{0}\) and \(E_{1}=E-E_{0}\). Apply Lemma 4.4 to the Jaeger tree \(T^{\prime}\). This implies that each edge of \(C^{*}(T^{\prime},\varepsilon)-\varepsilon\) that has its emerald endpoint in the base component became current in the emerald tour of \(T^{\prime}\) before \((e,\varepsilon)\).
Since the emerald tours of \(T\) and \(T^{\prime}\) coincide until \((e,\varepsilon)\), these edges also became current in the tour of \(T\); and as they are not in \(T^{\prime}\) and the two tours treat them identically, they are also not in \(T\). As also \(\varepsilon\notin T\), we conclude: \[T\text{ has no edge from }C^{*}(T^{\prime},\varepsilon)\text{ that has an endpoint in }E_{0}. \tag{4.1}\] We claim that for any \(g\in E_{1}\), we have \(e<_{h}g\) and \(e<_{h^{\prime}}g\). Indeed, by Lemma 4.5, we have \(e<_{h^{\prime}}g\). As the tours of \(T\) and \(T^{\prime}\) agree until reaching \(e\), this implies that \(e\) is also reached before \(g\) in \(T\), hence also \(e<_{h}g\). We will look for our hyperedge \(f\) within \(E_{1}\). Take the fundamental cycle \(C(T,\varepsilon)\). A cycle and a cut always meet in an even number of edges, and \(\varepsilon\in C^{*}(T^{\prime},\varepsilon)\cap C(T,\varepsilon)\), hence the cycle \(C(T,\varepsilon)\) contains at least one edge \(\eta=e_{1}u_{1}\) of \(T\cap C^{*}(T^{\prime},\varepsilon)\). For this \(\eta\), we also have \(\varepsilon\in C(T^{\prime},\eta)\), as \(\eta\) has one endpoint in \(E_{0}\cup V_{0}\) and one endpoint in \(E_{1}\cup V_{1}\), and the only edge of \(T^{\prime}\) connecting these two sets is \(\varepsilon\). By (4.1), \(e_{1}\) cannot be in \(E_{0}\); thus, \(e_{1}\in E_{1}\). Since \(\eta\in C(T,\varepsilon)\), the graph \(T-\eta+\varepsilon\) is a spanning tree. Moreover, it realizes \(h-\mathbf{1}_{e_{1}}+\mathbf{1}_{e}\), where \(e<_{h}e_{1}\). Also, \(\varepsilon\in C(T^{\prime},\eta)\), hence \(T^{\prime}-\varepsilon+\eta\) is a spanning tree, and it realizes \(h^{\prime}+\mathbf{1}_{e_{1}}-\mathbf{1}_{e}\), where \(e<_{h^{\prime}}e_{1}\). Thus, if \(h(e_{1})>h^{\prime}(e_{1})\), then \(e_{1}\) is a suitable choice for \(f\). If \(h(e_{1})\leq h^{\prime}(e_{1})\), then we have to continue looking for an appropriate hyperedge \(f\). We describe the general step of this process. In a general step, we will have a spanning tree satisfying the following properties: \[\tilde{T}\subseteq T\cup T^{\prime}\text{ is a spanning tree representing }h,\] \[\varepsilon\notin\tilde{T}. \tag{4.2}\] At the beginning, we have \(\tilde{T}=T\), which satisfies all requirements. In a general step, \(\tilde{T}\) is typically not a Jaeger tree (as \(T\) is the only Jaeger tree representing \(h\)). However, since \(\tilde{T}\subseteq T\cup T^{\prime}\) and \(\varepsilon\notin\tilde{T}\), we still have the following property: \[\tilde{T}\text{ has no edge from }C^{*}(T^{\prime},\varepsilon)\text{ that has an endpoint in }E_{0}. \tag{4.3}\] Let \(\varepsilon_{0}=\varepsilon\), and suppose that for some \(t\geq 1\), we have a sequence of edges \(\varepsilon_{0},\eta_{1},\varepsilon_{1},\ldots,\eta_{t-1},\varepsilon_{t-1},\eta_{t}\) with the following properties: \[\forall i\in[0,t-1]: \varepsilon_{i}=e_{i}v_{i}\in T^{\prime}-\tilde{T},\] \[\forall i\in[1,t]: \eta_{i}=e_{i}u_{i}\in\tilde{T}-T^{\prime},\] \[\forall i\in[0,t-1]: \eta_{i+1}\in C^{*}(T^{\prime},\varepsilon_{i})\cap C(\tilde{T},\varepsilon_{i})-\varepsilon_{i},\] \[\forall i\in[0,t-1]: \varepsilon_{i}\in C(T^{\prime},\eta_{i+1}),\] \[\forall i\in[1,t]: e_{i}\in E_{1},\] \[\text{the edges }\varepsilon_{0},\eta_{1},\varepsilon_{1},\ldots,\varepsilon_{t-1},\eta_{t}\text{ are all distinct},\] \[h^{\prime}+\mathbf{1}_{e_{t}}-\mathbf{1}_{e}\text{ is a hypertree,}\] \[h-\mathbf{1}_{e_{t}}+\mathbf{1}_{e}\text{ is a hypertree.} \tag{4.4}\] For \(\tilde{T}=T\), we have just proved that such a set of edges can be chosen for \(t=1\).
Also, note that in that argument we did not use the Jaeger property of \(T\), only (4.1). The analogue of that, (4.3), is true for an arbitrary \(\tilde{T}\). Hence for any \(\tilde{T}\) satisfying (4.2), we can choose \(\eta_{1}\) such that (4.4) holds (with \(t=1\)). Suppose that we have \(\varepsilon_{0},\eta_{1},\varepsilon_{1},\ldots,\eta_{t-1},\varepsilon_{t-1},\eta_{t}\) for some \(t\geq 1\), satisfying (4.4). If \(h(e_{t})>h^{\prime}(e_{t})\), then \(f=e_{t}\) satisfies all properties required by the lemma, and the proof is complete. If \(h(e_{t})\leq h^{\prime}(e_{t})\), then we show that either we can add a new pair of edges \(\varepsilon_{t},\eta_{t+1}\) and still satisfy (4.4), or we can find a new tree \(\tilde{T}\) satisfying (4.2) that has a smaller symmetric difference to \(T^{\prime}\) than the current \(\tilde{T}\) (and start anew with this new \(\tilde{T}\)). If we prove this, then, as \(|T\Delta T^{\prime}|\) is finite and \(\{\varepsilon_{0},\ldots,\varepsilon_{t-1}\}\) is a subset of the finite set \(T^{\prime}-\tilde{T}\subseteq T^{\prime}-T\), the process cannot go on indefinitely. Hence there must be a moment when we conclude that \(h(e_{t})>h^{\prime}(e_{t})\), and this gives us an \(f\) satisfying the requirements of the lemma, thereby finishing the proof. Hence let us suppose that \(h(e_{t})\leq h^{\prime}(e_{t})\). Note that \(e_{t}\neq e\), since \(e\in E_{0}\) but \(e_{t}\in E_{1}\) by our assumption. There might be more than one index \(i\) such that \(e_{i}=e_{t}\), but in any case, since \(e_{t}\neq e\), the number of edges \(\eta_{i}\) incident to \(e_{t}\) (from \(\tilde{T}-T^{\prime}\)) is one larger than the number of edges \(\varepsilon_{i}\) (from \(T^{\prime}-\tilde{T}\)). By (4.4), these edges are all different. Since \(h(e_{t})=d_{\tilde{T}}(e_{t})\leq h^{\prime}(e_{t})=d_{T^{\prime}}(e_{t})\), the degree of \(e_{t}\) in \(\tilde{T}-T^{\prime}\) is at most the degree of \(e_{t}\) in \(T^{\prime}-\tilde{T}\). Thus, there is at least one so far unchosen edge \(\varepsilon_{t}=e_{t}v_{t}\) incident to \(e_{t}\) that is in \(T^{\prime}-\tilde{T}\). Take \(C^{*}(T^{\prime},\varepsilon_{t})\cap C(\tilde{T},\varepsilon_{t})\). As \(\varepsilon_{t}\) is in this intersection, there needs to be at least one more edge \(\eta_{t+1}\in C^{*}(T^{\prime},\varepsilon_{t})\cap C(\tilde{T},\varepsilon_{t})\), which is necessarily in \(\tilde{T}-T^{\prime}\). Let \(\eta_{t+1}=e_{t+1}u_{t+1}\). This \(\eta_{t+1}\) might or might not agree with some previously chosen \(\eta_{i}\), but before addressing this issue, let us note two things. Note that \(\varepsilon_{t}\in C(T^{\prime},\eta_{t+1})\), as \(\varepsilon_{t}\) is the only edge of \(T^{\prime}\) connecting the two shores of \(C^{*}(T^{\prime},\varepsilon_{t})\). Also, let us show that \(e_{t+1}\in E_{1}\). Recall that \(e_{t}\in E_{1}\). As \((E_{0}\cup V_{0},E_{1}\cup V_{1})\) is a fundamental cut of \(T^{\prime}\), one of the components of \(T^{\prime}-\varepsilon_{t}\) only contains vertices from \(E_{1}\cup V_{1}\). Hence \(e_{t+1}\in E_{0}\) would imply that \(\eta_{t+1}\) has endpoints in \(E_{0}\) and \(V_{1}\), and thus \(\eta_{t+1}\in\tilde{T}\cap C^{*}(T^{\prime},\varepsilon)\). However, by (4.3) such an edge cannot have an endpoint in \(E_{0}\), hence we conclude that \(e_{t+1}\in E_{1}\). From here, we distinguish two cases: Case 1: \(\eta_{t+1}\) does not agree with any of the edges \(\eta_{1},\ldots,\eta_{t}\).
In this case we show that \(\varepsilon_{0},\eta_{1},\varepsilon_{1},\ldots,\eta_{t},\varepsilon_{t},\eta_{t+1}\) satisfies (4.4). To prove this, it remains to show that \(h^{\prime}+\mathbf{1}_{e_{t+1}}-\mathbf{1}_{e}\) is a hypertree, and \(h-\mathbf{1}_{e_{t+1}}+\mathbf{1}_{e}\) is a hypertree. We know that \(\varepsilon_{i}\in C(T^{\prime},\eta_{i+1})\) for each \(0\leq i\leq t\). Apply Lemma 4.7 for \(T^{\prime}\) and \(\varepsilon_{0},\eta_{1},\ldots,\varepsilon_{t},\eta_{t+1}\). By the Lemma, \(h^{\prime}+\mathbf{1}_{e_{t+1}}-\mathbf{1}_{e}\) is indeed a hypertree. For each \(0\leq i\leq t\) we also have \(\eta_{i+1}\in C(\tilde{T},\varepsilon_{i})\). Hence we can apply Lemma 4.8 for \(\tilde{T}\) and the edges \(\varepsilon_{0},\eta_{1},\ldots,\varepsilon_{t},\eta_{t+1}\), and obtain that \(h-\mathbf{1}_{e_{t+1}}+\mathbf{1}_{e}\) is a hypertree. Case 2: \(\eta_{t+1}=\eta_{i}\) for some \(1\leq i\leq t\). In this case we show that we can choose a new \(\tilde{T}\) satisfying (4.2) that has a strictly smaller symmetric difference to \(T^{\prime}\) than the current one. Note that for each \(0\leq i\leq t\), \(\eta_{i+1}\in C(\tilde{T},\varepsilon_{i})\). Hence we can apply Lemma 4.8 for \(\tilde{T}\) and the edges \(\varepsilon_{i},\eta_{i+1},\ldots,\varepsilon_{t},\eta_{t+1}=\eta_{i}\). The Lemma gives us a new tree \(\tilde{T}_{1}=\tilde{T}+\varepsilon_{i_{0}}-\eta_{i_{1}}+\varepsilon_{i_{1}}-\eta_{i_{2}}+\cdots+\varepsilon_{i_{s-1}}-\eta_{i_{s}}\), where \(\varepsilon_{i_{0}}=\varepsilon_{i}\) and \(\eta_{i_{s}}=\eta_{t+1}=\eta_{i}\). Hence \(\tilde{T}_{1}\) is a spanning tree that again realizes \(h\); moreover, \(\tilde{T}_{1}\subseteq\tilde{T}\cup T^{\prime}\subseteq T\cup T^{\prime}\). As none of the edges \(\varepsilon_{i},\eta_{i+1},\ldots,\varepsilon_{t},\eta_{t+1}\) is incident to \(e\), we also have \(\varepsilon\notin\tilde{T}_{1}\). Also, since \(\tilde{T}_{1}\Delta T^{\prime}\subseteq\tilde{T}\Delta T^{\prime}-\varepsilon_{i}\), by choosing \(\tilde{T}_{1}\) as our new \(\tilde{T}\), we obtain a new tree satisfying (4.2) that has a strictly smaller symmetric difference to \(T^{\prime}\) than the previous one.

**Lemma 4.10**.: _Let \(J\) and \(J^{\prime}\) be Jaeger trees realizing hypertrees \(x\) and \(x^{\prime}=x+\mathbf{1}_{a}-\mathbf{1}_{b}\). Suppose that the first difference between the tours of \(J\) and \(J^{\prime}\) is at \((g,gv)\) with \(gv\in J^{\prime}-J\). Then_

1. \(g\neq b\)_,_
2. \(x+\mathbf{1}_{a}-\mathbf{1}_{g}\) _is a hypertree,_
3. \(x+\mathbf{1}_{g}-\mathbf{1}_{b}\) _is a hypertree._

Note that \(a=g\) is possible, and in that case \((ii)\) and \((iii)\) are true by definition.

Proof.: Apply Lemma 4.9 with \(h=x\), \(h^{\prime}=x^{\prime}\), \(T=J\), \(T^{\prime}=J^{\prime}\), and \(e=g\). We get that there exists a hyperedge \(f\) such that \(x(f)>x^{\prime}(f)\); moreover, \(x-\mathbf{1}_{f}+\mathbf{1}_{g}\) and \(x^{\prime}+\mathbf{1}_{f}-\mathbf{1}_{g}\) are both hypertrees, and \(g<_{x}f\), \(g<_{x^{\prime}}f\). As \(x^{\prime}=x+\mathbf{1}_{a}-\mathbf{1}_{b}\), the only hyperedge with \(x(f)>x^{\prime}(f)\) is \(f=b\). Hence \(x-\mathbf{1}_{b}+\mathbf{1}_{g}\) and \(x^{\prime}+\mathbf{1}_{b}-\mathbf{1}_{g}=x+\mathbf{1}_{a}-\mathbf{1}_{g}\) are both hypertrees. Finally, as \(g<_{x}b\), we cannot have \(g=b\).

**Lemma 4.11**.: _Let \(x\) and \(x^{\prime}=x+\mathbf{1}_{a}-\mathbf{1}_{b}\) be two hypertrees with respective Jaeger trees \(J\) and \(J^{\prime}\)._
Suppose that the first difference in the tours of \(J\) and \(J^{\prime}\) is at \((g,gv)\). Then \(g\leq_{x}a\) and \(g\leq_{x}b\)._

Note that by the Jaeger property \(g\) is an emerald node, hence \(g\leq_{x}a\) and \(g\leq_{x}b\) indeed make sense.

**Proposition 4.12**.: _Suppose that \(T\) is a Jaeger tree with \(ev\notin T\), and suppose that the first node that we reach from \(C(T,ev)\) in the tour of \(T\) is not \(e\). Then, for each emerald node \(f\) of \(C(T,ev)\), either \(f=e\) or the last time \(f\) is visited in the tour of \(T\) is later than the last time \(e\) is visited in the tour of \(T\)._

Proof.: Let \(g\) be the first node (either violet or emerald) of \(C(T,ev)\) that is reached in the tour of \(T\). Then \(C(T,ev)\) consists of a path between \(g\) and \(e\), a path between \(g\) and \(v\), and the edge \(ev\). By the Jaeger property, \(ev\) is first reached at \(e\), hence the path between \(g\) and \(e\) is visited first. The nodes on this path are ancestors of \(e\) (or agree with \(e\)), hence their last visit is after the last visit of \(e\). The rest of the vertices of \(C(T,ev)\) (that is, the path between \(g\) and \(v\), excluding \(g\)) are only reached after the last visit of \(e\), hence this is true also for their last visits.

Proof of Lemma 4.11.: It is enough to prove the statement if \(gv\in J^{\prime}-J\). Indeed, if we know the statement for \(gv\in J^{\prime}-J\), but we have \(gv\in J-J^{\prime}\), then we can apply the Lemma for \(x^{\prime}\) and \(x=x^{\prime}+\mathbf{1}_{b}-\mathbf{1}_{a}\), and obtain that \(g\leq_{x^{\prime}}b\) and \(g\leq_{x^{\prime}}a\). However, as the tours of \(J\) and \(J^{\prime}\) agree until reaching \(g\), \(g\leq_{x^{\prime}}b\) implies \(g\leq_{x}b\) and \(g\leq_{x^{\prime}}a\) implies \(g\leq_{x}a\). Hence let us suppose from now on that \(gv\in J^{\prime}-J\). Apply Lemma 4.9 with \(T=J\), \(T^{\prime}=J^{\prime}\), \(h=x\), \(h^{\prime}=x^{\prime}\) and \(e=g\). We get that there exists a hyperedge \(f\) such that \(g<_{x}f\) and \(x(f)>x^{\prime}(f)\). As \(x^{\prime}=x+\mathbf{1}_{a}-\mathbf{1}_{b}\), we can only have \(f=b\), and hence \(g<_{x}b\). Now let us prove \(g\leq_{x}a\). Think of \(J\) as a rooted tree, with root \(b_{0}\) (the base vertex). Now we can talk about ancestors and descendants within \(J\). Note that if an emerald vertex \(e\) is a descendant of an emerald vertex \(f\) in \(J\), then we have \(f<_{x}e\). We start by showing that \(g\) is not a descendant of \(a\) in \(J\). Suppose for a contradiction that \(g\) is a descendant of \(a\). This means that the last visit of \(g\) in the tour of \(J\) precedes the last visit of \(a\). We will reach a contradiction by finding an alternative tree \(J^{\prime\prime}\) realizing \(x^{\prime}\) such that \(J^{\prime\prime}\prec J^{\prime}\). This will be a contradiction by Theorem 4.2, since we supposed that \(J^{\prime}\) is a Jaeger tree. Take \(e_{0}=a\) and let \(\varepsilon_{0}\) be an edge of \(J^{\prime}-J\) incident to \(a=e_{0}\) (such an edge exists since \(x^{\prime}(a)>x(a)\)). Take \(C(J,\varepsilon_{0})\) and \(C^{*}(J^{\prime},\varepsilon_{0})\). As the intersection of a cycle and a cut has an even number of edges, and \(\varepsilon_{0}\in C(J,\varepsilon_{0})\cap C^{*}(J^{\prime},\varepsilon_{0})\), there is also an edge \(\eta_{1}=e_{1}u_{1}\in C(J,\varepsilon_{0})\cap C^{*}(J^{\prime},\varepsilon_{0})-J^{\prime}\). We claim that the last visit of \(e_{1}\) is after the last visit of \(g\).
We need to look at two cases. If \(a\) is the first reached vertex within \(C(J,\varepsilon_{0})\), then \(\varepsilon_{0}\) is the first considered edge of \(C(J,\varepsilon_{0})\) in the tour of \(J\). Since \(gv\) is the first difference between \(J\) and \(J^{\prime}\), but also \(\varepsilon_{0}\in J^{\prime}-J\), we need to arrive at \(g\) before \((a,\varepsilon_{0})\). As we supposed that \(g\) is a descendant of \(a\), in this case the last visit of \(g\) is also before \((a,\varepsilon_{0})\). All the vertices on \(C(J,\varepsilon_{0})\) except for \(a\) are reached after \((a,\varepsilon_{0})\), hence we conclude that the last visit of \(e_{1}\) is after the last visit of \(g\). If \(C(J,\varepsilon_{0})\) is not reached at \(a\), then Proposition 4.12 implies that the last visit of \(e_{1}\) is after the last visit of \(a\), and hence also after the last visit of \(g\). If \(e_{1}=b\), then \(J^{\prime\prime}=J+\varepsilon_{0}-\eta_{1}\) is a spanning tree realizing \(x^{\prime}\). Moreover, as \(\varepsilon_{0},\eta_{1}\in J\Delta J^{\prime}\), and the last visits of both \(e_{0}\) and \(e_{1}\) are after the last visit of \(g\), we still have \(gv\notin J^{\prime\prime}\), and the tour of \(J^{\prime\prime}\) agrees with the tour of \(J^{\prime}\) until reaching \((g,gv)\). Hence \(J^{\prime\prime}\prec J^{\prime}\), contradicting Theorem 4.2. If \(e_{1}\neq b\), we continue looking for a tree realizing \(x^{\prime}\). Let us describe the general step of this process. In a general step, we will have a tree \(\tilde{J}\subseteq J\cup J^{\prime}\) with \(gv\in\tilde{J}\), realizing \(x^{\prime}\). We also have some \(t\geq 0\) and edges \(\varepsilon_{0},\eta_{1},\varepsilon_{1},\ldots,\eta_{t-1},\varepsilon_{t-1},\eta_{t}\) (in case \(t=0\), we do not have any edges) with the following properties: For \(i\in[0,t-1]\), \(\varepsilon_{i}=e_{i}v_{i}\in\tilde{J}-J\); for \(i\in[1,t]\), \(\eta_{i}=e_{i}u_{i}\in J-\tilde{J}\). The edges \(\varepsilon_{0},\eta_{1},\varepsilon_{1},\ldots,\varepsilon_{t-1},\eta_{t}\) are all distinct, \(\eta_{i+1}\in C(J,\varepsilon_{i})\) for \(i\in[0,t-1]\), and \(\varepsilon_{i}\in C(\tilde{J},\eta_{i+1})\) for \(i\in[0,t-1]\). We moreover require that \(e_{0}=a\), and that for each \(1\leq i\leq t\), the last visit of \(e_{i}\) in the tour of \(J\) is after the last visit of \(g\). At the beginning, \(\tilde{J}=J^{\prime}\) and \(t=0\) obviously satisfies the requirements. We show that if \(e_{t}\neq b\) (which also includes the case \(t=0\)), then we can either find an additional pair of edges \(\varepsilon_{t}\) and \(\eta_{t+1}\) such that the extended sequence of edges still satisfies the required properties, or we can find a new \(\tilde{J}\) realizing \(x^{\prime}\) such that \(|\tilde{J}\Delta J|\) strictly decreases (and take \(t=0\)). If \(e_{t}\neq b\), then in case \(e_{t}\neq a\), we have \(x(e_{t})=x^{\prime}(e_{t})\), and the number of so far chosen edges from \(J-\tilde{J}\) incident to \(e_{t}\) is one larger than the number of edges from \(\tilde{J}-J\). Hence in this case there is an edge \(\varepsilon_{t}=e_{t}v_{t}\in\tilde{J}-J\) that we have not chosen so far. If \(e_{t}=a\), then \(x^{\prime}(e_{t})=x(e_{t})+1\), but the number of so far chosen edges from \(J-\tilde{J}\) incident to \(e_{t}\) is equal to the number of edges from \(\tilde{J}-J\). Hence in this case, too, there is an edge \(\varepsilon_{t}=e_{t}v_{t}\in\tilde{J}-J\) that we have not chosen so far.
As the intersection of a cycle and a cut has an even number of edges, in both cases there is also an edge \(\eta_{t+1}=e_{t+1}u_{t+1}\in C(J,\varepsilon_{t})\cap C^{*}(\tilde{J},\varepsilon_{t})-\tilde{J}\). This also implies \(\varepsilon_{t}\in C(\tilde{J},\eta_{t+1})\). We claim that the last visiting time of \(e_{t+1}\) is after the last visiting time of \(g\). If \(C(J,e_{t}v_{t})\) was not reached at \(e_{t}\) in the tour of \(J\), then by Proposition 4.12, the last visiting time of \(e_{t+1}\) is after the last visiting time of \(e_{t}\), hence it is after the last visiting time of \(g\). If \(C(J,e_{t}v_{t})\) was reached at \(e_{t}\) in the tour of \(J\), then \(e_{t+1}\) is a descendant of \(e_{t}\). There are two cases: either \(g\) is a descendant of \(e_{t}\), or the first reaching time of \(e_{t}\) is after the last visiting time of \(g\). In the latter case, it is obvious that the last visiting time of \(e_{t+1}\) is after the last visiting time of \(g\). If \(e_{t}\) is an ancestor of \(g\), then, as \(e_{t}v_{t}\in\tilde{J}-J\), the pair \((g,gv)\) needs to precede \((e_{t},e_{t}v_{t})\) in the tour of \(J\), and hence by the time the tour of \(J\) is in \((e_{t},e_{t}v_{t})\), the node \(g\) is finished. Also, \(e_{t+1}\) is only reached after \((e_{t},e_{t}v_{t})\). Hence indeed, its last (and also first) visiting time is after the last visiting time of \(g\). If \(\eta_{t+1}\) does not agree with \(\eta_{i}\) for any \(i\in[1,t]\), then we have extended our sequence of edges such that \(\varepsilon_{0},\eta_{1},\varepsilon_{1},\ldots,\varepsilon_{t},\eta_{t+1}\) satisfies all required properties. If \(\eta_{t+1}=\eta_{i}\) for some previous \(i\), then apply Lemma 4.8 to \(\tilde{J}\) and the sequence of edges \(\eta_{t+1},\varepsilon_{t},\eta_{t},\ldots,\eta_{i+1},\varepsilon_{i}\). We get that there is a sequence of edges \(t+1=i_{0}>i_{1}>\cdots>i_{s}=i\) such that \(\tilde{J}_{1}=\tilde{J}+\{e_{i_{0}}v_{i_{0}},\ldots,e_{i_{s-1}}v_{i_{s-1}}\}-\{e_{i_{1}}u_{i_{1}},\ldots,e_{i_{s}}u_{i_{s}}\}\) is a spanning tree. As \(e_{i_{0}}=e_{t+1}=e_{i}=e_{i_{s}}\), \(\tilde{J}_{1}\) again realizes \(x^{\prime}\). Moreover, we have \(\tilde{J}_{1}\subseteq\tilde{J}\cup J\subseteq J^{\prime}\cup J\), and as the last visit of each \(e_{j}\) (\(0\leq j\leq t+1\)) is after the last visit of \(g\), the edge \(gv\) was not among the edges that we switched, hence also \(gv\in\tilde{J}_{1}\). As \(e_{i}u_{i}\in\tilde{J}-J\), we have \(|\tilde{J}_{1}\Delta J|<|\tilde{J}\Delta J|\), hence we can choose \(\tilde{J}_{1}\) as our new \(\tilde{J}\). Altogether, as \(J^{\prime}\Delta J\) is finite, we cannot have the above two cases infinitely many times, hence after a while, we need to have a case where \(e_{t}=b\). By Lemma 4.8 applied to \(J\) and our sequence of edges, there exists a subset of these edges such that \(i_{0}=0<i_{1}<\cdots<i_{s}=t\) and \(J^{\prime\prime}=J+e_{i_{0}}v_{i_{0}}-e_{i_{1}}u_{i_{1}}+e_{i_{1}}v_{i_{1}}-\cdots+e_{i_{s-1}}v_{i_{s-1}}-e_{i_{s}}u_{i_{s}}\) is a spanning tree. \(J^{\prime\prime}\) realizes \(x^{\prime}=x+\mathbf{1}_{a}-\mathbf{1}_{b}\). Moreover, as for each \(i\), the last visit of \(e_{i}\) is after the last visit of \(g\), none of the \(e_{i}\) agrees with \(g\). Hence none of the edges \(e_{0}v_{0},e_{1}u_{1},e_{1}v_{1},\ldots,e_{t-1}v_{t-1},e_{t}u_{t}\) agrees with \(gv\).
As \((g,gv)\) is the first difference between \(J\) and \(J^{\prime}\), and \(J^{\prime\prime}\subseteq J\cup\tilde{J}\subseteq J\cup J^{\prime}\), the first difference between \(J\) and \(J^{\prime\prime}\) is at \((g,gv)\), where \(gv\in J^{\prime}-J^{\prime\prime}\). This means that \(J^{\prime\prime}\prec J^{\prime}\), contradicting Theorem 4.2, as both trees realize \(x^{\prime}\). This proves that \(a\) cannot be an ancestor of \(g\) in \(J\). If \(a\) is not an ancestor of \(g\), then there are three possibilities. If \(g\) is an ancestor of \(a\) or \(g=a\), then \(g\leq_{x}a\). The only other possibility is that neither of \(a\) and \(g\) is an ancestor of the other. In this case, the last visit of the earlier visited node is before the first visit of the other node. As the first difference between the tours of \(J\) and \(J^{\prime}\) is at \(g\), and there is also at least one edge \(\varepsilon_{0}\) of \(J^{\prime}-J\) incident to \(a\), we conclude that \(g\) needs to be visited earlier than \(a\), since otherwise \(\varepsilon_{0}\) would be visited before \(gv\). Hence we also conclude that \(g\leq_{x}a\).

### Proof of Theorem 3.6

**Theorem 4.13**.: _Fix an arbitrary ribbon structure and basis. If \(h\) and \(h^{\prime}\) are two hypertrees of a hypergraph \(\mathcal{H}\), then \(C_{em}(h)\cap C_{em}(h^{\prime})=\emptyset\)._

Proof.: Let \(T\) and \(T^{\prime}\) be the respective Jaeger trees of \(h\) and \(h^{\prime}\). By symmetry, we may suppose that for the first difference \((e,\varepsilon)\) between the tours of \(T\) and \(T^{\prime}\), we have \(\varepsilon\in T^{\prime}-T\). (As both \(T\) and \(T^{\prime}\) are Jaeger trees, \(e\) needs to be a hyperedge.) Apply Lemma 4.9. Notice that the hyperedge \(f\) given by the Lemma has the following properties: \[h(f)>h^{\prime}(f),\] \[f\text{ is not internally embedding active in }h,\] \[f\text{ is not externally embedding active in }h^{\prime}. \tag{4.5}\] This implies that for each \(x\in C_{em}(h)\) and \(y\in C_{em}(h^{\prime})\) we have \(x(f)\geq h(f)>h^{\prime}(f)\geq y(f)\), hence \(x\) and \(y\) cannot be the same.

**Theorem 4.14**.: _For any \(c\in\mathbb{Z}^{E}\), there is at least one hypertree \(h\) such that \(c\in C_{em}(h)\) and, moreover, \(d_{1}(h,c)=d_{1}(H,c)\)._

Before proving the theorem, let us recall some constructions and prove some lemmas. For a given vector \(c\in\mathbb{Z}^{E}\), let \(H_{c}=\{h\in H\mid d_{1}(H,c)=d_{1}(h,c)\}\). We are looking for a hypertree \(h\in H_{c}\) such that \(c\in C_{em}(h)\). Our method will be the following: We define a complete ordering \(<_{c}\) of the hypertrees, which orders them according to the first differences between their Jaeger trees, in a way depending on \(c\). Then, we show that if a hypertree \(h\in H_{c}\) does not have \(c\in C_{em}(h)\), then we can find another hypertree \(h^{\prime}\in H_{c}\) with \(h^{\prime}<_{c}h\). As there are finitely many hypertrees, this implies that at some point, we need to find a hypertree \(h\in H_{c}\) with \(c\in C_{em}(h)\). To define the ordering \(<_{c}\), we need some more preparations.
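Before turning to these preparations, a small brute-force illustration of \(H_{c}\) may be helpful. Here we read \(d_{1}\) as the \(\ell^{1}\)-distance between integer vectors indexed by the hyperedges, which is the reading consistent with how \(d_{1}\) is manipulated below; hypertrees and \(c\) are represented as dictionaries, and all names are illustrative.

```python
# A brute-force sketch of H_c = { h : d_1(h, c) = d_1(H, c) }, assuming d_1
# is the l1-distance between integer vectors indexed by the hyperedges.

def d1(h, c):
    """l1-distance between a hypertree h and an arbitrary vector c in Z^E."""
    return sum(abs(h[e] - c[e]) for e in c)

def closest_hypertrees(hypertrees, c):
    """Return H_c, the hypertrees of minimal d_1-distance to c."""
    best = min(d1(h, c) for h in hypertrees)
    return [h for h in hypertrees if d1(h, c) == best]

# Example with two hypertrees on the hyperedges {e, f}:
hypertrees = [{"e": 1, "f": 0}, {"e": 0, "f": 1}]
c = {"e": 2, "f": 0}
print(closest_hypertrees(hypertrees, c))  # [{'e': 1, 'f': 0}]
```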
**Proposition 4.15**.: _If for an edge \(e\) there exists \(h\in H_{c}\) with \(h(e)>c(e)\), then for each \(h^{\prime}\in H_{c}\), we have \(h^{\prime}(e)\geq c(e)\); and symmetrically, if there exists \(h\in H_{c}\) with \(h(e)<c(e)\), then for each \(h^{\prime}\in H_{c}\), we have \(h^{\prime}(e)\leq c(e)\)._

Proof.: Suppose for a contradiction that there is a hyperedge \(e\) such that there are \(h,h^{\prime}\in H_{c}\) with \(h(e)<c(e)<h^{\prime}(e)\). Then, by Proposition 2.5, there exists a hyperedge \(f\) with \(h(f)>h^{\prime}(f)\) such that \(h+\mathbf{1}_{e}-\mathbf{1}_{f}\) and \(h^{\prime}-\mathbf{1}_{e}+\mathbf{1}_{f}\) are both hypertrees. Then \(d_{1}(h+\mathbf{1}_{e}-\mathbf{1}_{f},c)\leq d_{1}(h,c)\). As \(d_{1}(h,c)\) is minimal among all hypertrees, we need to have \(d_{1}(h+\mathbf{1}_{e}-\mathbf{1}_{f},c)=d_{1}(h,c)\). This implies that \(c(f)\geq h(f)\). Similarly, \(d_{1}(h^{\prime}-\mathbf{1}_{e}+\mathbf{1}_{f},c)\leq d_{1}(h^{\prime},c)\). As \(d_{1}(h^{\prime},c)\) is also minimal among all hypertrees, we need to have \(d_{1}(h^{\prime}-\mathbf{1}_{e}+\mathbf{1}_{f},c)=d_{1}(h^{\prime},c)\). Hence \(c(f)\leq h^{\prime}(f)\). These together imply that \(h(f)\leq h^{\prime}(f)\), which is a contradiction.

We say that a hyperedge \(e\) is _external_ with respect to \(c\) if there is a hypertree \(h\in H_{c}\) such that \(h(e)<c(e)\). We say that a hyperedge \(e\) is _internal_ with respect to \(c\) if there exists a hypertree \(h\in H_{c}\) with \(h(e)>c(e)\). Then, by Proposition 4.15, each hyperedge is either internal or external, or neither of them, in which case each \(h\in H_{c}\) has \(h(e)=c(e)\). Moreover, for an internal hyperedge, each \(h\in H_{c}\) has \(h(e)\geq c(e)\), and for an external hyperedge, each \(h\in H_{c}\) has \(h(e)\leq c(e)\). Now we are ready to define the ordering \(<_{c}\).

**Definition 4.16** (\(<_{c}\)).: Let \(h\) and \(h^{\prime}\) be two hypertrees with respective Jaeger trees \(T\) and \(T^{\prime}\). Suppose that the first difference between the tours of \(T\) and \(T^{\prime}\) is at \((e,ev)\), where \(ev\in T-T^{\prime}\). Then

1. if \(e\) is an external hyperedge, we define \(h^{\prime}<_{c}h\),
2. if \(e\) is an internal hyperedge, we define \(h<_{c}h^{\prime}\),
3. if \(e\) is neither an external nor an internal hyperedge, we define \(h<_{c}h^{\prime}\).

**Proposition 4.17**.: _For any \(c\in\mathbb{Z}^{E}\), the above defined \(<_{c}\) is a complete ordering of \(H\)._

Proof.: As any two hypertrees are comparable, we only need to prove transitivity. Suppose that \(h_{1}<_{c}h_{2}\) and \(h_{2}<_{c}h_{3}\), and let \(T_{1}\), \(T_{2}\), \(T_{3}\) be the respective Jaeger trees. Assume that the first difference between \(T_{1}\) and \(T_{2}\) is at \((e,\varepsilon)\), and the first difference between \(T_{2}\) and \(T_{3}\) is at \((e^{\prime},\varepsilon^{\prime})\). Suppose that \(e\) and \(e^{\prime}\) are both external with respect to \(c\). Then we might have \(e=e^{\prime}\), but in any case, \(\varepsilon\neq\varepsilon^{\prime}\), since \(\varepsilon\in T_{2}\) while \(\varepsilon^{\prime}\notin T_{2}\). If \((e,\varepsilon)\) precedes \((e^{\prime},\varepsilon^{\prime})\) in the tour of \(T_{2}\), then the first difference between \(T_{1}\) and \(T_{3}\) is \((e,\varepsilon)\), and this orders them as \(h_{1}\) and \(h_{2}\) are ordered, i.e., \(h_{1}<_{c}h_{3}\).
If \((e^{\prime},\varepsilon^{\prime})\) precedes \((e,\varepsilon)\) in the tour of \(T_{2}\), then the first difference between \(T_{1}\) and \(T_{3}\) is \((e^{\prime},\varepsilon^{\prime})\), and this implies that they are ordered as \(h_{2}\) and \(h_{3}\) are, that is, again, \(h_{1}<_{c}h_{3}\). If \(e\) is external and \(e^{\prime}\) is internal with respect to \(c\), then \(e\neq e^{\prime}\), and thus \(\varepsilon\neq\varepsilon^{\prime}\) again. The rest of the argument can be repeated. The reasoning is analogous also in the cases when \(e\) and \(e^{\prime}\) are both internal, and when \(e\) is internal and \(e^{\prime}\) is external with respect to \(c\).

Proof of Theorem 4.14.: Let us spell out the property \(c\in C_{em}(h)\) in another way: \[\text{If }e\text{ is an external edge of }c\text{ and }c(e)>h(e)\text{, then }e\text{ is externally embedding active in }h. \tag{4.6}\] \[\text{If }e\text{ is an internal edge of }c\text{ and }c(e)<h(e)\text{, then }e\text{ is internally embedding active in }h. \tag{4.7}\] Clearly, \(c\in C_{em}(h)\) is equivalent to (4.6) plus (4.7). Take an arbitrary hypertree \(h\in H_{c}\). If \(h\) satisfies (4.6) and (4.7), then we are done. If not, then we will modify \(h\) to get a new hypertree \(h^{\prime}\in H_{c}\) with \(h^{\prime}<_{c}h\). As \(<_{c}\) is a complete ordering on hypertrees and there are finitely many hypertrees, at some point we need to have a hypertree \(h\) satisfying (4.6) and (4.7). If \(h\) does not satisfy either (4.6) or (4.7), then there is either an external hyperedge \(e\) not satisfying (4.6) or an internal hyperedge \(e\) not satisfying (4.7) (or both). We next show how to modify \(h\) in these two cases. Suppose that \(e\) is an external hyperedge such that \(c(e)>h(e)\) but \(e\) is not externally embedding active in \(h\). By the definition of external embedding activity, this means that there exists a hyperedge \(f<_{h}e\) such that \(h^{\prime}=h+\mathbf{1}_{e}-\mathbf{1}_{f}\) is also a hypertree. Note that such an \(f\) is necessarily an external hyperedge, and \(h^{\prime}\in H_{c}\). Indeed, \(h\in H_{c}\) implies \(d_{1}(h,c)\leq d_{1}(h^{\prime},c)\). As \(|c(e)-h^{\prime}(e)|<|c(e)-h(e)|\), we conclude that \(c(f)>h^{\prime}(f)\); thus, \(f\) is also external, and \(h^{\prime}\in H_{c}\). We show that we can choose \(f\) such that \(h^{\prime}<_{c}h\). More precisely, we show that if we choose \(f\) to be the earliest emerald node according to \(<_{h}\) such that \(h+\mathbf{1}_{e}-\mathbf{1}_{f}\) is a hypertree, then for the obtained \(h^{\prime}=h+\mathbf{1}_{e}-\mathbf{1}_{f}\) and its Jaeger tree \(T^{\prime}\), the first difference between the tours of \(T\) and \(T^{\prime}\) is at a pair \((f,fv)\), where \(fv\in T-T^{\prime}\). As \(f\) is an external hyperedge (with respect to \(c\)), this means that \(h^{\prime}<_{c}h\). Suppose for a contradiction that \(f\) is chosen to be the earliest emerald node according to \(<_{h}\) such that \(h+\mathbf{1}_{e}-\mathbf{1}_{f}\) is a hypertree, but the above properties do not hold for the Jaeger trees \(T\) and \(T^{\prime}\). There are three ways in which the property can be violated. Case 1: The first difference between \(T\) and \(T^{\prime}\) is at \((g,gv)\), where \(g\neq f\) and \(gv\in T-T^{\prime}\). By Lemma 4.11, applied to \(x=h\), \(x^{\prime}=h^{\prime}\), \(a=e\), \(b=f\), in this case \(g\leq_{h}f\) and \(g\leq_{h}e\).
By part (ii) of Lemma 4.10, applied with \(J=T^{\prime}\), \(J^{\prime}=T\), \(x=h^{\prime}\), \(a=f\), \(b=e\), we get that \(h^{\prime}+\mathbf{1}_{f}-\mathbf{1}_{g}=h+\mathbf{1}_{e}-\mathbf{1}_{g}\) is also a hypertree. This contradicts the fact that \(f\) was the earliest emerald node according to \(<_{h}\) such that \(h+\mathbf{1}_{e}-\mathbf{1}_{f}\) is a hypertree. Case 2: The first difference between \(T\) and \(T^{\prime}\) is at \((g,gv)\), where \(g\neq f\) and \(gv\in T^{\prime}-T\). By Lemma 4.11, in this case \(g\leq_{h}f\) and \(g\leq_{h}e\). By part (ii) of Lemma 4.10, applied for \(J=T\), \(J^{\prime}=T^{\prime}\), \(x=h\), \(a=e\), \(b=f\), we get that \(h+\mathbf{1}_{e}-\mathbf{1}_{g}\) is also a hypertree. This contradicts the fact that \(f\) was the earliest emerald node according to \(<_{h}\) such that \(h+\mathbf{1}_{e}-\mathbf{1}_{f}\) is a hypertree. Case 3: The first difference between \(T\) and \(T^{\prime}\) is at \((f,fv)\), but we have \(fv\in T^{\prime}-T\). This is impossible by (i) of Lemma 4.10 (applied for \(J=T\), \(J^{\prime}=T^{\prime}\), \(a=e\), \(b=f\)). Now let us look at the case when there is an internal hyperedge violating (4.7). Let \(e\) be an internal hyperedge such that \(c(e)<h(e)\) and \(e\) is not internally embedding active in \(h\). By the definition of internal embedding activity, this means that there exists a hyperedge \(f<_{h}e\) such that \(h^{\prime}=h-\mathbf{1}_{e}+\mathbf{1}_{f}\) is also a hypertree. Note that \(f\) is necessarily an internal hyperedge, and \(h^{\prime}\in H_{c}\). Indeed, \(h\in H_{c}\) implies \(d_{1}(h,c)\leq d_{1}(h^{\prime},c)\). As \(|c(e)-h^{\prime}(e)|<|c(e)-h(e)|\), we conclude that \(c(f)<h^{\prime}(f)\); thus, \(f\) is also internal, and \(h^{\prime}\in H_{c}\). We show that we can choose \(f\) such that \(h^{\prime}<_{c}h\). More precisely, we will show that if we choose \(f\) to be the earliest emerald node according to \(<_{h}\) such that \(h-\mathbf{1}_{e}+\mathbf{1}_{f}\) is a hypertree, then for the obtained \(h^{\prime}=h-\mathbf{1}_{e}+\mathbf{1}_{f}\) and its Jaeger tree \(T^{\prime}\), the first difference between the tours of \(T\) and \(T^{\prime}\) is at a pair \((f,fv)\), where \(fv\in T^{\prime}-T\). As \(f\) is an internal hyperedge (with respect to \(c\)), this means that \(h^{\prime}<_{c}h\). Suppose for a contradiction that \(f\) is chosen to be the earliest emerald node according to \(<_{h}\) such that \(h-\mathbf{1}_{e}+\mathbf{1}_{f}\) is a hypertree, but the above properties do not hold for the Jaeger trees. There are three ways in which the property can be violated. Case 1: The first difference between \(T\) and \(T^{\prime}\) is at \((g,gv)\), where \(g\neq f\) and \(gv\in T-T^{\prime}\). By Lemma 4.11, applied with \(x=h\), \(x^{\prime}=h^{\prime}\), \(a=f\) and \(b=e\), in this case \(g\leq_{h}f\) and \(g\leq_{h}e\). By (iii) of Lemma 4.10, applied with \(J=T^{\prime}\), \(J^{\prime}=T\), \(x=h^{\prime}\), \(x^{\prime}=h\), \(a=e\) and \(b=f\), we get that \(h^{\prime}+\mathbf{1}_{g}-\mathbf{1}_{f}=h-\mathbf{1}_{e}+\mathbf{1}_{g}\) is a hypertree. This contradicts the fact that \(f\) was the earliest emerald node according to \(<_{h}\) such that \(h-\mathbf{1}_{e}+\mathbf{1}_{f}\) is a hypertree. Case 2: The first difference between \(T\) and \(T^{\prime}\) is at \((g,gv)\), where \(g\neq f\) and \(gv\in T^{\prime}-T\). By Lemma 4.11, in this case \(g\leq_{h}f\) and \(g\leq_{h}e\).
By (iii) of Lemma 4.10, applied with \(J=T\), \(J^{\prime}=T^{\prime}\), \(x=h\), \(x^{\prime}=h^{\prime}\), \(a=f\) and \(b=e\), we get that \(h+\mathbf{1}_{g}-\mathbf{1}_{e}\) is a hypertree. This again contradicts the fact that \(f\) was the earliest emerald node according to \(<_{h}\) such that \(h-\mathbf{1}_{e}+\mathbf{1}_{f}\) is a hypertree. Case 3: The first difference between \(T\) and \(T^{\prime}\) is at \((f,fv)\), but we have \(fv\in T-T^{\prime}\). This is impossible by (i) of Lemma 4.10, applied for \(J=T^{\prime}\), \(J^{\prime}=T\), \(x=h^{\prime}\), \(x^{\prime}=h\), \(a=e\) and \(b=f\).

Proof of Theorem 3.6.: The statement follows from Theorems 4.13 and 4.14.

## 5. Open questions concerning variants of embedding activities

Let us recall an alternative definition for embedding activities from [6]. In this paper, Jaeger trees were defined via the rule that each non-tree edge is first seen at its emerald endpoint. However, as mentioned in Remark 2.10, one could also define Jaeger trees with the rule that each non-tree edge is first seen at its violet endpoint (see Figure 4 for an example). In [6] both types of trees are investigated, and they are called emerald and violet Jaeger trees, respectively. It is also true that for any hypertree \(h\) of \(\mathcal{H}\), there is a unique violet Jaeger tree representing \(h\). Hence one could potentially associate an ordering to \(h\) using the violet Jaeger tree. There are in fact two natural ways to do this. One possibility is to define the ordering \(<_{h}^{violet}\) as the ordering in which emerald nodes are reached (that is, become the current node) in the tour of \(T\), where \(T\) is the violet Jaeger tree representing \(h\). Another possibility is to take the ordering \(<_{h}^{violet^{\prime}}\), which is the ordering in which emerald nodes first appear as an endpoint of the current edge in the tour of \(T\), where \(T\) is the violet Jaeger tree representing \(h\).

**Example 5.1**.: Figure 4 shows the unique violet Jaeger tree \(T\) for the hypertree \(h\) with \(h(e_{0})=h(e_{1})=1\), \(h(e_{2})=h(e_{3})=0\). We have \(e_{0}<_{h}^{violet}e_{1}<_{h}^{violet}e_{2}<_{h}^{violet}e_{3}\), since this is the order in which emerald nodes become current in the tour of \(T\). However, we have \(e_{0}<_{h}^{violet^{\prime}}e_{2}<_{h}^{violet^{\prime}}e_{1}<_{h}^{violet^{\prime}}e_{3}\), since \((v_{1},v_{1}e_{2})\) precedes \((v_{1},v_{1}e_{1})\) in the tour of \(T\).

Note that for an emerald Jaeger tree, these two orders coincide: if an emerald node \(e\) first appears as an endpoint of the current edge, with current node-edge pair \((v,ve)\), then in the next step the current node needs to be \(e\). This is not true, however, if \(T\) is a violet Jaeger tree, as the previous example shows. In some sense, \(<_{h}^{violet}\) seems to be the more natural ordering; however, it is easy to come up with examples where it does not give \(\mathcal{T}_{\mathcal{H}}\). On the other hand, \(<_{h}^{violet^{\prime}}\) does seem to yield \(\mathcal{T}_{\mathcal{H}}\).

**Conjecture 5.2**.: Embedding activities defined via \(<_{h}^{violet^{\prime}}\) produce the polynomial \(\mathcal{T}_{\mathcal{H}}\) for hypergraphs.
2309.13390
Sens-BERT: Enabling Transferability and Re-calibration of Calibration Models for Low-cost Sensors under Reference Measurements Scarcity
Low-cost sensor measurements are noisy, which limits their large-scale adoption in air quality monitoring. Calibration is generally used to obtain good estimates of air quality measurements from LCS. To do this, LCS are typically co-located with reference stations for some duration. A calibration model is then developed to transfer the LCS measurements to the reference station measurements. Existing works implement the calibration of LCS as an optimization problem, in which a model is trained with the data obtained from real-time deployments; later, the trained model is employed to estimate the air quality measurements of that location. However, this approach is sensor-specific and location-specific and needs frequent re-calibration. The re-calibration also needs massive data, like the initial calibration, which is a cumbersome process in practical scenarios. To overcome these limitations, in this work, we propose Sens-BERT, a BERT-inspired learning approach to calibrate LCS, which achieves the calibration in two phases: self-supervised pre-training and supervised fine-tuning. In the pre-training phase, we train Sens-BERT with only LCS data (without reference station observations) to learn the data distributional features and produce corresponding embeddings. We then use the Sens-BERT embeddings to learn a calibration model in the fine-tuning phase. Our proposed approach has many advantages over the previous works. Since Sens-BERT learns the behaviour of the LCS, it can be transferred to any sensor of the same sensing principle without explicitly training on that sensor. It requires only LCS measurements in pre-training to learn the characteristics of LCS, thus enabling calibration even with a tiny amount of paired data in fine-tuning. We have exhaustively tested our approach with the Community Air Sensor Network (CAIRSENSE) data set, an open repository for LCS.
M V Narayana, Kranthi Kumar Rachvarapu, Devendra Jalihal, Shiva Nagendra S M
2023-09-23T14:14:25Z
http://arxiv.org/abs/2309.13390v1
_Sens-BERT_: Enabling Transferability and Re-calibration of Calibration Models for Low-cost Sensors under Reference Measurements Scarcity ###### Abstract. Low-cost sensors (LCS) are becoming increasingly relevant for monitoring and understanding air quality at high spatial and temporal resolution. However, LCS measurements are noisy, which limits large-scale adoption. Calibration is generally used to obtain good estimates of air quality measurements from LCS. In order to do this, LCS are typically co-located with reference stations for some duration. A calibration model is then developed to transfer the LCS measurements to the reference station measurements. Existing works implement the calibration of LCS as an optimization problem in which a model is trained with the data obtained from real-time deployments; later, the trained model is employed to estimate the air quality measurements of that location. However, this approach is sensor-specific and location-specific and needs frequent re-calibration. The re-calibration also needs massive data, like the initial calibration, which is a cumbersome process in practical scenarios. To overcome these limitations, in this work, we propose _Sens-BERT_, a BERT-inspired learning approach to calibrate LCS, which achieves the calibration in two phases: self-supervised pre-training and supervised fine-tuning. In the pre-training phase, we train _Sens-BERT_ with only LCS data (without reference station observations) to learn the data distributional features and produce corresponding embeddings. We then use the _Sens-BERT_ embeddings to learn a calibration model in the fine-tuning phase. Our proposed approach has many advantages over the previous works. Since _Sens-BERT_ learns the behavior of the LCS, it can be transferred to any sensor of the same sensing principle without explicitly training on that sensor. It requires only LCS measurements in pre-training to learn the characteristics of LCS, thus enabling calibration even with a tiny amount of paired data in fine-tuning. We have exhaustively tested our approach with the Community Air Sensor Network (CAIRSENSE) data set, an open repository for LCS. We show that the proposed method outperforms well-known calibration models such as single-variable linear regression, multiple-variable linear regression, and Random forest models. Keywords: LCS, Deep Learning, BERT ## 1. Introduction Air pollution is a significant environmental concern that affects human health and well-being. Monitoring air quality is crucial for understanding the levels of pollutants in the atmosphere and taking appropriate actions to mitigate their effects. While professional-grade air quality monitoring (AQM) systems are widely used, they can be expensive and limited in availability (Bertson et al., 2016). To address this issue, low-cost sensors (LCS) for AQM have emerged as a viable solution. LCS provide an affordable and accessible means of measuring various air pollutants, enabling individuals, communities, and organizations to monitor their local air quality actively (Bertson et al., 2016; Bertson et al., 2017; Bertson et al., 2018). These sensors utilize innovative technologies and compact designs to deliver real-time data on pollutant concentrations, enabling timely responses and informed decision-making.
LCS can measure parameters such as particulate matter (\(PM\)) and gases like carbon dioxide (CO\({}_{2}\)), carbon monoxide (CO), nitrogen dioxide (NO\({}_{2}\)), and volatile organic compounds (VOCs). They may also include temperature (\(T\)) and humidity (\(Rh\)) sensors to assess environmental conditions comprehensively. Despite these advantages, the measurements produced by the LCS are susceptible to various error sources that can affect the accuracy, rendering the LCS less reliable than reference-grade instruments. Typical error sources include variations in temperature and relative humidity, cross-sensitivities (sensitivity towards pollutants other than the target parameters), and drift (Bertson et al., 2016; Bertson et al., 2016). Therefore, proper calibration is crucial to address these issues, and by doing so, we can improve the accuracy of LCS measurements [7]. Calibration involves transforming the LCS measurements to align with reference station observations. Applying correction factors to the sensor signal for the respective parameters influencing its response is a preliminary way to calibrate the LCS [8, 9, 10, 11]. For instance, \(\mathcal{S}\) is an LCS signal, which is corrected for the effect of temperature (\(T\)) and relative humidity (\(Rh\)) in the equation below. \[\hat{\mathcal{S}}=\mathcal{S}+\alpha_{1}(T)+\alpha_{2}(Rh) \tag{1}\] where \(\alpha_{1}(.)\) and \(\alpha_{2}(.)\) are the correction factors modelled by exposing the LCS to various \(T\) and \(Rh\) values in controlled environments. The correction factors are sensor-specific and often derived based on a limited number of calibration points. As a result, they may not accurately account for variations in the sensor's response across its entire operating range. At the same time, LCS may experience calibration drift over time due to environmental factors, sensor ageing, or changes in response characteristics, which calls for frequent re-modelling of the correction factors, a cumbersome process in practice. To overcome the limitations of the correction-factors approach, a learning-based approach has been proposed to calibrate the LCS [12, 13, 14]. In this approach, calibration of LCS is framed as an optimization problem, where a calibration model (\(f_{\theta}\)) is trained to minimize the mean square error (_MSE_) between the transformed LCS measurements (\(\mathcal{X}_{t}\)) and the reference station observations (\(\mathcal{Y}_{t}\)). Since the calibration models are trained under ambient conditions in real-time deployments, they can handle the error sources effectively, provided they are trained on a substantial amount of data that covers all these variables. However, in the existing learning-based calibration works, the calibration model is sensor-specific and location-specific, since the optimization is limited to a particular scenario and does not generalize the characteristics of the LCS measurements. Due to this, their calibration accuracy is limited, and transferring calibration models trained on one sensor to other sensors further limits their effectiveness in calibrating low-cost sensors. At the same time, frequent re-calibration is needed to handle the calibration drift, which requires substantial co-location measurements of LCS and reference stations. In summary, the existing learning-based works mainly focus on transforming the LCS measurements without learning their distributional characteristics, such as temporal and environmental dependencies.
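To make the correction-factor approach of equation (1) concrete, the sketch below applies pre-fitted corrections to a raw sensor signal. The functional forms of \(\alpha_{1}\) and \(\alpha_{2}\) here are hypothetical placeholders: real corrections are sensor-specific and fitted in controlled chamber experiments.

```python
# A minimal sketch of correction-factor calibration (equation (1)).
import numpy as np

def alpha_1(T):
    # hypothetical temperature correction, fitted offline
    return 0.02 * (T - 25.0)

def alpha_2(Rh):
    # hypothetical humidity correction, active above 60% relative humidity
    return 0.05 * np.maximum(Rh - 60.0, 0.0)

def correct_signal(S, T, Rh):
    """Equation (1): S_hat = S + alpha_1(T) + alpha_2(Rh)."""
    return S + alpha_1(T) + alpha_2(Rh)

S = np.array([12.3, 14.1, 15.0])    # raw LCS readings
T = np.array([30.0, 31.0, 32.0])    # temperature, deg C
Rh = np.array([70.0, 72.0, 75.0])   # relative humidity, %
print(correct_signal(S, T, Rh))
```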
It is essential to learn the distributional information of the measurements to generalize the calibration model to the LCS and to improve the calibration accuracy with limited paired data. To address these limitations, we propose _Sens-BERT_, a BERT-based (Bidirectional Encoder Representations from Transformers) [15] approach to calibrate low-cost sensors, which achieves the calibration in two steps: pre-training and fine-tuning. The pre-training step is self-supervised, where a BERT-based model, _Sens-BERT_, is trained only on LCS measurements without using the reference station observations; this pre-training enables _Sens-BERT_ to learn the characteristics of LCS measurements. Therefore, the pre-trained _Sens-BERT_ can produce the embeddings corresponding to the LCS measurements, which are then used in the fine-tuning step, where a calibration model can be trained with limited paired data. At the same time, the pre-trained _Sens-BERT_ can be utilized in re-calibration, and it can be transferred to other sensors of the same sensing principle that need to be calibrated, since it has already learned the data characteristics. To the best of our knowledge, this is the first work that implements a BERT-based approach to calibrate the LCS, and our contributions are as follows:

* We implement a BERT-based deep learning architecture, Sens-BERT, to learn the characteristics of LCS measurements, which enables calibration and re-calibration with limited paired data.
* We empirically validate the transferability of _Sens-BERT_ to other sensors without explicitly pre-training it on those sensors.
* We show that the proposed approach outperforms the existing optimization-based calibration works for different sensors in the CAIRSENSE data set [16], an open repository for the LCS.

## 2. Related Work

Calibration approaches for low-cost sensors (LCS) have been an active area of research, and several works have been disseminated to enhance the accuracy of LCS through calibration. This section summarises the existing works on the calibration of LCS. **Laboratory and field studies.** Numerous laboratory and field investigations have been undertaken to gather data from LCS deployed in diverse environments (Kang et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019). This data collection enables examining sensor responses across different conditions and provides valuable inputs for developing calibration techniques. Concurrently, comparing LCS measurements with those obtained from established air quality monitoring stations facilitates the validation of LCS performance. **Correction factors approach.** The data collected in both field and laboratory settings enables the development of correction factors for the sensor response, effectively addressing biases in the sensor response. Carl et al. (Carl et al., 2019) and Barkjohn et al. (Barkjohn et al., 2019) applied a hygroscopic growth factor correction to the PM sensor response in order to correct the effect of relative humidity on PM sensors. Similarly, gas sensors have also been corrected for meteorological parameters to improve accuracy (Li et al., 2019; Li et al., 2019; Li et al., 2019). **Machine learning-based calibration.** Machine learning algorithms have been employed to calibrate LCS. The algorithm learns the relationship between the sensor responses and the actual pollutant concentrations by training on data collected from LCS and corresponding reference instruments.
This enables the algorithm to make more accurate predictions and calibrate the sensor readings. These models include single-variable linear regression, multiple-variable linear regression, and Random forest models. Let \(\mathcal{X}_{t}\) contain the measurements of an LCS at time \(t\), and let \(\mathcal{Y}_{t}\) consist of the corresponding reference station observations. Traditionally, this problem is approached as a learning-based optimization problem, as shown in Fig. 1, where the parameters (\(\theta\)) of a calibration model (\(f_{\theta}\)) are optimized such that the model transforms \(\mathcal{X}_{t}\) to \(\mathcal{Y}_{t}\), as shown in equations (2) and (3). \[\hat{\mathcal{Y}}_{t}=f_{\theta}(\mathcal{X}_{t})\qquad\forall t \tag{2}\] \[\theta^{*}=\arg\min_{\theta}\frac{1}{N}\sum_{t=1}^{N}MSE(\hat{\mathcal{Y}}_{t},\mathcal{Y}_{t}) \tag{3}\] However, the measurements in \(\mathcal{X}_{t}\) show temporal dependency, which means the current observations are greatly influenced by previous measurements [44], and these temporal dependencies cannot be tackled by the traditional approaches. Therefore, we hypothesize that the calibration model (\(f_{\theta}\)) should learn the temporal dependencies and other distributional information of \(\mathcal{X}_{t}\) to improve the calibration accuracy, as shown in equation (4). \[\hat{\mathcal{Y}}_{t}=f_{\theta}(\mathcal{X}_{t-M},\mathcal{X}_{t-M+1},\ldots,\mathcal{X}_{t})\qquad\forall t \tag{4}\] where \(M\) is the temporal context slice, an experimental hyperparameter. Since the calibration model \(f_{\theta}\) in equation (4) focuses only on transforming \(\mathcal{X}_{t}\) to \(\mathcal{Y}_{t}\) without learning the temporal dependencies and distributional information, we alter \(f_{\theta}\) such that it first learns the characteristics of the measurements and then transforms the LCS measurements to reference station observations. We propose the two-step calibration framework shown in Fig. 2 to implement such a calibration process. In the proposed approach, we first focus on learning the temporal dependencies and other distributional information of \(\mathcal{X}_{t}\) in the self-supervised pre-training step, using a BERT-based deep learning architecture, _Sens-BERT_. The _Sens-BERT_ shown in Fig. 2 comprises an encoder and a decoder to learn the characteristics of LCS measurements. To force _Sens-BERT_ to learn the characteristics of LCS measurements, we mask sequences of samples in \(\mathcal{X}_{t}\) using the span masking technique [45] with probability \(P\), and then operate a sliding window of length \(M\), with an overlapping length of \(O_{l}\) between two successive windows, so that it feeds _Sens-BERT_ with temporal context. Let \(\mathcal{X}\in\mathbb{R}^{M\times K}\) be a sample obtained by operating the sliding window on \(\mathcal{X}_{t}\) at time \(t\); then the encoder transforms the measurements in \(\mathcal{X}\), including the masked samples, into embeddings (\(\mathcal{E}\)), and the decoder re-transforms \(\mathcal{E}\) to \(\hat{\mathcal{X}}\), the estimate of \(\mathcal{X}\). Therefore, in the pre-training phase, _Sens-BERT_ transforms \(\mathcal{X}\) to \(\mathcal{E}\) and then to \(\hat{\mathcal{X}}\), as shown in equations (5) and (6), and learns the characteristics of \(\mathcal{X}_{t}\) while doing these transformations, by minimizing the loss in equation (7), that is, the _MSE_ between \(\mathcal{X}\) and \(\hat{\mathcal{X}}\).

Figure 1: Calibration of LCS as an optimization problem in the existing works
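Before moving on to the pre-training equations, a minimal sketch of the baseline formulation in equations (2) and (3) may be useful. We use scikit-learn estimators as stand-ins for the single-variable/multiple-variable linear regression and Random forest baselines; the synthetic co-location data is illustrative only.

```python
# A sketch of the baseline optimization in equations (2)-(3): fit f_theta on
# co-located pairs (X_t, Y_t) by minimizing the MSE.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((1000, 3))     # LCS channels, e.g. raw PM, T, Rh (synthetic)
y = 2.0 * X[:, 0] - 0.3 * X[:, 2] + 0.05 * rng.standard_normal(1000)

X_train, X_test, y_train, y_test = X[:800], X[800:], y[:800], y[800:]
for model in (LinearRegression(), RandomForestRegressor(n_estimators=100)):
    model.fit(X_train, y_train)      # optimize theta (equation (3))
    y_hat = model.predict(X_test)    # calibrated estimates (equation (2))
    print(type(model).__name__, mean_squared_error(y_test, y_hat))
```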
\[\mathcal{E}=f_{enc}(\mathcal{X});\quad\mathcal{X}\in\mathbb{R}^{M\times K}\sim\{\mathcal{X}_{t}\}_{t=1}^{T}\quad\forall t \tag{5}\]
\[\widehat{\mathcal{X}}=f_{dec}(\mathcal{E}) \tag{6}\]
\[\text{loss}_{\text{pre-train}}=\frac{1}{N}\sum_{i=1}^{N}MSE(\mathcal{X},\widehat{\mathcal{X}}) \tag{7}\]

Figure 1: Calibration of LCS as an optimization problem in the existing works.

Once _Sens-BERT_ is trained in pre-training, it can be used in the second step, the fine-tuning phase, where a calibration model (\(f_{\theta}\)) is trained with the generated embeddings against the reference station observations \(\mathcal{Y}_{t}\), as shown in the equations below.

\[\mathcal{E}=f_{enc}(\mathcal{X})\qquad\forall t \tag{8}\]
\[\hat{\mathcal{Y}}_{t}=f_{\theta}(\mathcal{E}) \tag{9}\]
\[\text{loss}_{\text{fine-tune}}=\frac{1}{N}\sum_{t=1}^{N}MSE(\hat{\mathcal{Y}}_{t},\mathcal{Y}_{t}) \tag{10}\]

Since our approach learns the temporal dependencies and other distributional information of the LCS measurements, it can calibrate the LCS effectively with limited paired data. At the same time, the pre-trained _Sens-BERT_ can be transferred to other sensors which need to be calibrated.

### 3.2. Characteristics of LCS data

Generally, the low-cost sensors predominant in AQM are divided into particle sensors, which work on the light-scattering principle, and gas sensors, which work on the metal-oxide or electro-chemical principle. Particle sensors are greatly influenced by relative humidity (\(Rh\)) due to the hygroscopic properties of particulate matter (\(PM\)) [41]. At higher relative humidity, tiny particles floating around (aerosols) tend to stick together more because of the water vapour, which significantly changes the amount of scattered light, thus changing the response of the sensors.

Figure 2: Calibration of low-cost sensors with _Sens-BERT_.

To illustrate the effect of relative humidity on particle sensors, we plotted sample data of AirAssure, an LCS that works on the light-scattering principle, from the CAIRSENSE data set [(16)] in Fig. 3(a). The graph is plotted with PM\({}_{2.5}\) (\(PM\) of size less than \(2.5\)\(\mu m\)) on the y-axis and \(Rh\) on the x-axis. From Fig. 3(a), it can be observed that the PM\({}_{2.5}\) values show evident fluctuations at higher \(Rh\), which is highlighted in red shade.

Temperature (\(T\)) and relative humidity (\(Rh\)) are the prime environmental factors influencing the accuracy of gas sensors. In metal-oxide sensors, these factors influence the resistance of the heating element under the metal-oxide surface; the resulting change in sensitivity causes inaccurate measurements [(46; 8)]. Fig. 3(b), plotted between the temperature and the response of Aeroqual, a metal-oxide sensor that measures \(O_{3}\), shows an apparent drift (drift in the trend line) in the response with increasing temperature. In the case of electrochemical sensors, temperature changes the rate of the chemical reaction and moisture content depletes the sensors' electrodes, thus changing the response of the sensors [(11)]. Further, they suffer from cross-sensitivity issues, which means they are also sensitive to other gases in the atmosphere [(47; 48)]. Based on the above observations, we select the variables that need to be considered in the calibration implementation, discussed in Sec. 4.

### 3.3. Sens-BERT

_Sens-BERT_, shown in Fig. 2, is implemented on the BERT architecture, consisting of an encoder and a decoder.
The encoder transforms the given sequences (\(\mathcal{X}\)) into embeddings (\(\mathcal{E}\in\mathbb{R}^{H_{dim}\times M}\)), and the decoder re-transforms the embeddings into sequences, as represented in the equations below. While doing these transformations, it learns the characteristics of low-cost sensor measurements by minimizing the loss in equation (7). A _Sens-BERT_ that has learned the characteristics of the LCS can be utilised to calibrate the LCS with limited paired data. At the same time, it can be transferred to other sensors that need a fresh calibration, without explicitly training _Sens-BERT_ on those sensors.

Figure 3: Effect of temperature and relative humidity on the response of low-cost sensors. The plots are drawn with the data from the CAIRSENSE data set [(16)].

Calibration of LCS with _Sens-BERT_ is achieved in two steps:

1. _Self-supervised pre-training_ to learn good representations using low-cost sensor measurements alone.
2. _Supervised fine-tuning_ to learn the final calibration model.

#### 3.3.1. Self-supervised pre-training phase

The objective of the pre-training phase is to learn the representations of the LCS measurements \((\mathcal{X}_{t})\) with _Sens-BERT_, and the sequence of steps involved in the pre-training phase is illustrated in Fig. 4. To force _Sens-BERT_ to learn the representations of \(\mathcal{X}_{t}\), we first mask sequences of samples in \(\mathcal{X}_{t}\) with probability \(P\) by using the span masking technique [45]. Next, we apply a sliding window of size \(M\) to \(\mathcal{X}_{t}\), as shown in equation (11), to slice \(M\) samples from \(\mathcal{X}_{t}\) at every time \(t\). Here, \(M\) is the temporal context slice, an experimental parameter, \(K\) is the number of variables (parameters considered for calibration) in \(\mathcal{X}_{t}\), and \(N\) is the number of samples. The sliding-window operation passes temporal context to _Sens-BERT_, so that it can learn the temporal dependencies in the measurements.

\[\mathcal{X}\in\mathbb{R}^{M\times K}\sim\{\mathcal{X}_{t}\}_{t=1}^{T};\quad\mathcal{X}_{t}\in\mathbb{R}^{N\times K},\quad\mathcal{X}=\{\mathcal{X}_{t-M},\mathcal{X}_{t-M+1},\ldots,\mathcal{X}_{t}\}\quad\forall t \tag{11}\]

Then we apply a projection (\(\textbf{Proj}(.)\)) operation to the input sequences in \(\mathcal{X}\). Since the number of features or variables (\(K\)) in \(\mathcal{X}\) is too small to feed to the BERT architecture [49], that is, the encoder and decoder of _Sens-BERT_, the measurements in \(\mathcal{X}\) need to be projected to a higher-dimensional space, and we implement this using a linear layer, shown in equation (12).

Figure 4: Self-supervised pre-training phase of Sens-BERT with transformer architecture.

\[D=\textbf{Proj}(\mathcal{X})=\textbf{W}\mathcal{X}^{T} \tag{12}\]

where **W** is a weight matrix of size \(H_{dim}\times K\) and \(H_{dim}\) is the hidden dimension size, the dimension-expansion parameter of the neural network, so that \(D\in\mathbb{R}^{H_{dim}\times M}\). Once the values in \(\mathcal{X}\) are transformed to the higher-dimensional space \(D\), the vectors in \(D\) need to be normalized to train transformer-based architectures effectively [50]. We applied the layer normalization technique proposed by Ba et al. [50], shown in equation (13), in our experimentation to normalize the values in \(D\).
\[D^{\prime}_{ji}=\texttt{LayerNorm}(D)=\frac{D_{ji}-\mu_{j}}{\sqrt{\sigma_{j}^{2}+\epsilon}}\cdot\gamma+\beta \tag{13}\]

where \(\mu_{j}\) and \(\sigma_{j}\) are the mean and standard deviation of the \(j^{th}\) column of \(D\), \(i\) is the row number, and \(\epsilon\) is a small number that prevents division by zero when \(\sigma_{j}\) vanishes. \(\gamma\) and \(\beta\) are learnable parameters that help the neural network learn a dynamical normalization of the features. Note that wherever projection and normalization appear in pre-training, they act as in equations (12) and (13).

Once the values in \(D\) are normalized, they are added to the positional encoding with the help of a positional embedding function \(\texttt{PE}(.)\) [49], which maps the column index (\(j\)) of \(D\) to a vector of length \(H_{dim}\). Positional encoding helps preserve the ordering information, which can be utilized at the decoder. Then, the data is passed through a second layer normalization, and the output after the second layer normalization, that is, the hidden features, is expressed as follows:

\[\mathcal{H}_{.j}=\texttt{LayerNorm}(D^{\prime}_{.j}+\texttt{PE}(j)) \tag{14}\]

Then, the attention-centric block shown in Fig. 4 (big pink rectangular box) takes \(\mathcal{H}\) as input and repeats \(R_{num}\) times before producing the final representations \(\mathcal{E}\). It performs two primary operations, multi-head attention (\(\textbf{Multi-Attn}(.)\)) and feed-forward (\(\textbf{FeedFrd}(.)\)), with projection and layer normalization after every operation. \(\textbf{Multi-Attn}(.)\) is the self-attention layer of the transformer architecture [49] with multiple attention heads. Here, we employ the scaled dot-product attention mechanism to learn the latent interactions within the embeddings, which can be considered as transforming three sets of LCS measurements, i.e., the Query, Key, and Value representations. The attention score is generated from the Query and Key, and the Value vector is then weighted according to the attention score to generate the final embedding. The \(\textbf{FeedFrd}(.)\) operation enables an element-wise non-linear transformation of the incoming embeddings. We implemented the feed-forward operation using two fully connected layers coupled with a Gaussian Error Linear Unit (GELU) [51] activation function.

\[\mathcal{E}=\big\{\mathcal{H}=\texttt{LayerNorm}(\texttt{FeedFrd}(\texttt{LayerNorm}(\texttt{Proj}(\texttt{LayerNorm}(\texttt{Multi-Attn}(\mathcal{H}))))))\big\}^{R_{num}}\in\mathbb{R}^{H_{dim}\times M} \tag{15}\]

Finally, the embeddings produced by the attention-centric block are passed to the decoder to reconstruct the masked samples. The decoder implementation involves a projection layer, an activation-and-normalization layer, and a prediction head. At first, the projection layer projects the embeddings, \(\mathcal{E}\), into \(D\). Then the prediction layer (\(\textbf{Pred}(.)\)), followed by a \(\texttt{LayerNorm}(.)\) operation, reconstructs \(\hat{\mathcal{X}}\) from \(D\).

\[D=\texttt{Proj}(\texttt{GELU}(\mathcal{E})) \tag{16}\]
\[\hat{\mathcal{X}}=\texttt{LayerNorm}(\texttt{Pred}(D)) \tag{17}\]

Putting all the operations together, the _self-supervised pre-training phase_ transforms \(\mathcal{X}\) into \(\mathcal{E}\) and then into \(\hat{\mathcal{X}}\), and makes _Sens-BERT_ learn the characteristics of low-cost sensor measurements while doing these transformations, by minimizing the loss below.

\[\text{loss}_{\text{pre-train}}=\frac{1}{N}\sum_{i=1}^{N}MSE(\mathcal{X},\hat{\mathcal{X}})\]
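To make the pre-training pipeline of equations (11)-(17) concrete, the following is a minimal PyTorch sketch of an encoder-decoder of this kind; the module and variable names (e.g., `SensBERTLike`, `h_dim`) are our own illustrative choices rather than the authors' released implementation, and a standard `nn.TransformerEncoder` stands in for the attention-centric block of Fig. 4.

```python
# A minimal sketch (not the authors' code) of a Sens-BERT-like
# masked-reconstruction model, following equations (11)-(17).
import torch
import torch.nn as nn

class SensBERTLike(nn.Module):
    def __init__(self, k_vars=3, m_window=128, h_dim=64, n_heads=4, r_num=2):
        super().__init__()
        self.proj = nn.Linear(k_vars, h_dim)          # Proj(.), eq. (12)
        self.norm = nn.LayerNorm(h_dim)               # LayerNorm, eq. (13)
        # Learned positional embeddings over the M window positions, eq. (14)
        self.pos = nn.Embedding(m_window, h_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=h_dim, nhead=n_heads, dim_feedforward=4 * h_dim,
            activation="gelu", batch_first=True)
        # Attention-centric block repeated R_num times, eq. (15)
        self.encoder = nn.TransformerEncoder(layer, num_layers=r_num)
        self.decoder = nn.Sequential(                 # eqs. (16)-(17)
            nn.GELU(), nn.Linear(h_dim, h_dim),
            nn.LayerNorm(h_dim), nn.Linear(h_dim, k_vars))

    def forward(self, x):                             # x: (batch, M, K)
        pos_ids = torch.arange(x.size(1), device=x.device)
        h = self.norm(self.proj(x) + self.pos(pos_ids))
        e = self.encoder(h)                           # embeddings E
        return self.decoder(e), e                     # x_hat and E

# One pre-training step: reconstruct masked windows, loss as in eq. (7).
model = SensBERTLike()
x = torch.randn(8, 128, 3)                            # toy batch of windows
x_masked = x.clone()
x_masked[:, 40:60, :] = 0.0                           # a masked span
x_hat, _ = model(x_masked)
loss = nn.functional.mse_loss(x_hat, x)
loss.backward()
```

Note that the sketch keeps tensors in the usual `(batch, M, K)` layout rather than the \(H_{dim}\times M\) layout of equations (12)-(15); the two are equivalent up to a transpose.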
#### 3.3.2. Supervised fine-tuning phase

The trained _Sens-BERT_, with frozen parameters, can be utilized in the supervised fine-tuning phase, where a calibration-specific model can be trained with the limited paired data. The calibration-specific model is of user interest, and we implemented it by stacking three Gated Recurrent Unit (GRU) layers [52], one drop-out layer, and two fully connected layers, as shown in Fig. 5. Specifically, for given observations \(\{\mathcal{X},\mathcal{Y}_{t}\}\), where \(\mathcal{X}\in\mathbb{R}^{M\times K}\sim\{\mathcal{X}_{t}\}_{t=1}^{T}\) at time \(t\) is the window of low-cost sensor measurements of the \(K\) variables over the \(M\) time steps and \(\mathcal{Y}_{t}\in\mathbb{R}\) is the reference station measurement, we first extract the embeddings using the pre-trained _Sens-BERT_ encoder as

\[\mathcal{E}_{t}=f_{enc}(\mathcal{X}_{t})\in\mathbb{R}^{M\times d}, \tag{18}\]

where \(d\) is the embedding dimension. This embedding is then fed as input to the calibration model (\(f_{\theta}\)) to predict the corrected sensor measurement \(\hat{\mathcal{Y}}_{t}\) as

\[\hat{\mathcal{Y}}_{t}=f_{\theta}(\mathcal{E}_{t})=\texttt{FC}(\texttt{Dropout}(\texttt{GRU}(\mathcal{E}_{t}))). \tag{19}\]

Finally, the calibration-specific model is trained by minimizing the loss in equation (20), after which it can be used to obtain accurate pollution values from the LCS.

\[\text{loss}_{\text{fine-tune}}=\frac{1}{N}\sum_{t=1}^{N}MSE(\mathcal{Y}_{t},\hat{\mathcal{Y}}_{t}). \tag{20}\]

Therefore, _Sens-BERT_ learns the characteristics of LCS measurements in the pre-training phase, which can then be used to train a calibration model with limited paired data in the fine-tuning phase. Further, the trained _Sens-BERT_ can be transferred to other sensors needing a fresh calibration. The implementation details on the real-time data set are presented in Sec. 4.2, and the corresponding results are available in Sec. 4.3.

Figure 5: Supervised fine-tuning phase of the calibration model with trained _Sens-BERT_.
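As a sketch of the fine-tuning step in equations (18)-(20), the snippet below freezes a pre-trained encoder and trains a small GRU head on its embeddings; it reuses the illustrative `SensBERTLike` module from the previous sketch, and the head layout (three GRU layers, dropout, two fully connected layers) follows the description of Fig. 5, with hyperparameters chosen for illustration only.

```python
# A minimal fine-tuning sketch (assumes the SensBERTLike model above).
import torch
import torch.nn as nn

class CalibrationHead(nn.Module):
    """GRU-based calibration model f_theta of eq. (19)."""
    def __init__(self, h_dim=64):
        super().__init__()
        self.gru = nn.GRU(h_dim, h_dim, num_layers=3, batch_first=True)
        self.drop = nn.Dropout(0.1)
        self.fc = nn.Sequential(nn.Linear(h_dim, h_dim), nn.ReLU(),
                                nn.Linear(h_dim, 1))

    def forward(self, e):                      # e: (batch, M, h_dim)
        out, _ = self.gru(e)
        return self.fc(self.drop(out[:, -1, :])).squeeze(-1)

encoder = SensBERTLike()                       # pre-trained in the sketch above
for param in encoder.parameters():             # freeze Sens-BERT parameters
    param.requires_grad = False

head = CalibrationHead()
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

x, y = torch.randn(8, 128, 3), torch.randn(8)  # limited paired data (toy)
_, e = encoder(x)                              # embeddings E_t, eq. (18)
opt.zero_grad()
loss = nn.functional.mse_loss(head(e), y)      # eq. (20)
loss.backward()
opt.step()
```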
## 4. Experiments

### 4.1. Data sets

We evaluate our approach on the Community Air Sensor Network (CAIRSENSE) data set, an open repository developed by Feinberg et al. (Feinberg et al., 2017) for low-cost sensors, which is available at the US Environmental Protection Agency (USEPA) data website (Steinberg et al., 2018). The data set contains measurements from various low-cost sensors together with the time-corresponding temperature, relative humidity, and reference station observations. The measurements were taken in the state of Colorado at a one-minute resolution. The low-cost sensors considered from the CAIRSENSE data set are the five sensors evaluated in Table 1, each of which measures PM and works on the light-scattering principle:

* Shinyei
* AirAssure
* Speck
* TZOA
* OPCPMF

### 4.2. Implementation details

As discussed in Sec. 3.1, calibration models aim to transform the raw sensor signal (\(\mathcal{S}_{t}\)) into the corresponding reference station observations (\(\mathcal{Y}_{t}\)). Though this looks like a single-variable (\(\mathcal{S}_{t}\)) transformation, in practice more than one variable needs to be handled, since LCS performance is severely affected by the covariates discussed in Sec. 3.2. Therefore, we framed the data set (\(\mathcal{X}_{t}\)) by including all the variables (\(v_{k};\,k=1,2,\ldots,K\)) that affect the performance of the selected sensor as

\[\mathcal{X}_{t}=\begin{bmatrix}v_{1}&v_{2}&\ldots&v_{K}\end{bmatrix}_{N\times K}\]

where \(K\) is the number of variables, \(N\) is the number of samples, and \(v_{k}\in\mathbb{R}^{N\times 1}\) is the column vector of measurements of the \(k^{th}\) variable. In our experiments in Sec. 4.3, we consider the output of the LCS (\(\mathcal{S}\)), temperature (\(T\)), and relative humidity (\(Rh\)) as the variables in \(\mathcal{X}_{t}\); \(\mathcal{Y}_{t}\in\mathbb{R}^{N\times 1}\) are the time-corresponding reference station observations for \(\mathcal{X}_{t}\). Note that the value of \(K\) depends on the sensor that needs to be calibrated and on the influencing factors at the sensor deployment site.

_Elimination of outliers_: We apply threshold-based filtering to discard outlier samples. Since air pollutants follow skewed distributions (Steinberg et al., 2018), we used the Inter-Quartile Range (\(IQR=Q3-Q1\)) proximity rule to detect the outliers; that is, the data points that fall below \(Q1-1.5\times IQR\) or above \(Q3+1.5\times IQR\) are outliers, where \(Q1\) and \(Q3\) are the \(25^{th}\) and \(75^{th}\) percentiles of the data, respectively.

_Creation of overlapping sequences_: The measurements in \(\mathcal{X}_{t}\) show temporal dependency, which means that the current observations are greatly influenced by previous measurements (Steinberg et al., 2018). To feed the temporal context to _Sens-BERT_, we applied a sliding window of length (\(M\)) \(128\), with an overlap (\(O_{l}\)) of \(M-1\) samples between two successive windows, on \(\mathcal{X}_{t}\). That means that, at time \(t\), we slice the \(128\) samples in \(\mathcal{X}_{t}\) from \(t-M\) to \(t\) to train _Sens-BERT_, and we iterate this sliding window till the end of the samples in \(\mathcal{X}_{t}\). Here, \(M\) and \(O_{l}\) are experimental hyperparameters.

_Masking of samples_: To ensure that _Sens-BERT_ learns the characteristics of \(\mathcal{X}_{t}\) in pre-training, we masked sequences of samples in \(\mathcal{X}_{t}\) using the span masking technique (Steinberg et al., 2018) with a probability (\(P\)) of \(0.2\). The length of subsequent samples to mask (\(S_{l}\)) and \(P\) are experimental parameters.

_Training and testing_: We divide the data in \(\{\mathcal{X}_{t},\mathcal{Y}_{t}\}\) into training data (\(\{\mathcal{X}_{t}^{train},\mathcal{Y}_{t}^{train}\}\)) and testing data (\(\{\mathcal{X}_{t}^{test},\mathcal{Y}_{t}^{test}\}\)) in \(0.9\) and \(0.1\) ratios, respectively. We then pre-train _Sens-BERT_ with \(\mathcal{X}_{t}^{train}\), fine-tune the calibration model (\(f_{\theta}\)) and other reference models, such as the regression and RF models, with the paired train data, and test with the testing data. We use the Adam optimizer to update the parameters during training (Steinberg et al., 2018).
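The preprocessing steps above are straightforward to express in code; the following numpy sketch (our illustration, with hypothetical variable names) applies the IQR proximity rule, builds overlapping windows of length \(M\), and masks a random span in a fraction of the windows.

```python
# Sketch of the preprocessing of Sec. 4.2 (illustrative, not the authors' code).
import numpy as np

def iqr_filter(x):
    """Keep rows whose values all satisfy the IQR proximity rule."""
    q1, q3 = np.percentile(x, 25, axis=0), np.percentile(x, 75, axis=0)
    iqr = q3 - q1
    ok = np.all((x >= q1 - 1.5 * iqr) & (x <= q3 + 1.5 * iqr), axis=1)
    return x[ok]

def sliding_windows(x, m=128):
    """Overlapping windows with O_l = M - 1 (stride 1): shape (n-m+1, m, K)."""
    return np.stack([x[i:i + m] for i in range(len(x) - m + 1)])

def span_mask(windows, p=0.2, span=8, rng=np.random.default_rng(0)):
    """Zero out one random span of `span` samples in a fraction p of windows."""
    masked = windows.copy()
    for w in masked:
        if rng.random() < p:
            start = rng.integers(0, w.shape[0] - span)
            w[start:start + span] = 0.0
    return masked

x_t = np.random.randn(1000, 3)        # toy (N, K): sensor output, T, Rh
windows = sliding_windows(iqr_filter(x_t))
inputs = span_mask(windows)           # pre-training inputs; targets = windows
```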
### 4.3. Evaluations

In order to check the suitability of _Sens-BERT_ for calibrating the LCS, we performed the following three evaluations, which show that it outperforms the existing models and addresses the transferability and re-calibration of calibration models for low-cost sensors under reference-measurement scarcity.

1. _Evaluation of Sens-BERT in the calibration of LCS_ - to check whether _Sens-BERT_ calibrates the LCS or not.
2. _Evaluation of Sens-BERT under availability of limited paired data_ - to verify the effectiveness of _Sens-BERT_ under reference-measurement scarcity.
3. _Evaluation of Sens-BERT transferability to other sensors_ - to verify whether a _Sens-BERT_ pre-trained on one sensor can be transferred to other sensors.

#### 4.3.1. Evaluation of Sens-BERT in the calibration of LCS

Here, we first pre-train _Sens-BERT_ with \(\mathcal{X}_{t}^{train}\) by following the procedure discussed in Sec. 3.3.1. Then we fine-tune the calibration model \(f_{\theta}\), the GRU-based model of Sec. 3.3.2, with \(\{\mathcal{X}_{t}^{train},\mathcal{Y}_{t}^{train}\}\) by utilizing the embeddings produced by the pre-trained _Sens-BERT_. Since the regression and RF models do not accept a bunch of samples (\(\mathcal{X}\in\mathbb{R}^{M\times K}\sim\{\mathcal{X}_{t}\}\,\forall t\)) at a time, we flattened the sequence of samples in \(\mathcal{X}\) for all the variables into a single array and passed it to the regression and RF models. For example, if the input consists of \(M\) samples with \(K\) variables, the flattened array consists of \(M\times K\) values and is passed as a single sample to the regression and RF models. This flattening helps train the regression and RF models on the same data used for training _Sens-BERT_ and ensures fair performance comparisons. Once the training is finished, all models are tested with the test data.

The results of this experiment are presented in Table 1, where the outperforming models are indicated with \({}^{\star}\). Table 1 shows that _Sens-BERT_ outperforms the MLR and RF models for all the sensors. There is a maximum improvement of 44.4% in \(R^{2}\) and 53.3% in _RMSE_ for the OPCPMF sensor. At a minimum, there is a 12.5% improvement in \(R^{2}\) for Shinyei and a 40% improvement in \(RMSE\) for AirAssure sensors. Therefore, _Sens-BERT_ can be adopted for calibrating low-cost air quality sensors.

\begin{table}
\begin{tabular}{c c c c} \hline \hline **Sensor** & **Model** & \(R^{2}\uparrow\) & **RMSE** \(\downarrow\) \\ \hline \multirow{3}{*}{**Shinyei**} & MLR & 0.66 & 0.58 \\ & RF & 0.8 & 0.44 \\ & _Sens-BERT\({}^{\star}\)_ & 0.9 (12.5\%) & 0.23 (47.7\%) \\ \hline \multirow{3}{*}{**AirAssure**} & MLR & 0.63 & 0.61 \\ & RF & 0.75 & 0.5 \\ & _Sens-BERT\({}^{\star}\)_ & 0.86 (17.0\%) & 0.3 (40\%) \\ \hline \multirow{3}{*}{**Speck**} & MLR & 0.32 & 0.82 \\ & RF & 0.65 & 0.58 \\ & _Sens-BERT\({}^{\star}\)_ & 0.83 (27.6\%) & 0.32 (44.8\%) \\ \hline \multirow{3}{*}{**TZOA**} & MLR & 0.4 & 0.75 \\ & RF & 0.75 & 0.49 \\ & _Sens-BERT\({}^{\star}\)_ & 0.86 (14.6\%) & 0.29 (40.8\%) \\ \hline \multirow{3}{*}{**OPCPMF**} & MLR & 0.26 & 0.85 \\ & RF & 0.56 & 0.66 \\ & _Sens-BERT\({}^{\star}\)_ & 0.81 (44.4\%) & 0.31 (53.3\%) \\ \hline \hline \end{tabular}
\end{table} Table 1: Evaluation of the _Sens-BERT_ approach for different sensors in the CAIRSENSE data set. \({}^{\star}\) indicates the best-performing model. The relative improvement with _Sens-BERT_ compared to the best baseline (RF) is shown in %.
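As context for the baselines in Table 1, the sketch below illustrates the flattening procedure described above: each \((M,K)\) window is reshaped into a single \(M\times K\) feature vector before fitting the MLR and RF models (illustrative code with toy data; the hyperparameters are our assumptions, not the authors' exact configuration).

```python
# Baselines on flattened windows (a sketch with toy data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

windows = np.random.randn(500, 128, 3)      # toy (num_windows, M, K)
y = np.random.randn(500)                    # reference observation per window
X_flat = windows.reshape(len(windows), -1)  # each sample becomes M*K features

split = int(0.9 * len(y))                   # 0.9/0.1 train/test split
Xtr, Xte, ytr, yte = X_flat[:split], X_flat[split:], y[:split], y[split:]

for model in (LinearRegression(), RandomForestRegressor(n_estimators=100)):
    model.fit(Xtr, ytr)
    pred = model.predict(Xte)
    print(type(model).__name__,
          "R2 = %.2f" % r2_score(yte, pred),
          "RMSE = %.2f" % np.sqrt(mean_squared_error(yte, pred)))
```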
#### 4.3.2. Evaluation of Sens-BERT under availability of limited paired data

The initial assumption of this experiment is that there is a scarcity of reference station observations (\(\mathcal{Y}_{t}\)). In contrast to the experiment in Sec. 4.3.1, let us assume that only a few measurements in \(\mathcal{X}_{t}^{train}\) have corresponding reference station observations in \(\mathcal{Y}_{t}^{train}\). Then it is not possible to train the MLR and RF models on the entire train data, since they need paired data. However, _Sens-BERT_ has the advantage of learning the characteristics of \(\mathcal{X}_{t}\) without having \(\mathcal{Y}_{t}\) observations.

To implement the limited-paired-data experiments, we assume that only \(P\)% of the samples in \(\mathcal{X}_{t}^{train}\) have corresponding reference station observations. We then train the calibration models with that \(P\)% of the data and test with the testing data. We repeat this by considering other \(P\)% samples in \(\mathcal{X}_{t}^{train}\) until the complete data of \(\mathcal{X}_{t}^{train}\) is covered, as shown in Fig. 6. Finally, we average the \(R^{2}\) and \(RMSE\) values obtained in testing over all the sets, and we repeat this experiment with increasing values of \(P\); the value \(P=100\%\) corresponds to the experiment in Sec. 4.3.1. This procedure helps test the models' calibration efficiency with limited paired data: we pass only limited paired data, yet we expose the models to the complete range of values.

We expect the calibration accuracy of the MLR and RF models to increase as \(P\) increases, since these models learn better with more paired data. In the case of calibration with _Sens-BERT_, we expect it to maintain the calibration accuracy even as \(P\) decreases, since it learns the characteristics of \(\mathcal{X}_{t}\) in pre-training and does not require much paired data to calibrate the LCS. However, a too-low \(P\) value may also affect the calibration performance of _Sens-BERT_, since a neural network architecture needs a minimum amount of data. We tested the Shinyei and AirAssure sensors in this experiment, and the mean \(R^{2}\) and \(RMSE\) values with one standard deviation are presented in Fig. 7. Fig. 7 shows that, for Shinyei sensors, the _Sens-BERT_-based calibration outperforms the MLR and RF models. However, in the case of AirAssure sensors, _Sens-BERT_ requires at least 20% of the train data to fine-tune the GRU-based calibration model (\(f_{\theta}\)); after that, it outperforms both the MLR and RF models. Due to computational constraints, we experimented on two sensors, and we expect that _Sens-BERT_-based calibration can work for other sensors under reference-measurement scarcity.

#### 4.3.3. Evaluation of transferability of Sens-BERT to other sensors

In this experiment, we first adopt the pre-trained models mentioned in Sec. 4.3.1. Then we fine-tune the pre-trained models with other sensors' train data and test with the corresponding test data. For example, if a model is pre-trained on the data of sensor \(K1\), then it is fine-tuned with the train data of sensor \(K2\) and tested with the testing data of \(K2\). This process checks the transferability of the models trained on one sensor to another. The experimental results for training on the AirAssure sensor and testing on Shinyei sensors are presented in Table 2.

Figure 6: Splitting of the training data into chunks to test the calibration models under limited paired data.
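The chunking protocol of Fig. 6 can be sketched as follows: the paired training set is split into \(P\)% chunks, the calibration model is re-trained on each chunk, and the test-set scores are averaged (our illustration; `train_eval` is a hypothetical stand-in for training either a baseline or the fine-tuned head and scoring it).

```python
# Sketch of the limited-paired-data protocol of Sec. 4.3.2 / Fig. 6.
import numpy as np

def limited_paired_eval(X_train, y_train, X_test, y_test, p, train_eval):
    """Average test R^2/RMSE over disjoint chunks holding p% of the pairs.

    `train_eval(Xc, yc, X_test, y_test) -> (r2, rmse)` is a placeholder for
    training any calibration model on one chunk and scoring it on the test set.
    """
    n = len(y_train)
    chunk = max(1, int(n * p / 100))
    scores = []
    for start in range(0, n - chunk + 1, chunk):   # iterate over the chunks
        sl = slice(start, start + chunk)
        scores.append(train_eval(X_train[sl], y_train[sl], X_test, y_test))
    r2s, rmses = zip(*scores)
    return np.mean(r2s), np.std(r2s), np.mean(rmses), np.std(rmses)
```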
## 5. Conclusions and Future Scope

This work proposes _Sens-BERT_, a BERT-based deep learning calibration approach for calibrating low-cost sensors for air quality monitoring. _Sens-BERT_ outperforms the regression and random forest machine learning algorithms, which are extensively used in low-cost sensor calibration. We empirically validated the calibration performance of _Sens-BERT_ on different low-cost sensors from the CAIRSENSE data set, available on the US Environmental Protection Agency (USEPA) data website. _Sens-BERT_ enables re-calibration, i.e., the frequent calibration after sensor deployment needed to overcome calibration drift, with limited paired data; we tested the performance of _Sens-BERT_ under limited paired data by varying the amount of available paired training data (Sec. 4.3.2). Further, _Sens-BERT_ trained on one sensor can be transferred to other sensors. However, we tested the transferability only on sensors working on the same sensing principle; that is, since _Sens-BERT_ was trained on a sensor that works on an optical principle, we transferred the trained _Sens-BERT_ to other sensors that also work on the optical principle. Checking transferability between sensors that work on different sensing principles is our future work.

Figure 7: Mean \(R^{2}\) and _RMSE_ values of different calibration models, with one standard deviation, for varying amounts of training data.

In general, the performance of a BERT-based model improves if it learns a good representation of the data. On that note, we expect that training _Sens-BERT_ on a substantial amount of low-cost sensor measurements curated from various data sets and websites would make it more powerful. However, this needs more computational capacity, which is currently a constraint for us.
2309.12930
Quantifying nonclassicality of vacuum-one-photon superpositions via potentials for Bell nonlocality, quantum steering, and entanglement
Entanglement potentials are popular measures of the nonclassicality of single-mode optical fields. These potentials are defined by the amount of entanglement (measured by, e.g., the negativity or concurrence) of the two-mode field generated by mixing a given single-mode field with the vacuum on a balanced beam splitter. We generalize this concept to define the potentials for Bell nonlocality and quantum steering in specific measurement scenarios, in order to quantify single-mode nonclassicality in a more refined way. Thus, we can study the hierarchy of three types of potentials in close analogy to the well-known hierarchy of the corresponding two-mode quantum correlations. For clarity of our presentation, we focus on the analysis of the nonclassicality potentials for arbitrary vacuum-one-photon superpositions (VOPSs), corresponding to a photon-number qubit. We discuss experimentally feasible implementations for the generation of single-mode VOPS states, their mixing with the vacuum on a balanced beam splitter, and their two-mode Wigner-function reconstruction using homodyne tomography to determine the potentials. We analyze the effects of imperfections, including phase damping and unbalanced beam splitting on the quality of the reconstructed two-mode states and nonclassicality potentials. Although we focus on the analysis of VOPS states, single-mode potentials can also be applied to study the nonclassicality of qudits or continuous-variable systems.
Adam Miranowicz, Josef Kadlec, Karol Bartkiewicz, Antonín Černoch, Yueh-Nan Chen, Karel Lemr, Franco Nori
2023-09-22T15:28:37Z
http://arxiv.org/abs/2309.12930v1
Quantifying nonclassicality of vacuum-one-photon superpositions via potentials for Bell nonlocality, quantum steering, and entanglement ###### Abstract Entanglement potentials are popular measures of the nonclassicality of single-mode optical fields. These potentials are defined by the amount of entanglement (measured by, e.g., the negativity or concurrence) of the two-mode field generated by mixing a given single-mode field with the vacuum on a balanced beam splitter. We generalize this concept to define the potentials for Bell nonlocality and quantum steering in specific measurement scenarios, in order to quantify single-mode nonclassicality in a more refined way. Thus, we can study the hierarchy of three types of potentials in close analogy to the well-known hierarchy of the corresponding two-mode quantum correlations. For clarity of our presentation, we focus on the analysis of the nonclassicality potentials for arbitrary vacuum-one-photon superpositions (VOPSs), corresponding to a photon-number qubit. We discuss experimentally feasible implementations for the generation of single-mode VOPS states, their mixing with the vacuum on a balanced beam splitter, and their two-mode Wigner-function reconstruction using homodyne tomography to determine the potentials. We analyze the effects of imperfections, including phase damping and unbalanced beam splitting on the quality of the reconstructed two-mode states and nonclassicality potentials. Although we focus on the analysis of VOPS states, single-mode potentials can also be applied to study the nonclassicality of qudits or continuous-variable systems. ## I Introduction Nonclassical optical states (including entangled, squeezed, or photon antibunched) are the main resources for quantum technologies and quantum information processing with photons. Thus, testing and quantifying the nonclassicality (NC) of optical states has been attracting attention in quantum physics since the pioneering work of Kennard [1] on squeezed states published almost a century ago. It is worth noting that the first truly convincing experimental demonstration of the nonclassical character of photons was based on measuring photon antibunching [2]. Recent experimental optical demonstrations of quantum advantage include enhanced gravitational-wave detection with squeezed states [3; 4; 5], boson sampling based on entangled and squeezed states [6; 7; 8], and entanglement-based quantum cryptography [9]. In quantum optics, the state of an optical field is classified as nonclassical (or quantum) if its Glauber-Sudarshan \(P\) function [10; 11] is not positive semidefinite, so it is not a classical probability density [12; 13; 14]. This means that only coherent states and their statistical mixtures are considered classical. Extensive attention has been devoted to various forms of nonclassical correlations, with particular emphasis on their three distinct types: quantum entanglement (quantum inseparability) [15], Einstein-Podolsky-Rosen (EPR) steering (commonly referred to as quantum steering) [16; 17], and Bell nonlocality, which manifests through violations of Bell inequalities [18]. In this paper, we quantify the NC of single-qubit optical states, which are arbitrary vacuum and one-photon superpositions (VOPS), via measures of two-mode quantum correlations. 
Testing nonlocal quantum correlations of single-photon states, or, more specifically, of the states generated by mixing a VOPS with the vacuum on a balanced beam splitter (BS), has been attracting considerable interest, both theoretical (see, e.g., [19; 20; 21; 22]) and experimental (see, e.g., [23; 24; 25; 26]). Experimental tests of whether a given optical state is nonclassical are usually based on measuring NC witnesses corresponding to demonstrating violations of various classical inequalities [27; 28; 29; 30; 14]. Typical NC witnesses are not universal, which means that they are sufficient but not necessary criteria of NC. Universal witnesses of NC correspond to those criteria which are both sufficient and necessary for NC. Experimental implementations of such universal witnesses usually require applying complete quantum state tomography (QST). They can be used not only as NC tests but also as NC measures (if they satisfy some additional properties).

The most popular NC measures include: nonclassical distance [31], nonclassical depth [32; 33], and entanglement potentials [34]. Nonclassical depth is defined as the minimal amount of Gaussian noise which transforms a nonpositive-semidefinite \(P\) function into a positive one. It is equal to 1 for all non-Gaussian states, and, thus, it is not a useful measure to quantify the amount of the NC of VOPS states [35], although it was shown to be useful for quantifying the NC of Gaussian states, e.g., twin beams [36]. Moreover, nonclassical distance is defined as the distance (according to a chosen distance measure, including those of Bures or Kullback-Leibler) of a given nonclassical state to its closest classical state (CCS). Finding a CCS is usually very hard, even numerically. Of course, if one limits the set of classical states, then the nonclassical distance can be calculated effectively; e.g., it can be calculated for VOPS states if the vacuum is chosen as the CCS, which is reasonable because this is the only classical VOPS state [35]. Of course, the CCS for a given VOPS state might belong to a wider class of classical states. So, in general, finding a CCS could be difficult even for such simple VOPS states. Thus, we consider here only entanglement potentials and related NC quantifiers, which do not suffer from the above-mentioned problems of nonclassical depth and distance.

Non-universal NC witnesses, which are also often used for quantifying NC, include, to mention only a few: (i) quadrature squeezing variances; (ii) second-order correlation functions to quantify photon antibunching and the sub-Poissonian photon statistics; (iii) the nonclassical volume corresponding to the volume of the negative part of a Wigner function [37]; (iv) the Wigner distinguishability quantifier [38], which is defined in terms of the distinguishability of a given state from one with a positive Wigner function; and (v) quantifiers of two- and multimode quantum correlations, which are the main topic of this paper, can also be used for estimating the degree of NC [34; 35; 39; 40; 41]. We also mention operational approaches to quantify the NC of states (see, e.g., [42; 43; 44]), including the effect of a measurement setup. For example, the negativity of quantumness is defined as the minimum entanglement (quantified by the negativity) that is created between a given system and a measurement apparatus, assuming local measurements performed on subsystems [44].
The well-known hierarchy of standard measures of entanglement, EPR steering (in different measurement scenarios), and Bell nonlocality has recently been demonstrated for experimental polarization-qubit states, which were measured by applying complete [45; 46] or incomplete [47] QST. A closely related hierarchy of temporal quantum correlations, including temporal inseparability, temporal steering, and macrorealism, has also been studied [48]. Moreover, considerable research has been devoted to the hierarchies of measures or witnesses of NC, which are limited to specific types of quantum correlations. These include hierarchies of entanglement witnesses [49; 50], steering witnesses [22], Bell inequalities [18; 51]; as well as spatial [52] and spatiotemporal [29; 53] NC witnesses. In this paper we study theoretically _the potentials for two-qubit correlations to quantify the NC of single-qubit states defined as (coherent or incoherent) VOPSs_. We focus on analyzing quantifiers of single-qubit NC based on the above-mentioned three types of quantum correlations, i.e., entanglement, steering, and Bell nonlocality. More specifically, inspired by the concept of entanglement potentials introduced in Ref. [34] for quantifying the NC of single-mode optical states, _we introduce the potentials for EPR steering and Bell nonlocality. These potentials for two-mode quantum correlations can serve as the quantifiers of single-mode NC correlations._ In particular, they can also enable us to determine the hierarchy of single-qubit nonclassical correlations via the corresponding hierarchy of two-qubit nonclassical correlations. Compared to our former related works on quantifying NC of single-qubit optical states [35; 40], here we introduce _novel types of potentials for two-qubit correlations to study the hierarchy of single-qubit nonclassicality_, analogously to the hierarchy of two-qubit correlations, which we studied experimentally in Refs. [45; 46; 47] using polarization-based tomographic methods (see Refs. [54; 55] for comparative analyses). In this work, we use photon-number encoding of qubits, and, thus, we consider Wigner tomographic methods for their reconstruction. For example, a two-mode state (say \(\rho\)), which is generated by mixing a VOPS state with the vacuum at a balanced BS, can be reconstructed by homodyne tomography by locally mixing each mode of \(\rho\) with a high-intensity classical beam (i.e., a local oscillator), as shown in Fig. 1(a). The reconstructed Wigner functions and, thus, also two-mode density matrices enable the calculation of any quantifiers of two-qubit correlations and the corresponding potentials for single-mode VOPS states. The feasibility of this homodyne-QST-based approach has already been experimentally demonstrated, but only in a special case of the input single-photon Fock state for testing Bell nonlocality [25; 26] and EPR steering [26]. This paper is organized as follows: The concept of entanglement potentials is recalled and the potentials for EPR steering and Bell nonlocality are introduced and analyzed in detail in Sec. II assuming ideal experimental conditions. These concepts are generalized in Sec. III for realistic experimental conditions by including the effects of phase damping and unbalanced beam-splitting (corresponding to amplitude damping). The phase-space approach to describe nonclassicality using the Wigner and generalized Wigner functions (i.e., Cahill-Glauber functions) is given in Sec. IV based on the definitions summarized in Appendix A. 
Feasible experimental setups for the generation of the VOPS states and the tomographic reconstruction of the corresponding two-qubit states are described in Sec. V.1. In Sec. V.2 we briefly discuss how nonclassicality potentials can be defined and applied for higher- or even infinite-dimensional systems using NC witnesses rather than NC measures. We conclude in Sec. VI.

## II Ideal nonclassicality potentials

Here, by generalizing the idea of entanglement potentials of Asboth _et al._ [34], we define other NC potentials, i.e., those related to quantum steering and Bell nonlocality, and then we use them to classify the NC of single-qubit optical states. We first analyze these potentials under ideal conditions, assuming no damping and a perfectly balanced BS. We consider single-qubit optical states defined as (coherent or incoherent) superpositions of the vacuum \(|0\rangle\) and the single-photon Fock state \(|1\rangle\), which are (for brevity) referred to as VOPS states, and given by a general density matrix,

\[\sigma(p,x)=\sum_{m,n=0}^{1}\sigma_{mn}|m\rangle\langle n|=\left[\begin{array}{cc}1-p&x\\ x^{*}&p\end{array}\right], \tag{1}\]

where \(p\in[0,1]\) is the probability of measuring a single photon, \(p=\langle 1|\sigma|1\rangle\), and \(x\) is a coherence parameter satisfying \(|x|\in[0,\sqrt{p(1-p)}]\). When referring to the VOPS encoding of qubit states, the only classical state of \(\sigma(p,x)\) is for \(p=0\), corresponding to the vacuum.

### Entanglement potentials for a single qubit

According to the approach of Ref. [34], a NC measure of a single optical qubit \(\sigma\) can be defined by the entanglement of the output state \(\rho_{\rm out}\) of an auxiliary lossless balanced beam splitter (BS) with the state \(\sigma\) and the vacuum \(|0\rangle\) at the inputs [see Fig. 1(a)], i.e.,

\[\rho_{\rm out}=U_{\rm BS}(\sigma\otimes|0\rangle\langle 0|)U_{\rm BS}^{\dagger}, \tag{2}\]

given in terms of the unitary transformation \(U_{\rm BS}=\exp(-iH\theta)\), with the Hamiltonian \(H=\frac{1}{2}i(a_{1}^{\dagger}a_{2}-a_{1}a_{2}^{\dagger})\), where \(a_{1,2}\) (\(a_{1,2}^{\dagger}\)) are the annihilation (creation) operators of the input modes and, for simplicity, we set \(\hbar=1\). Moreover, the BS parameter \(\theta\) defines the reflection and transmission coefficients as \(r=\sin(\theta/2)\) and \(t=\cos(\theta/2)\), respectively. First, we set \(\theta=\pi/2\) for a balanced BS.

Linear transformations (as discussed in greater detail in Sec. IV) do not change the global NC of an optical field. Thus, the output state \(\rho_{\rm out}\) is entangled if and only if the input state \(\sigma\) is nonclassical. Let us recall that coherent states (so infinite-dimensional states, except the vacuum) and their statistical mixtures are the only classical states. Thus, an arbitrary finite-dimensional single-mode optical state (except the vacuum) is nonclassical, and if it is mixed with the vacuum on a BS, then the output two-mode state is entangled. In a special case, by mixing an arbitrary single-qubit optical state \(\sigma(p,x)\) with the vacuum on a perfect balanced BS and assuming no phase and amplitude dissipation, the following state \(\rho_{\rm out}\equiv\rho(p,x)\) is generated:

\[\rho(p,x)=\left[\begin{array}{cccc}1-p&-\frac{1}{\sqrt{2}}x&\frac{1}{\sqrt{2}}x&0\\ -\frac{1}{\sqrt{2}}x^{*}&\frac{1}{2}p&-\frac{1}{2}p&0\\ \frac{1}{\sqrt{2}}x^{*}&-\frac{1}{2}p&\frac{1}{2}p&0\\ 0&0&0&0\end{array}\right]. \tag{3}\]
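To see how equation (3) follows from equation (2), the numpy sketch below (our illustrative check, not part of the original paper) builds the beam-splitter unitary in the two-mode Fock space truncated at one photon per mode and applies it to \(\sigma(p,x)\otimes|0\rangle\langle 0|\), with the basis ordered as \(\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}\).

```python
# Sketch: generate rho(p, x) of Eq. (3) from sigma(p, x) via Eq. (2).
import numpy as np
from scipy.linalg import expm

p, x = 0.5, 0.37
a = np.array([[0.0, 1.0], [0.0, 0.0]])           # annihilation op, dim-2 Fock
I2 = np.eye(2)
a1, a2 = np.kron(a, I2), np.kron(I2, a)          # two-mode operators

theta = np.pi / 2                                # balanced BS
# U = exp(-i H theta) with H = (i/2)(a1^dag a2 - a1 a2^dag)
U = expm(0.5 * theta * (a1.conj().T @ a2 - a1 @ a2.conj().T))

sigma = np.array([[1 - p, x], [np.conj(x), p]])  # Eq. (1)
vac = np.diag([1.0, 0.0])                        # vacuum |0><0|
rho = U @ np.kron(sigma, vac) @ U.conj().T       # Eq. (2)
print(np.round(rho.real, 3))   # matches Eq. (3), e.g. rho[0, 2] = x/sqrt(2)
```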
To quantify the NC of \(\sigma\), we consider the entanglement potential [34]:

\[{\rm CP}(\sigma)=E(\rho_{\rm out}), \tag{4}\]

which is defined by, e.g., the Wootters concurrence [56]:

\[C(\rho)=\Theta\Big(2\max_{j}\lambda_{j}-\sum_{j}\lambda_{j}\Big), \tag{5}\]

given in terms of \(\Theta(x)\equiv\max(x,0)\) and \(\lambda_{j}^{2}=\text{eig}[\rho_{\text{out}}(\sigma_{2}\otimes\sigma_{2})\rho_{\text{out}}^{*}(\sigma_{2}\otimes\sigma_{2})]_{j}\), where \(\sigma_{2}\) is a Pauli operator and the asterisk denotes complex conjugation. The concurrence is a monotone of the entanglement of formation [15].

Figure 1: (a) Scheme for converting the nonclassicality of a vacuum-one-photon superposition (VOPS) state \(\sigma(p,x)\) into a two-mode state \(\rho(p,x)\) exhibiting entanglement, and in some cases EPR steering and Bell nonlocality; \(\rho(p,x)\) can be reconstructed by homodyne state tomography, where LO' and LO'' stand for local oscillators and BS denotes a beam splitter. (b) Quantum scissors device for the nonlocal generation of arbitrary VOPS, where \(|{\rm in}\rangle=|\alpha\rangle\) is a coherent input state, which is truncated to a qubit state \(|{\rm out}\rangle\); \(D_{i}\) are single-photon photodetectors, and PS denotes a phase shift (0 or \(\pi\)), which is applied with a specific probability to decohere the state \(|{\rm out}\rangle\), i.e., to decrease its coherence factor \(x=\sqrt{p(1-p)}\) to a desired value.

As shown in [35], the concurrence of \(\rho(p,x)\) is given simply by the single-photon probability \(p\) and is independent of the coherence parameter \(x\),

\[{\rm CP}[\sigma(p,x)]=E[\rho(p,x)]=p. \tag{6}\]

Thus, a single-qubit state \(\sigma(p,x)\) has a nonzero entanglement potential, \({\rm CP}[\sigma(p,x)]>0\), for any \(p>0\) and \(|x|\in[0,\sqrt{p(1-p)}]\), i.e., for all allowed values of the parameters except \(p=x=0\).

In addition to the concurrence, one can apply the negativity, which is arguably the most popular measure of entanglement. The negativity for two qubits in a state \(\rho_{\text{out}}\) is defined by [15]:

\[N(\rho_{\text{out}})=\max\big[0,-2\min\text{eig}(\rho_{\text{out}}^{\Gamma})\big], \tag{7}\]

where \(\rho_{\text{out}}^{\Gamma}\) is the partial transpose of \(\rho_{\text{out}}\) with respect to either qubit. Thus, the negativity potential (NP) of a single-qubit state \(\sigma\) is defined as the negativity \(N\) of the two-qubit output state \(\rho_{\text{out}}\), i.e.,

\[\text{NP}(\sigma)=N(\rho_{\text{out}}). \tag{8}\]

The explicit formula for the NP of an arbitrary single-qubit state \(\sigma(p,x)\) reads [35]:

\[\text{NP}[\sigma(p,x)]=\frac{1}{3}\left[2\text{Re}\left(\sqrt[3]{2\sqrt{a_{1}}+2a_{2}}\right)+p-2\right], \tag{9}\]

where

\[a_{1}=a_{2}^{2}-2\big[5(p-1)p+6|x|^{2}+2\big]^{3},\qquad a_{2}=14p^{3}-21p^{2}+15p+9(p-2)|x|^{2}-4, \tag{10}\]

which depends, in general, on the absolute value of the coherence parameter, \(|x|\), which is not the case for the concurrence potential. Note that \(\text{NP}[\sigma(p,x)]>0\) iff \(\text{CP}[\sigma(p,x)]>0\), because the negativity and concurrence are good measures of two-qubit entanglement. For some classes of states, including pure states and Werner states, \(\text{NP}[\sigma(p,x)]\) and \(\text{CP}[\sigma(p,x)]\) are the same, although they are different in general.
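As a numerical cross-check of equations (5)-(9), the following sketch (continuing the previous snippet, so `rho`, `p`, and `x` are assumed to be defined there) evaluates the concurrence and negativity potentials of \(\sigma(p,x)\) directly from \(\rho(p,x)\).

```python
# Sketch: concurrence potential CP (Eqs. 5-6) and negativity potential NP (Eqs. 7-8).
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit state, Eq. (5)."""
    sy = np.array([[0, -1j], [1j, 0]])
    R = rho @ np.kron(sy, sy) @ rho.conj() @ np.kron(sy, sy)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, 2 * lam[0] - lam.sum())

def negativity(rho):
    """Negativity of a two-qubit state, Eq. (7)."""
    # partial transpose with respect to the second qubit
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return max(0.0, -2 * np.linalg.eigvalsh(pt).min())

print("CP =", concurrence(rho), "(should equal p =", p, "by Eq. (6))")
print("NP =", negativity(rho))
```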
An entanglement potential can also be defined via the relative entropy of entanglement (REE) [15], as studied in, e.g., Refs. [34; 35; 40]. Unfortunately, no analytical formulas are known for the REE of \(\rho(p,x)\) for general parameters \(p\) and \(x\), so here we limit our study of entanglement potentials to the CP and NP.

### Ideal steering potentials for a single qubit

EPR steering is a type of quantum nonlocality between two parties (qubits or modes) that is, in general, distinct from both entanglement and Bell nonlocality. In the original meaning, it describes the ability of one observer to influence another party's (qubit's) state via local measurements on its own system (qubit). According to Ref. [57], EPR steering arises from the quantum correlations exhibited by quantum systems, enabling the verification of entanglement even when a complete characterization of one of the subsystems is lacking. Thus, EPR steering can be interpreted as a stronger form of entanglement, one that can be detected even with untrusted detectors in one subsystem. Specifically, considering the setup shown in Fig. 1(a), this interpretation could correspond to assuming low- (high-) quality detectors used by, e.g., Alice (Bob) for their homodyne QST. Inspired by this interpretation, applications of quantum steering have been found in quantum cryptography [17] and enhanced metrology [58; 59]. Moreover, temporal [60; 61] and spatiotemporal [62] analogues of standard (spatial) quantum steering have also been found and applied in quantum cryptography [63], as well as for quantifying non-Markovianity [64], or witnessing quantum scrambling [65] and nonclassical correlations in quantum networks [62].

Here, inspired by the entanglement potential of Ref. [34] for a single optical mode, defined via a two-mode entanglement measure, we propose to define a steering potential for a single optical qubit (or mode), quantified by a measure of standard two-qubit (or two-mode) EPR steering. In the following, for simplicity, we consider the standard Costa-Angelo measure of steering [66], for which an analytical formula can be found.

\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Regime & Entanglement & Steering & Bell nonlocality & Single-photon & Coherence & Examples of states \\ & potential & potential in three-MS & potential & probability & parameter & shown in figures \\ \hline I & \(\text{NP}=0\) & \(\text{SP}=0\) & \(\text{BP}=0\) & \(p=0\) & \(x=0\) & 8(a) \\ II & \(\text{NP}>0\) & \(\text{SP}=0\) & \(\text{BP}=0\) & \(p\in(0,\frac{2}{3}]\) & \(|x|\in(0,x_{S}]\) & 5(a), 6(a), 8(b), 8(c) \\ III & \(\text{NP}>0\) & \(\text{SP}>0\) & \(\text{BP}=0\) & \(p\in(0,\frac{2}{3}]\) & \(|x|\in(x_{S},x_{B}]\) & 5(b), 5(d), 6(b) \\ & & & & \& \(p\in(\frac{2}{3},\frac{1}{\sqrt{2}}]\) & \(|x|\in(0,x_{B}]\) & 8(e) \\ IV & \(\text{NP}>0\) & \(\text{SP}>0\) & \(\text{BP}>0\) & \(p\in(0,\frac{1}{\sqrt{2}}]\) & \(|x|\in(x_{B},x_{\text{max}}]\) & 5(c), 8(d) \\ & & & & \(p\in(\frac{1}{\sqrt{2}},1]\) & \(|x|\in(0,x_{\text{max}}]\) & 5(e), 5(f), 8(f) \\ \hline \hline \end{tabular}
\end{table} Table 1: Four regimes of vanishing or nonvanishing two-mode nonclassicality correlation potentials revealing the hierarchy of the classes of single-qubit correlations in VOPS states, \(\sigma(p,x)\), depending on the single-photon probability \(p\) and the coherence parameter \(x\). Here, \(x_{\text{max}}=\sqrt{p(1-p)}\), while \(x_{S}\) and \(x_{B}\) are given in Eqs. (16) and (24), respectively.
Of course, other measures of steering can also be used in defining steering potentials, including the steerable weight [67] and/or the steering robustness [68]; however, such definitions would be based on numerical calculations for general single-qubit states, except for some simple classes of states.

The steering potential quantified by the Costa-Angelo measure of steering [66] in a three-measurement scenario (three-MS), corresponding to measuring the three Pauli operators on the qubits of both parties, can be defined as

\[\mathrm{SP}^{\prime}(\sigma)=S_{\mathrm{CA}}^{(3)}(\rho)=\frac{\Theta(\sqrt{\mathrm{Tr}R}-1)}{\sqrt{3}-1}, \tag{11}\]

given in terms of the correlation matrix \(R=T^{T}T\), where the elements of the matrix \(T\) are the two-qubit Stokes parameters, \(T_{ij}=\mathrm{Tr}[\rho(\sigma_{i}\otimes\sigma_{j})]\). Note that the correlation matrix \(R\), and thus \(S_{\mathrm{CA}}^{(3)}\), can be determined even experimentally without full QST, as recently demonstrated in [47]. To show this explicitly, we recall the Bloch representation of a general two-qubit state \(\rho\):

\[\rho=\frac{1}{4}\Big(I\otimes I+\mathbf{u}\cdot\mathbf{\sigma}\otimes I+I\otimes\mathbf{v}\cdot\mathbf{\sigma}+\sum_{i,j=1}^{3}T_{ij}\,\sigma_{i}\otimes\sigma_{j}\Big), \tag{12}\]

where the Bloch vectors \(\mathbf{u}\) and \(\mathbf{v}\) have the elements \(u_{i}=\mathrm{Tr}[\rho(\sigma_{i}\otimes I)]\) and \(v_{i}=\mathrm{Tr}[\rho(I\otimes\sigma_{i})]\), respectively, and \(I\) is the single-qubit identity operator. Thus, the reconstruction of the correlation matrix \(R=T^{T}T\) of \(\rho\), without reconstructing the Bloch vectors \(\mathbf{u}\) and \(\mathbf{v}\), enables the calculation of the steering and nonlocality measures and, thus, of the corresponding potentials discussed below.

The steering potential can be defined in a modified way:

\[\mathrm{SP}(\sigma)=S^{(3)}(\rho)=\sqrt{\tfrac{1}{2}\Theta(\mathrm{Tr}R-1)}, \tag{13}\]

which corresponds to the three-MS steering measure applied in Refs. [47; 69; 70].

Figure 5: Wigner functions \(W(\alpha)\) for single-qubit states \(\sigma(p,x)\), for chosen values of the single-photon probability \(p\) and the coherence parameter \(x\), showing the hierarchy of potentials for quantum correlations, which are summarized in Table 1. Wigner functions for: (a) \(\sigma(0.5,0)\), (b) \(\sigma(0.5,0.37)\), (c) \(\sigma(0.5,0.5)\), (d) \(\sigma(0.7,0)\), (e) \(\sigma[p,\sqrt{p(1-p)}]\) with \(p=0.7\), and (f) \(\sigma(1,0)\). The darker the red, the larger the positive values of the Wigner functions; the darker the blue, the more negative the values; white color corresponds to \(W(\alpha)=0\). \(W(\alpha)\) varies in the ranges: (a) \([0,0.23]\), (b) \([-0.14,0.50]\), (c) \([-0.23,0.60]\), (d) \([-0.25,0.25]\), (e) \([-0.39,0.54]\), and (f) \([-0.64,0.28]\). The negative regions (marked by blue) of the Wigner functions clearly show the nonclassicality of the represented states. Note that the state shown in (a) is nonclassical, although its Wigner function is nonnegative. We did not show here the trivial case of the Gaussian Wigner function for the vacuum state, for which \(\mathrm{CP=SP=BP=0}\).
Note that \(S_{\rm CA}^{(3)}\) and \(S^{(3)}\) are both measures of the violation of the steering inequality derived by Cavalcanti, Jones, Wiseman, and Reid (CJWR) in the three-MS [71]. The two steering potentials are monotonically related for arbitrary single-qubit states \(\sigma\) by

\[\mathrm{SP}^{\prime}(\sigma)=\frac{\sqrt{2[\mathrm{SP}(\sigma)]^{2}+1}-1}{\sqrt{3}-1}\leq\mathrm{SP}(\sigma), \tag{14}\]

in analogy to the corresponding relation for the steering measures [47]. In this paper we focus on \(\mathrm{SP}(\sigma)\) because it reduces to the entanglement potential for any single-qubit pure states, \(\sigma[p,\sqrt{p(1-p)}]\equiv|\psi\rangle\langle\psi|\). On the other hand, \(\mathrm{SP}^{\prime}(\sigma)\) calculated for experimentally reconstructed states usually gives better agreement with theoretical predictions (see Ref. [47] for a comparison of experimentally determined \(S_{\rm CA}^{(3)}\) and \(S^{(3)}\) for Werner-like states). Thus, we present both definitions.

We find that the steering potential in the three-MS for a single-qubit state \(\sigma(p,x)\) is given by

\[\mathrm{SP}(\sigma)=\sqrt{\Theta(3p^{2}-2p+2|x|^{2})}, \tag{15}\]

clearly depending on the coherence parameter \(|x|\), which is not the case for \(\mathrm{CP}(\sigma)\). Thus, we find that a given state \(\sigma(p,x)\) has a nonzero steering potential, \(\mathrm{SP}[\sigma(p,x)]>0\), for: (i) \(p\in(0,2/3]\) if \(|x|\in(x_{S}(p),\sqrt{p(1-p)}]\), where

\[x_{S}(p)=\sqrt{p(1-3p/2)}, \tag{16}\]

and (ii) \(p\in(2/3,1]\) if \(|x|\in[0,\sqrt{p(1-p)}]\).

To explain the rapid decrease and vanishing of the SP introduced by even a slight decoherence of a pure state \(\sigma[p,\sqrt{p(1-p)}]\) for small \(p\), as seen in Figs. 2(c), 3(a), and 3(b), we introduce a decoherence factor \(\kappa\in[0,1]\), such that \(x=\kappa\sqrt{p(1-p)}\). By analyzing Eq. (15), one readily finds that the decoherence factor should satisfy

\[\kappa>\kappa_{0}=\frac{2-3p}{2-2p}, \tag{17}\]

to guarantee that \(\mathrm{SP}(\sigma)>0\). Thus, we see that the steering potential is nonzero for any value of \(\kappa\) (and so of \(x\)) if \(p>2/3\). However, if \(p=0.1\) (\(0.2\)), then \(\mathrm{SP}(\sigma)>0\) for \(\kappa>0.94\) (\(>0.875\)). This clearly explains the rapid disappearance of the steering potential shown by the thin curve on the left-hand side of Fig. 2(c). Moreover, an even more rapid loss of the nonlocality potential can be seen in Fig. 2(e), because a vanishing SP implies a vanishing BP.

Figure 6: Cahill-Glauber function \(W^{(1/2)}(\alpha)\) for the single-qubit states (a) \(\sigma(0.5,0)\) and (b) \(\sigma(0.5,0.37)\), revealing the hierarchy of NC potentials. The states are the same as in the corresponding panels (a,b) of Fig. 5 for the Wigner function \(W^{(0)}(\alpha)\). \(W^{(1/2)}(\alpha)\) changes over the ranges: (a) \([-1.27,0.57]\) and (b) \([-1.49,1.17]\), which correspond, respectively, to the ranges (a) \([0,0.23]\) and (b) \([-0.14,0.50]\) for \(W^{(0)}(\alpha)\). The negative values of \(W^{(1/2)}(\alpha)\) clearly show the NC character of the states, even if the corresponding \(W^{(0)}(\alpha)\) is nonnegative in the entire phase space.

Figure 7: Marginal distributions of two-mode Wigner functions \(W(\alpha_{1},\alpha_{2})\), i.e.: (a) \(W(X_{1},Y_{1})\), (b) \(W(X_{2},Y_{2})\), (c) \(W(X_{1},X_{2})\), and (d) \(W(Y_{1},Y_{2})\), where \(X_{i}=\mathrm{Re}(\alpha_{i})\) and \(Y_{i}=\mathrm{Im}(\alpha_{i})\), for single-photon two-mode states \(\rho(p,x)\) assuming \(p=0.5\) and \(x=0.37\). This state is steerable in the three-MS, but Bell local (so unsteerable in the two-MS), and it corresponds to \(\sigma(p,x)\) shown in Fig. 5(b). The maximum values of these non-negative Wigner functions are: (a,b) \(0.50\), (c) \(0.67\), and (d) \(0.39\).
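Continuing the earlier numerical sketch (with `rho`, `p`, and `x` from the beam-splitter snippet), the code below builds the correlation matrix \(R=T^{T}T\) from the Pauli Stokes parameters and evaluates the steering potentials of equations (11), (13), and (15).

```python
# Sketch: steering potentials SP and SP' from the correlation matrix R.
import numpy as np

paulis = [np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]])]

T = np.array([[np.trace(rho @ np.kron(si, sj)).real
               for sj in paulis] for si in paulis])   # Stokes parameters T_ij
R = T.T @ T
trR = np.trace(R)

SP = np.sqrt(max(0.0, 0.5 * (trR - 1)))               # Eq. (13)
SPp = max(0.0, np.sqrt(trR) - 1) / (np.sqrt(3) - 1)   # Eq. (11)
print("SP  =", SP, " vs Eq. (15):",
      np.sqrt(max(0.0, 3 * p**2 - 2 * p + 2 * abs(x)**2)))
print("SP' =", SPp)
```

For the state \(\rho(0.5,0.37)\), both expressions give \(\mathrm{SP}\approx 0.154\), placing it in regime III of Table 1, as stated in the caption of Fig. 7.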
Figure 7: Marginal distributions of two-mode Wigner functions \(W(\alpha_{1},\alpha_{2})\), i.e.: (a) \(W(X_{1},Y_{1})\), (b) \(W(X_{2},Y_{2})\), (c) \(W(X_{1},X_{2})\), and (d) \(W(Y_{1},Y_{2})\), where \(X_{i}=\mathrm{Re}(\alpha_{i})\) and \(Y_{i}=\mathrm{Im}(\alpha_{i})\), for single-photon two-mode states \(\rho(p,x)\) assuming \(p=0.5\) and \(x=0.37\). This state is steerable in the three-MS, but Bell local (so unsteerable in the two-MS), and it corresponds to \(\sigma(p,x)\) shown in Fig. 5(b). The maximum values of these non-negative Wigner functions are: (a,b) \(0.50\), (c) \(0.67\), and (d) \(0.39\). ### Bell nonlocality potentials for a single qubit Single-qubit Bell nonlocality potentials can be introduced via Bell nonlocality measures of a given two-qubit state \(\rho\) quantifying the violation of the Bell inequality in the Clauser-Horne-Shimony-Holt form (denoted as Bell-CHSH) [72]: \[\left|\left\langle\mathcal{B}\right\rangle_{\rho}\right|\equiv\left|\left\langle \boldsymbol{a}\boldsymbol{\cdot}\boldsymbol{\sigma}\otimes(\boldsymbol{b}+ \boldsymbol{b}^{\prime})\boldsymbol{\cdot}\boldsymbol{\sigma}+\boldsymbol{a}^{ \prime}\boldsymbol{\cdot}\boldsymbol{\sigma}\otimes(\boldsymbol{b}-\boldsymbol {b}^{\prime})\boldsymbol{\cdot}\boldsymbol{\sigma}\right\rangle_{\rho}\right| \leq 2, \tag{18}\] given in terms of the Bell-CHSH operator \(\mathcal{B}\), where \(\boldsymbol{a},\boldsymbol{a}^{\prime},\boldsymbol{b},\boldsymbol{b}^{\prime} \in\mathbb{R}^{3}\) are unit vectors describing measurement settings. As described by Horodecki _et al._[73], the maximum possible violation of the Bell-CHSH inequality in Eq. (18) considered over all measurement settings, can be used as a Bell nonlocality measure, i.e., \(\max_{\nu}\langle\mathcal{B}\rangle_{\rho}=2\sqrt{\mathcal{M}(\rho)}\), where \(\mathcal{M}(\rho)\) is the sum of the two largest eigenvalues of the correlation matrix \(R(\rho)\). The Bell-CHSH inequality is satisfied if and only if \(\mathcal{M}(\rho)\leq 1\). To make our comparison of various types of quantum correlations consistent, as based on measures and potentials defined in the range [0,1], the Bell nonlocality measure of Ref. [73] is often rescaled as \(B(\rho)=\sqrt{\Theta[\mathcal{M}(\rho)-1]}\) (see, e.g., Refs. [74; 75; 70]). Thus, we define a Bell nonlocality potential as \[\mathrm{BP}(\sigma) = B(\rho)=\sqrt{\Theta[\mathcal{M}(\rho)-1]} \tag{19}\] \[= \sqrt{\Theta\big{\{}\mathrm{Tr}R-\min[\mathrm{eig}(R)]-1\big{\}}}.\] This measure is monotonically related to the Costa-Angelo measure of steering [66] defined in the two-measurement scenario (two-MS), which corresponds to measuring two Pauli operators on qubits of both parties [66]. Specifically, the related nonlocality potential reads \[\mathrm{BP}^{\prime}(\sigma)=S_{\mathrm{CA}}^{(2)}(\rho)=\frac{\Theta\big{\{} \sqrt{\mathrm{Tr}R-\min[\mathrm{eig}(R)]}-1\big{\}}}{\sqrt{2}-1}, \tag{20}\] which is simply related to \(\mathrm{BP}(\sigma)\) as \[\mathrm{BP}^{\prime}(\sigma)=\frac{\sqrt{[\mathrm{BP}(\sigma)]^{2}+1}-1}{ \sqrt{2}-1}\leq\mathrm{BP}(\sigma). \tag{21}\] Figure 8: Angular-momentum probability surfaces (AMPSs) for chosen two-mode states \(\rho_{qr}(p,x)\), given in Eq. (30) corresponding to a qutrit, revealing the hierarchy of the ideal (a-f) and lossy (g,h) potentials for entanglement (CP and CP\({}_{\textrm{gr}}\)), steering (SP and SP\({}_{qr}\)), and Bell nonlocality (BP and BP\({}_{qr}\)), respectively. 
In the ideal cases the BS is balanced (\(r=1/\sqrt{2}\)) and no phase damping occurs (\(q=0\)); while for the non-ideal cases we set: (g) phase damping with \(q=1/2\) for a balanced BS; and (h) unbalanced BS with \(r=1/4\) and no phase damping. The chosen two-mode states are: (a) \(\rho(0,0)=\left|0\right\rangle\left\langle 0\right|\), (b) \(\rho(0.1,0)\), (c) \(\rho(\frac{1}{2},0)\), (d) \(\rho(\frac{1}{2},\frac{1}{2})\), (e) \(\rho(0.7,0)\), and (f,g,h) \(\rho(1,0)=\left|1\right\rangle\left\langle 1\right|\). In (h), a more precise value of the three potentials is \(0.4841\). Note that \(B,B^{\prime},\mathrm{BP},\mathrm{BP}^{\prime}\in[0,1]\) for arbitrary two-qubit states. We find that \[\min[\mathrm{eig}(R)]=\frac{1}{2}\Big(1+p(5p-4)+4|x|^{2}-(1-p)\sqrt{(1-3p)^{2}+8|x|^{2}}\Big),\qquad\mathrm{Tr}R=1-4p+6p^{2}+4|x|^{2}, \tag{22}\] so the Bell nonlocality potential BP for a general state \(\sigma(p,x)\) with \(p\in[0,1]\) and \(x\in[0,\sqrt{p(1-p)}]\) becomes \[\mathrm{BP}[\sigma(p,x)]=\Big\{\Theta\Big[\frac{1}{2}\big(7p^{2}+(1-p)\sqrt{(1-3p)^{2}+8|x|^{2}}-4p+4|x|^{2}-1\big)\Big]\Big\}^{1/2}. \tag{23}\] We find that a given state \(\sigma(p,x)\) has a nonzero nonlocality potential, \(\mathrm{BP}[\sigma(p,x)]>0\), for: (i) \(p\in(0,\frac{1}{\sqrt{2}}]\) with \(|x|\in(x_{B}(p),\sqrt{p(1-p)}]\), where \[x_{B}(p)=\frac{1}{\sqrt{2}}\sqrt{1+p-3p^{2}-(1-p)\sqrt{1-p^{2}}}, \tag{24}\] and (ii) \(p\in(\frac{1}{\sqrt{2}},1]\) with \(|x|\in[0,\sqrt{p(1-p)}]\). ### Hierarchy of nonclassicality potentials Single-qubit correlations, as quantified by the NC potentials, satisfy the following hierarchy: \[\mathrm{BP}(\sigma)\leq\mathrm{SP}(\sigma)\leq\mathrm{NP}(\sigma)\leq\mathrm{CP}(\sigma), \tag{25}\] for an arbitrary state \(\sigma(p,x)\). This hierarchy is in close analogy to that for the corresponding two-qubit correlation measures (see, e.g., [47]). For single-qubit pure states \(\sigma(p,\sqrt{p(1-p)})=|\psi\rangle\langle\psi|\), the potentials become the same, \[\mathrm{BP}(|\psi\rangle)=\mathrm{SP}(|\psi\rangle)=\mathrm{NP}(|\psi\rangle)=\mathrm{CP}(|\psi\rangle)=p. \tag{26}\] Figure 4(a) shows the hierarchy of ideal NC potentials, i.e., assuming a lossless and balanced BS. In addition to the vacuum (for \(p=x=0\)), which is the only separable VOPS state, this hierarchy includes the states with the potentials for: (i) non-steerable entangled states (corresponding to the red region), (ii) steerable but Bell-local states (in the green region), and (iii) Bell nonlocal states (in the blue region). We can see in this figure that a VOPS state has nonvanishing SP or BP if either \(p\) is sufficiently large (and then independently of \(x\)) or if smaller values of \(p\) are accompanied by a sufficiently large \(x\). The same conclusion can be drawn by analyzing Figs. 2(c,e) and Eqs. (16), (17), and (24). Figures 3(a,b,c) show the ideal NC potentials and their hierarchy for the mixed states defined as \[\sigma^{\prime}(p,p^{\prime})=p^{\prime}|1\rangle\langle 1|+(1-p^{\prime})|\psi_{p}\rangle\langle\psi_{p}|, \tag{27}\] where \(|\psi_{p}\rangle=\sqrt{p}|1\rangle+\sqrt{1-p}|0\rangle\). These states lie on the cross sections of some graphs in Fig. 4, as explained in detail in the caption of Fig. 3. In particular, a very narrow region close to \(p^{\prime}=0\) with nonzero BP and SP is shown in Fig. 3(a) for \(p=0.2\). ## III Realistic nonclassicality potentials We stress that the standard entanglement potentials of Ref. 
[34] are based solely on the special case of \(\rho_{\mathrm{out}}\) for a balanced (50/50) BS assuming no dissipation. Now we analyze how experimental imperfections can affect the two-qubit states generated from single-qubit states given in Eq. (1). We first consider the effect of phase damping. Specifically, the Kraus operators for a single-qubit phase-damping channel (PDC) read [76]: \[E_{0}(q_{i})=|0\rangle\langle 0|+\sqrt{1-q_{i}}|1\rangle\langle 1|,\quad E_{1}(q_{i})=\sqrt{q_{i}}|1\rangle\langle 1|, \tag{28}\] where \(q_{i}\) (with \(i=1,2\)) are phase-damping coefficients (rates), and the Kraus operators satisfy the normalization relation \(\sum_{n=0,1}E_{n}^{\dagger}(q_{i})E_{n}(q_{i})=I\). Two-qubit phase damping transforms a given two-mode state \(\rho_{\mathrm{in}}\) to \[\rho_{\mathrm{PDC}}=\sum_{i,j}[E_{i}(q_{1})\otimes E_{j}(q_{2})]\rho_{\mathrm{in}}[E_{i}^{\dagger}(q_{1})\otimes E_{j}^{\dagger}(q_{2})]. \tag{29}\] For simplicity, we analyze the same phase damping rate in both qubits, so we set \(q\equiv q_{1}=q_{2}\). We also consider the effect of an unbalanced BS on the generation of two-mode states, as given by Eq. (2) for \(r\neq t=\sqrt{1-r^{2}}\). By including these effects, we find that the output state \(\rho_{\mathrm{out}}\), given in Eq. (2), now generalizes to \[\rho_{qr}(p,x)=\left[\begin{array}{cccc}1-p&-Qrx&Qtx&0\\ -Qrx^{*}&pr^{2}&-pQ^{2}rt&0\\ Qtx^{*}&-pQ^{2}rt&pt^{2}&0\\ 0&0&0&0\end{array}\right], \tag{30}\] where \(Q=\sqrt{1-q}\) for the phase damping parameter \(q\). Equation (30) reduces to Eq. (3) for \(r=t=1/\sqrt{2}\) and \(q=0\). Thus, by considering these imperfections, we can analyze the entanglement, steering, and Bell nonlocality generalized potentials corresponding to more realistic experimental situations, as defined, respectively, by \[\mathrm{CP}_{qr}(\sigma)=C(\rho_{qr}), \tag{31}\] \[\mathrm{NP}_{qr}(\sigma)=N(\rho_{qr}), \tag{32}\] \[\mathrm{SP}_{qr}(\sigma)=S^{(3)}(\rho_{qr}), \tag{33}\] \[\mathrm{BP}_{qr}(\sigma)=B(\rho_{qr}), \tag{34}\] and analogously for the related potentials based on \(S^{(3)}_{\mathrm{CA}}(\rho_{qr})\) and \(S^{(2)}_{\mathrm{CA}}(\rho_{qr})\). The hierarchy relations, given in Eq. (25) for the ideal potentials, simply generalize for the realistic (i.e., lossy) NC potentials to \[\mathrm{BP}_{qr}(\sigma)\leq\mathrm{SP}_{qr}(\sigma)\leq\mathrm{NP}_{qr}(\sigma)\leq\mathrm{CP}_{qr}(\sigma). \tag{35}\] We find that the concurrence generalized potential reads \[\mathrm{CP}_{qr}[\sigma(p,x)]=C[\rho_{qr}(p,x)]=2p(1-q)rt=p(1-q)\sin\theta, \tag{36}\] where the BS parameter \(\theta\) is defined below Eq. (2). One can define other entanglement generalized potentials based on, e.g., the universal witness of entanglement (UWE), which can be defined by \(\det\rho^{\Gamma}\)[77] or \(\Theta(-\det\rho^{\Gamma})\) to be consistent with the definitions of the other nonclassicality quantifiers applied in this paper. Note that an effective experimental method for measuring the UWE without full QST was described in [78] (although the method has not been implemented experimentally yet). The measurement of the concurrence of a two-qubit state usually requires its full QST. 
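The generalized potentials of Eqs. (31)-(36) can be checked numerically. The following NumPy sketch builds \(\rho_{qr}(p,x)\) of Eq. (30) and evaluates all four potentials; the concurrence is computed with the standard Wootters formula, the negativity with the common two-qubit normalization \(N=\max[0,-2\min\mathrm{eig}(\rho^{\Gamma})]\) (our assumption), the correlation matrix is taken as \(R=T^{\mathrm{T}}T\) built from Pauli correlations (the standard Horodecki construction), and the identification \(S^{(3)}=\sqrt{\Theta[(\mathrm{Tr}R-1)/2]}\) is our inference from comparing Eqs. (15) and (22).

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
PAULIS = [X, Y, Z]

def rho_qr(p, x, q=0.0, r=1/np.sqrt(2)):
    # Two-qubit output state of Eq. (30); basis order |00>, |01>, |10>, |11>.
    Q, t, xc = np.sqrt(1 - q), np.sqrt(1 - r**2), np.conj(x)
    return np.array([[1 - p,   -Q*r*x,       Q*t*x,       0],
                     [-Q*r*xc,  p*r**2,     -p*Q**2*r*t,  0],
                     [Q*t*xc,  -p*Q**2*r*t,  p*t**2,      0],
                     [0,        0,           0,           0]], dtype=complex)

def concurrence(rho):
    # Wootters concurrence of a two-qubit state.
    yy = np.kron(Y, Y)
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ yy @ rho.conj() @ yy))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def negativity(rho):
    # N = max(0, -2 min eig(rho^Gamma)); partial transpose of the second qubit.
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return max(0.0, -2 * np.min(np.linalg.eigvalsh(pt)))

def sp_bp(rho):
    # Correlation matrix R = T^T T from Pauli correlations T_ij = Tr[rho s_i x s_j].
    T = np.array([[np.real(np.trace(rho @ np.kron(a, b))) for b in PAULIS]
                  for a in PAULIS])
    R = T.T @ T
    tr, emin = np.trace(R), np.min(np.linalg.eigvalsh(R))
    SP = np.sqrt(max(0.0, (tr - 1) / 2))     # S^(3), cf. Eqs. (15) and (22)
    BP = np.sqrt(max(0.0, tr - emin - 1))    # Eq. (19)
    return SP, BP

rng = np.random.default_rng(1)
for _ in range(5):
    p, q, r = rng.uniform(0, 1), rng.uniform(0, 0.5), rng.uniform(0.1, 0.9)
    x = rng.uniform(0, np.sqrt(p * (1 - p)))
    rho = rho_qr(p, x, q, r)
    CP, NP = concurrence(rho), negativity(rho)
    SP, BP = sp_bp(rho)
    assert np.isclose(CP, 2*p*(1 - q)*r*np.sqrt(1 - r**2), atol=1e-8)  # Eq. (36)
    print(f"BP={BP:.3f} <= SP={SP:.3f} <= NP={NP:.3f} <= CP={CP:.3f}")  # Eq. (35)
```

For the singlet-like case \(p=1\), \(x=0\), \(q=0\), \(r=t=1/\sqrt{2}\), all four potentials evaluate to unity, consistent with Fig. 8(f).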
The UWE and the corresponding entanglement generalized potential \(\mathrm{UWEP}_{qr}\) for the \(\theta\)-dependent BS output state reads \[\mathrm{UWEP}_{qr}\equiv\Theta[-\det\rho^{\Gamma}_{qr}(p,x)]=(\tfrac{1}{2}p\sin\theta)^{4}(1-q)^{2}, \tag{37}\] being independent of \(x\), which is the same as for the concurrence potential \(\mathrm{CP}_{qr}\), but in contrast to the negativity potential \(\mathrm{NP}_{qr}\). Anyway, it holds that \[\mathrm{UWEP}_{qr}(\sigma)>0\Leftrightarrow\mathrm{CP}_{qr}(\sigma)>0\Leftrightarrow\mathrm{NP}_{qr}(\sigma)>0, \tag{38}\] for any \(x\). Note that the analytical expression for \(\mathrm{NP}_{qr}\), which generalizes Eq. (9), is quite lengthy, so it is not presented here. We stress that all the nonclassicality measures and quantifiers considered in this paper are independent of the phase of \(x\), although the \(R\) matrix itself depends on it. So, to keep the formulas for the \(R\) matrix compact, let us assume in the following equations that the coherence parameter \(x\)_is real_. Then the correlation matrix \(R\) reads \[R=\left[\begin{array}{ccc}4Zp^{2}+4Q^{2}r^{2}x^{2}&0&Y\\ 0&4p^{2}Z&0\\ Y&0&(1-2p)^{2}+4Q^{2}t^{2}x^{2}\end{array}\right], \tag{39}\] where \[Y = 2Qr\left(-2pQ^{2}t^{2}+2p-1\right)x,\] \[Z = Q^{4}r^{2}t^{2}=\tfrac{1}{4}(1-q)^{2}\sin^{2}\theta. \tag{40}\] The eigenvalues of \(R\) are found to be: \[e_{1,2} = \frac{1}{2}\left(1+4\left[Q^{2}x^{2}+p^{2}(Z+1)-p\right]\pm\sqrt{f_{e}}\right),\] \[e_{3} = 4p^{2}Z=[p(1-q)\sin\theta]^{2}, \tag{41}\] where \[f_{e}=\left(\mathrm{Tr}R-4p^{2}Z\right)^{2}-16Z\left(2p^{2}+2x^{2}-p\right)^{2}. \tag{42}\] Thus, we have \[\mathrm{Tr}R=\sum e_{i} = 4p(2pZ+p-1)+4Q^{2}x^{2}+1, \tag{43}\] \[\mathrm{min}[\mathrm{eig}(R)] = \mathrm{min}(e_{2},e_{3}), \tag{44}\] which enable the calculation of the generalized potentials \(\mathrm{SP}_{qr}\) and \(\mathrm{BP}_{qr}\). The hierarchy of the lossy NC potentials is plotted in Figs. 4(b-f) in comparison to the ideal NC potentials shown in Fig. 4(a). The red, green, and blue regions show, respectively, the regimes II, III, and IV listed in Table 1; while the point \((p,x)=(0,0)\) indicates the only separable VOPS state (i.e., the vacuum), which belongs to regime I. It is clearly seen that dephasing and unbalanced beam splitting considerably decrease the regions of nonvanishing steering and nonlocality potentials. ## IV Phase-space and angular-momentum descriptions of nonclassicality ### Wigner and Cahill-Glauber quasiprobability distributions To visualize the nonclassicality of the analyzed single- and two-mode states, we here apply the standard Wigner functions and their generalizations. It is well known that linear transformations (including that of a BS) do not change the global nonclassicality of states. This can be convincingly demonstrated using the Cahill-Glauber \(s\)-parametrized quasiprobability distribution (QPD), which is defined for any \(s\in[-1,1]\) in Appendix A. Note that the \(s\)-parametrized QPD reduces in special cases to the standard Wigner (\(W=\mathcal{W}^{(0)}\)) and Husimi (\(Q=\mathcal{W}^{(-1)}\)) functions, which can be measured experimentally, and to the Glauber-Sudarshan function (\(P=\mathcal{W}^{(1)}\)), which is used in the definition of the nonclassicality of optical fields, but usually cannot be measured experimentally because of its singularity (except for very special nonclassical fields). For example, a perfect BS transformation, which is given by the unitary transformation \(U_{\mathrm{BS}}\), given below Eq. 
(2), of an arbitrary input state \(\rho_{\mathrm{in}}\) (of any dimension) resulting in the two-mode output state \(\rho_{\mathrm{out}}\), can equivalently be described by the evolution in a two-mode phase space of the corresponding QPD given by [79] \[\mathcal{W}^{(s)}_{\mathrm{out}}(\alpha_{1},\alpha_{2})=\mathcal{W}^{(s)}_{\mathrm{in}}(t\alpha_{1}+r\alpha_{2},r\alpha_{1}-t\alpha_{2}). \tag{45}\] This equation implies that the initial QPD is displaced, without changing its form, along a trajectory in the phase space spanned by the canonical position (\(X_{i}\equiv\mathrm{Re}\,\alpha_{i}\), for \(i=1,2\)) and momentum (\(Y_{i}\equiv\mathrm{Im}\,\alpha_{i}\)) operators. The trajectory is given by the solution of the corresponding classical equations of motion. Thus, the global nonclassicality of the state is unchanged during this evolution. In this paper we are mainly interested in a special case of the BS transformation assuming a VOPS state \(\sigma\) in one input port and the vacuum in the other port. Then, Eq. (45) reduces to \[\mathcal{W}^{(s)}_{\rm out}(\alpha_{1},\alpha_{2})=\mathcal{W}^{(s)}_{\rm vops}(t\alpha_{1}+r\alpha_{2})\,\mathcal{W}^{(s)}_{\rm vac}(r\alpha_{1}-t\alpha_{2}), \tag{46}\] where \(\mathcal{W}^{(s)}_{\rm vac}\) is the single-mode-vacuum QPD given by \[\mathcal{W}^{(s)}_{\rm vac}(\alpha)=\frac{1}{\pi}T^{(s)}_{00}(\alpha)=\frac{2}{\pi(1-s)}\exp\left(-\frac{2}{1-s}|\alpha|^{2}\right), \tag{47}\] and \(\mathcal{W}^{(s)}_{\rm vops}(\alpha)\) is the QPD for an arbitrary single-mode state \(\sigma(p,x)\): \[\mathcal{W}^{(s)}_{\rm vops}(\alpha)=\frac{1}{\pi}\Big[(1-p)T^{(s)}_{00}(\alpha)+pT^{(s)}_{11}(\alpha)+xT^{(s)}_{10}(\alpha)+x^{*}T^{(s)}_{01}(\alpha)\Big], \tag{48}\] with the functions \(T^{(s)}_{nm}(\alpha)\) given explicitly in Eq. (35). Note that Eq. (47) is a special case of Eq. (48). Another important special case of that formula is the QPD \(\mathcal{W}^{(s)}_{\rm 1ph}(\alpha)=T^{(s)}_{11}(\alpha)/\pi\) for the single-photon Fock state: \[\mathcal{W}^{(s)}_{\rm 1ph}(\alpha)=\frac{2(4|\alpha|^{2}+s^{2}-1)}{\pi(1-s)^{3}}\exp\left(-\frac{2}{1-s}|\alpha|^{2}\right), \tag{49}\] which in the limit \(s\to 1\) becomes a derivative of Dirac's \(\delta\)-function [14]: \[P_{\rm 1ph}(\alpha)\equiv\mathcal{W}^{(1)}_{\rm 1ph}(\alpha)=\left(1+\frac{\partial}{\partial\alpha}\frac{\partial}{\partial\alpha^{*}}\right)\delta(\alpha), \tag{50}\] which can be easily shown by representing \(\delta(\alpha)\) as the limit of a sequence of zero-centered normal distributions, i.e., \(\mathcal{W}^{(s)}_{\rm vac}(\alpha)\). Equation (50), and thus also Eq. (48) for \(s=1\), clearly show the nonclassical character of any VOPS state (except the vacuum), as these \(P\)-functions are more singular than that of a coherent state \(|\alpha\rangle\), i.e., \(P_{\rm coh}(\alpha)=\delta(\alpha)\). The Wigner function for an arbitrary VOPS state \(\sigma(p,x)\), which can be obtained from Eq. (48), reads \[W_{\rm vops}(\alpha) = \frac{2}{\pi}\Big[(1-p)+p(4|\alpha|^{2}-1)+4{\rm Re}(x\alpha)\Big]\exp\left(-2|\alpha|^{2}\right). \tag{51}\] Examples of the single-mode Wigner and Cahill-Glauber distributions for a chosen \(\sigma\) state are plotted in Figs. 5 and 6, respectively. 
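Equation (51) is easy to verify numerically against the \(W^{(0)}\) ranges quoted in the caption of Fig. 6. The following is a minimal NumPy sketch; the grid size and state parameters are illustrative choices.

```python
import numpy as np

def w_vops(alpha, p, x):
    # Single-mode Wigner function of a VOPS state, Eq. (51).
    a2 = np.abs(alpha)**2
    return (2 / np.pi) * ((1 - p) + p * (4 * a2 - 1)
                          + 4 * np.real(x * alpha)) * np.exp(-2 * a2)

xs = np.linspace(-3, 3, 601)
Xg, Yg = np.meshgrid(xs, xs)
W = w_vops(Xg + 1j * Yg, p=0.5, x=0.37)   # the state of Figs. 5(b) and 6(b)
dA = (xs[1] - xs[0])**2
print("min W =", W.min(), " max W =", W.max())  # approx. -0.14 and 0.50
print("normalization:", W.sum() * dA)           # approx. 1
```

The computed extrema reproduce the range \([-0.14,0.50]\) quoted for \(W^{(0)}\) in the caption of Fig. 6(b), and the integral of the Wigner function over phase space is unity, as required.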
Experimentally reconstructed two-mode Wigner functions \(W(\alpha_{1},\alpha_{2})\) are usually shown graphically (see, e.g., [80]) via their marginal functions of four different quadrature pairs, i.e.: \(W(X_{1},X_{2})=\int W(\alpha_{1},\alpha_{2})dY_{1}dY_{2}\), and analogously \(W(Y_{1},Y_{2})\), \(W(X_{1},Y_{1})\), and \(W(X_{2},Y_{2})\). As an example, we show such four marginal distributions in Fig. 7 for \(\rho(p=0.5,x=0.37)\). The Cahill-Glauber QPD, \(W^{(1/2)}(\alpha)\), was calculated using Eq. (34), while the Wigner functions were calculated from: (i) the simple formulas in Eqs. (46), (47), and (51) for the model without phase damping, i.e., for the BS output state in Eq. (3); and (ii) the general definition of the two-mode Wigner function, given in Eq. (48), for two-mode states affected by phase damping according to Eq. (30). ### Angular-momentum probability surfaces Because the studied two-mode states are limited to two qubits or even formally to a single qutrit, as implied by Eq. (30), we can visualize their properties more compactly using angular-momentum probability surfaces (AMPSs) or, equivalently, angular-momentum Wigner functions. As defined in [81; 82; 83], an AMPS [say \(\rho_{JJ}(\theta,\phi)\)] for a given \((2J+1)\times(2J+1)\) state \(\rho\) (which can be interpreted as an angular-momentum state for any \(J\)) is a three-dimensional closed surface, where the distance from the origin in a specific direction corresponds to the probability of the maximum projection of \(\rho\) along that direction. An AMPS \(\rho_{JJ}(\theta,\phi)\) can be given as a linear combination of spherical harmonics with coefficients corresponding to the moments of a polarization operator and the Clebsch-Gordan coefficients (see [82] for details). A one-to-one correspondence between a given \(\rho\) and \(\rho_{JJ}(\theta,\phi)\) can easily be shown by recalling the orthonormality of the spherical harmonics. Alternatively, one can apply the angular-momentum Wigner functions introduced by Agarwal [84] (see also [85]), which are also simply related to the AMPSs [83]. Thus, the AMPSs and the above-mentioned standard and generalized Wigner functions can be used interchangeably as complete representations of the studied state \(\rho\). In our case, we encode the Fock basis states \(|00\rangle\), \(|01\rangle\), and \(|10\rangle\) into, respectively, the angular-momentum states \(|J,-1\rangle\), \(|J,0\rangle\), and \(|J,1\rangle\), where \(J=1\) corresponds to a qutrit. We note that other encodings can also be applied. In Fig. 8, we show the AMPSs for chosen states, which reveal different relations between the NC potentials corresponding to all the hierarchy regimes listed in Table 1. ## V Discussion ### Experimental feasibility #### v.1.1 Generation of arbitrary vacuum-one-photon superpositions A number of methods for generating superpositions of Fock states, including the studied VOPS states, have been proposed and implemented experimentally with optical [86; 87; 88; 24; 89] or microwave [90] photons. In particular, VOPS can be generated from a coherent state by generalized conditional quantum teleportation and projective synthesis using a quantum scissors device, as shown schematically in Fig. 1(b). The method was proposed in [91], its experimental feasibility was analyzed in detail in [92, 93], and it was experimentally implemented in [88]. The device comprises two balanced beam splitters \(\mathrm{BS}_{1}\) and \(\mathrm{BS}_{2}\). 
A single-photon state \(|1\rangle\) is mixed with the vacuum \(|0\rangle\) on \(\mathrm{BS}_{1}\), and the generated entangled state at one of the \(\mathrm{BS}_{1}\) outputs is mixed with a coherent state \(|\alpha\rangle\) (with a complex amplitude \(\alpha\)) at \(\mathrm{BS}_{2}\). To generate a desired pure state \(\sigma[p,\sqrt{p(1-p)}]=|\psi\rangle\langle\psi|\), the amplitude \(\alpha\) should satisfy the condition \(p/(1-p)=|\alpha|^{2}\), so \(|\psi\rangle\sim(|0\rangle+\alpha|1\rangle)\). The projection synthesis of \(|\psi\rangle\) is realized by conditional measurements at the two single-photon detectors, \(\mathrm{D}_{1}\) and \(\mathrm{D}_{2}\). A proper generation of \(|\psi\rangle\) at the second output port of \(\mathrm{BS}_{1}\) occurs if the detector \(\mathrm{D}_{1}\) registers a single photon and \(\mathrm{D}_{2}\) does not register any (or vice versa). In the case of other measurement results, the generation (and qubit teleportation) is unsuccessful, so the procedure should be repeated. Note that a VOPS state is generated via quantum state truncation (which can be considered a measurement-induced photon blockade process) and via the conditional teleportation of the truncated state. To generate an incoherent VOPS state \(\sigma(p,x)\) with a coherence factor \(|x|<\sqrt{p(1-p)}\), a phase shifter can be applied (with a specific probability), as shown in Fig. 1(b). For example, by using random \(0\) or \(\pi\) phase shifts with a given probability, one can decohere a given pure-state superposition to an arbitrary degree. A phase shifter can be replaced by two kinds of mirrors changing the phase of a state during its reflection by either \(0\) or \(\pi\). Let us assume that the state \(|\psi_{0}\rangle=\mathcal{N}(|0\rangle+\alpha|1\rangle)\) for \(\phi=0\) was generated \(n_{0}\) times, and \(|\psi_{1}\rangle=\mathcal{N}(|0\rangle-\alpha|1\rangle)\) for \(\phi=\pi\) was produced \(n_{1}\) times, where \(\mathcal{N}\) is the normalization constant. In fact, the state \(|\psi_{1}\rangle\) is generated in the scheme if a single photon is detected by \(D_{2}\) instead of \(D_{1}\); thus, no phase shifter is required for generating \(|\psi_{1}\rangle\). The corresponding mixed state reads \(\sigma^{\prime}=\sum_{i=0,1}n_{i}|\psi_{i}\rangle\langle\psi_{i}|/(n_{0}+n_{1})\); so, if \(n_{1}=n_{0}\) then \(x=0\), and if \(n_{1}=0\) then \(x=\sqrt{p(1-p)}\). Thus, by properly choosing \(n_{1}\) relative to \(n_{0}\), one can obtain any value of \(|x|\in[0,\sqrt{p(1-p)}]\). VOPS states can also be generated conditionally (via postselection) using other linear-optical schemes, e.g., via quantum-optical catalysis [24], spontaneous parametric down-conversion [86], or single-photon linear amplification with finite gain [94]. We focus here on freely propagating VOPS states generated in a linear-optical system. We note that the generation and control of arbitrary superpositions of harmonic-oscillator states have also been experimentally demonstrated in various other systems, which include microwave resonators [95, 96, 97, 90] and optical cavities [98], or even ion traps, where superpositions of motional states of trapped ions were generated [99]. Thus, our classification of NC is not limited to VOPS states, but also applies to other bosonic excitations. #### iv.2.2 Two-mode state tomography Once a desired VOPS state is generated, it is mixed with the vacuum on a balanced BS, and then a two-mode Wigner function can be reconstructed using, e.g., homodyne QST as shown in Fig. 1(a). 
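Before turning to tomography, the \(n_{0}/n_{1}\) mixing argument above admits a quick numerical check. From the two special cases stated in the text we infer the closed form \(x=\frac{n_{0}-n_{1}}{n_{0}+n_{1}}\sqrt{p(1-p)}\); this formula is our inference, not stated explicitly above. A minimal sketch:

```python
import numpy as np

def mixed_vops(p, n0, n1):
    # Mixture of |psi_0> ~ |0> + a|1> (n0 runs) and |psi_1> ~ |0> - a|1> (n1 runs),
    # with the amplitude fixed by the condition p/(1-p) = |a|^2.
    a = np.sqrt(p / (1 - p))
    kets = [np.array([1.0, s * a]) / np.sqrt(1 + a**2) for s in (+1, -1)]
    return (n0 * np.outer(kets[0], kets[0])
            + n1 * np.outer(kets[1], kets[1])) / (n0 + n1)

sigma = mixed_vops(p=0.3, n0=700, n1=300)
print(sigma[0, 1])                                   # coherence <0|sigma|1>
print((700 - 300) / 1000 * np.sqrt(0.3 * 0.7))       # inferred closed form
```

Both printed values agree, and the limits \(n_{1}=n_{0}\) (giving \(x=0\)) and \(n_{1}=0\) (giving \(x=\sqrt{p(1-p)}\)) are reproduced.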
It should be noted that, from the experimental point of view, it is much more challenging to perform optical tomography on qubit states implemented as VOPS states compared to such tomographic measurements of optical qubits implemented in other ways, including photon polarization. Anyway, a number of experiments reported the generation of VOPS states and their tomographic reconstruction via homodyne detection [24, 25, 26, 88, 89]. Homodyne tomographic measurements of the joint detection probabilities for testing Bell nonlocality were first considered on correlated optical beams at the output of a nondegenerate parametric amplifier in Ref. [100]. Thus, a typical setup of two-mode homodyne QST, as schematically shown in Fig. 1(a), can be applied for reconstructing a two-qubit Wigner function \(W(\alpha_{1},\alpha_{2})\), from which the corresponding density matrix \(\rho_{\mathrm{exp}}\) can be calculated by Eq. (10). To find the corresponding single-qubit state \(\sigma(p,x)\), one can numerically find the closest state \(\rho_{qr}(p,x)\), given in Eq. (30), maximizing the Uhlmann-Jozsa fidelity (or, equivalently, minimizing the Bures distance), \[F_{\mathrm{max}} = \max_{p,x,q,r}F[\rho_{\mathrm{exp}},\rho_{qr}(p,x)] \equiv \max_{p,x,q,r}\left[\mathrm{Tr}\Big(\sqrt{\sqrt{\rho_{\mathrm{exp}}}\,\rho_{qr}(p,x)\sqrt{\rho_{\mathrm{exp}}}}\Big)\right]^{2}. \tag{52}\] Homodyne QST for reconstructing two-mode Wigner functions can be replaced by the Lutterbach-Davidovich QST [101], based on performing proper displacements in phase space and parity measurements using the Cahill-Glauber formula, given in Eq. (10). The single-mode QST method was experimentally applied in, e.g., [90] for reconstructing single-mode Wigner functions of Fock-state superpositions (including VOPS states) in a superconducting resonator. The Lutterbach-Davidovich method can readily be applied for reconstructing two-mode Wigner functions as well (as experimentally implemented in, e.g., [102]), in the same spirit as single-mode homodyne QST was generalized to two-mode QST [see Fig. 1(a)]. Moreover, a modified Lutterbach-Davidovich method can be applied for reconstructing also the single- and two-mode Cahill-Glauber \(s\)-parametrized QPDs given in Eqs. (11) and (10) for \(s\) not too close to \(1\). The NC of experimental VOPS states can be tested by applying various NC witnesses, including the Vogel criterion [14], as applied in [24], or negative Wigner functions [89]. The NC of single-photon Fock states was experimentally tested via violating a Bell inequality calculated from a two-mode density matrix reconstructed via homodyne detection in [25; 26]; those results can be considered a special case of our nonlocality potential for a single-photon Fock-like state generated experimentally, and other \(\sigma(p,x)\) states were not studied there. At the end of this section we would like to stress the importance of applying quantum state tomography in this study. Specifically, we are interested not only in testing whether a single-mode state exhibits a given type of quantum correlations; our goal is to quantify the NC of the state via measures of two-mode quantum correlations, and finally to demonstrate the related hierarchy of such NC quantifiers. This is a much harder problem; in particular, determining an entanglement measure of a general two-qubit state without full two-qubit QST is especially difficult. 
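A brute-force version of the maximization in Eq. (52) can be sketched in a few lines; it reuses the `rho_qr` helper defined in the earlier sketch of Eq. (30). The fidelity is computed via eigendecompositions to handle rank-deficient states, and the grid search over \((p,x)\) for fixed \((q,r)\) is an illustrative simplification (a realistic fit would also optimize over \(q\) and \(r\)).

```python
import numpy as np

def fidelity(rho, sigma):
    # Uhlmann-Jozsa fidelity F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2, Eq. (52).
    w, V = np.linalg.eigh(rho)
    s = (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T
    m = np.linalg.eigvalsh(s @ sigma @ s)
    return np.sum(np.sqrt(np.clip(m, 0, None)))**2

def fit_state(rho_exp, q=0.0, r=1/np.sqrt(2)):
    # Brute-force maximization of Eq. (52) over (p, x) for fixed (q, r).
    best_F, best_px = 0.0, (0.0, 0.0)
    for p in np.linspace(0, 1, 101):
        for x in np.linspace(0, np.sqrt(max(p * (1 - p), 1e-12)), 25):
            F = fidelity(rho_exp, rho_qr(p, x, q, r))
            if F > best_F:
                best_F, best_px = F, (p, x)
    return best_F, best_px

# Self-test: fitting a noiseless target recovers (p, x) up to grid spacing.
target = rho_qr(0.4, 0.3, 0.0, 1/np.sqrt(2))
print(fit_state(target))
```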
For a related discussion and references we refer to [45], where the hierarchy of entanglement, steering, and Bell nonlocality of experimental two-polarization-qubit states was demonstrated via full QST. Actually, such a method, which enables the determination of an entanglement measure without full QST of two polarization qubits, has been proposed [78], but it is quite complicated and, thus, has not been implemented experimentally yet. The determination of the Costa-Angelo steering measures \(S_{\rm CA}^{(2)}(\rho)\) and \(S_{\rm CA}^{(3)}(\rho)\) (and, thus, the corresponding Bell nonlocality and steering potentials) without full QST is possible, but the method has so far been developed only for polarization qubits [75]. To our knowledge, the only experimental work showing the hierarchy of entanglement, steering, and Bell nonlocality measures without full QST has been reported very recently in Ref. [47], but only for some specific classes of two-polarization-qubit states (i.e., Werner and Werner-like states). In the present paper, we study an analogous hierarchy of quantum correlations, but for single-qubit states. These states, after mixing with the vacuum on a balanced or unbalanced BS and being subjected to phase damping, result in two-qubit states belonging to much broader classes of states than the Werner and Werner-like states. ### Nonclassical potentials for higher-dimensional single-mode optical states One can apply NC potentials not only to VOPS states, but also to single-mode optical states of higher dimensions and (at least for some classes of) continuous-variable (CV) states. We can interpret such potentials in close analogy to those for single-qubit states by applying the Wiseman _et al._ interpretation of the corresponding two-mode NC correlations [57]. Specifically, an EPR steering potential describes the quantum correlations exhibited by a single-mode bosonic field, enabling the verification of two-mode entanglement, generated by a linear coupling of the single-mode field with the vacuum, even when complete characterization of one of the generated modes is lacking. By contrast, the Bell nonlocality (entanglement) potentials describe single-mode nonclassical correlations in the case when complete characterization of both generated modes is lacking (available). The calculation of steering and Bell nonlocality potentials based on measures of the corresponding two-mode correlations would be very challenging numerically, except for low-dimensional qudits or specific classes of CV states (like Gaussian states). In particular, the calculation of steering potentials based on two-mode steering measures for two qutrits can be effectively performed by applying semidefinite programming [16]. Anyway, such a measure-based approach becomes numerically demanding already for two quartits. Thus, it is much more practical to analyze single-mode steering and nonlocality potentials for qudits and CV systems based on necessary and sufficient criteria, corresponding to violations of some classical inequalities for observing two-mode correlations, instead of analyzing their measures. Thus, the hierarchies of criteria of steering and nonlocality potentials for single-mode fields can be determined via the hierarchies of sufficient or necessary conditions for observing, respectively, two-mode steering (e.g., [22]) and nonlocality (e.g., [51]). 
The calculations of steering and Bell nonlocality potentials can usually be much simplified by limiting the number of measurements from infinite to finite, as we have assumed even in our analysis of single-qubit states. A variety of powerful Bell and steering inequalities, which can be readily applied for calculating the corresponding potentials beyond the VOPS states and beyond the applied measurement scenarios, are reviewed in Refs. [18] and [16; 17], respectively. Steering witnesses for CV systems can be based on the variances of some observables [103] or entropic uncertainty relations [104; 105]. We also note that Bell inequalities, which can be the basis for defining the nonlocality potentials for CV systems, have been studied even for an infinite number of measurement settings of each party [106; 107] and for continuous sets of values of the measurement outputs [108; 109]. Such an analysis of NC potentials for CV states can be much simplified by limiting the interest to Gaussian states, i.e., displaced squeezed thermal states. Actually, an entanglement potential based on the logarithmic negativity was applied to Gaussian states already in the first paper on NC potentials [34]. The convertibility (via a BS) of locally squeezed Gaussian states and entanglement was considered in Ref. [110]. Concerning steering potentials, one can use a computable measure of steering for arbitrary bipartite Gaussian states proposed in [111]. Nonlocality potentials for Gaussian states can be considered via Bell-inequality violations using homodyne detection, as studied in, e.g., [112]. ## VI Conclusions We have theoretically studied measures of various types of single-qubit quantum correlations related to two-qubit correlations via a linear transformation. Thus, we have generalized the concept of entanglement potentials of Asboth _et al._[34], as measures of single-mode NC, by proposing the Bell nonlocality and steering potentials. Analogously to the Wiseman _et al._ standard interpretation of entanglement, steering, and Bell nonlocality of two-party systems [57], one can interpret NC correlations of single-qubit states with nonvanishing potentials via trusted or untrusted detectors used for measuring the two-qubit states, which are generated via balanced beamsplitting on the single-qubit ones. We have applied this approach for quantifying the nonclassicality of VOPS states by mixing them with the vacuum on a balanced BS and then determining various measures of two-qubit (two-mode) nonclassical correlations. Specifically, we have applied here: (i) the negativity and concurrence as examples of entanglement potentials; (ii) quantum steering potentials based on the Costa-Angelo measures [66] of two-qubit steering in the three-measurement scenario via the maximal violation of a CJWR inequality. We have chosen these specific steering potentials as they can be calculated analytically for any two-qubit states. We note that steering potentials can be defined and applied (at least numerically) via other popular steering measures, like the steerable weight [67] and the steering robustness [68], which might also be applied for studying steering potentials for two-qudit states. Moreover, we have defined a Bell nonlocality potential via the Horodecki measure [73] of two-qubit Bell nonlocality quantifying the maximal violation of the Bell-CHSH inequality. We note that this potential is monotonically related to the steering potential based on the Costa-Angelo measure in the two-measurement scenario [66]. 
Thus, with the help of these potentials, we could reveal the hierarchy of single-qubit nonclassical correlations in analogy to the hierarchy of the corresponding two-qubit correlations [45; 47]. We have discussed various methods for the generation of VOPS states and the homodyne tomographic reconstruction of the resulting two-mode states, as well as the calculation of realistic potentials that account for system imperfections, including phase damping and unbalanced beam splitting. The studied hierarchy of single-qubit potentials for generating two-qubit entanglement, steering, and Bell nonlocality can also be useful for estimating the degree of one type of quantum correlation from another, e.g., estimating the Bell nonlocality or steering potentials from an entanglement potential (or vice versa), in the spirit of such estimations for the corresponding two-qubit quantum-correlation measures (see, e.g., [113] and references therein). Apart from their fundamental interest, single-photon entanglement and VOPS states have been experimentally used for quantum information tasks, including quantum teleportation [23; 88] and EPR steering [26]. Moreover, one can subject a VOPS state to a non-demolition photon-presence detection gate and partially erase this information [114]. Thus, we believe that a deeper study of the NC correlations of VOPS states can find further applications in quantum technologies. We also stress that the studied NC potentials are not limited to Fock-state superpositions. Indeed, the results of this paper can be experimentally implemented with qubits encoded in, e.g., photon polarization, as reported in Ref. [115]. Thus, we believe that our work can stimulate further research in quantifying and utilizing the NC of single-mode optical fields in close analogy to various types of intermode quantum correlations with applications for quantum information processing. ###### Acknowledgements. A.M. and K.B. are supported by the Polish National Science Centre (NCN) under the Maestro Grant No. DEC-2019/34/A/ST2/00081. J.K. acknowledges Internal Palacky University grant No. IGA_PrF_2023_005. F.N. is supported in part by: Nippon Telegraph and Telephone Corporation (NTT) Research, the Japan Science and Technology Agency (JST) [via the Quantum Leap Flagship Program (Q-LEAP), and the Moonshot R&D Grant Number JPMJMS2061], the Asian Office of Aerospace Research and Development (AOARD) (via Grant No. FA2386-20-1-4069), and the Foundational Questions Institute Fund (FQXi) via Grant No. FQXi-IAF19-06. ## Appendix A Cahill-Glauber \(s\)-parametrized quasiprobability distributions Here we recall some basic formulas of the Cahill-Glauber formalism of quasiprobability distributions (QPDs) [116], which are phase-space representations of single- or multimode states. These are generalizations of the standard Wigner, Husimi, and Glauber-Sudarshan functions. 
For a multimode optical state \(\rho\), the Cahill-Glauber \(s\)-parametrized QPD is defined as \[\mathcal{W}^{(s)}(\{\alpha_{k}\})=\frac{1}{\pi^{M}}\langle T^{(s)}(\{\alpha_{k}\})\rangle=\frac{1}{\pi^{M}}\mathrm{Tr}\Big[\rho\prod_{k}T^{(s)}(\alpha_{k})\Big], \tag{10}\] where \(s\in[-1,1]\), \(\{\alpha_{k}\}=(\alpha_{1},...,\alpha_{M})\) (for the studied states \(\rho_{\mathrm{out}}\), the number of modes is \(M=2\)), \(\alpha_{k}\) are complex numbers, and the \(k\)th-mode operator \(T^{(s)}(\alpha_{k})\) is defined by \[T^{(s)}(\alpha_{k})=\int D^{(s)}(\beta_{k})\,\exp(\alpha_{k}\beta_{k}^{*}-\alpha_{k}^{*}\beta_{k})\,\frac{\mathrm{d}^{2}\beta_{k}}{\pi}, \tag{11}\] which is the Fourier transform of the \(s\)-parametrized displacement operator, \[D^{(s)}(\beta_{k})=\exp\left(\beta_{k}a_{k}^{\dagger}-\beta_{k}^{*}a_{k}+\frac{s}{2}\left|\beta_{k}\right|^{2}\right), \tag{12}\] where \(a_{k}\) (\(a_{k}^{\dagger}\)) is the \(k\)th-mode annihilation (creation) operator. The multimode operator \(T^{(s)}(\{\alpha_{k}\})\) is just a product of single-mode operators \(T^{(s)}(\alpha_{k})\), which can be equivalently defined as \[T^{(s)}(\alpha_{k})=\frac{2}{1-s}D(\alpha_{k})\left(\frac{s+1}{s-1}\right)^{a_{k}^{\dagger}a_{k}}D^{-1}(\alpha_{k}), \tag{10}\] where \(D(\alpha_{k})=D^{(0)}(\alpha_{k})\) is the standard displacement operator. In the three special cases of \(s=-1,0,1\), the \(s\)-parametrized QPD, \(\mathcal{W}^{(s)}(\alpha_{1},\alpha_{2})\), reduces, respectively, to the Husimi \(Q\), Wigner \(W\), and Glauber-Sudarshan \(P\) functions corresponding to the antinormal, symmetric, and normal orderings of the creation and annihilation operators. After substituting Eq. (10) into Eq. (11) for \(s=0\), one arrives at \[W(\{\alpha_{k}\})\equiv W^{(0)}(\{\alpha_{k}\})=\left(\frac{2}{\pi}\right)^{M}\mathrm{Tr}\Big[\rho\prod_{k}D(\alpha_{k})\mathcal{P}(a_{k})D^{-1}(\alpha_{k})\Big],\] where \(\mathcal{P}(a_{k})=(-1)^{a_{k}^{\dagger}a_{k}}\) is the photon-number parity operator. The Cahill-Glauber formula in Eq. (11) is the basis for a direct experimental measurement of the single-mode [90; 101] and multimode Wigner functions just by performing proper displacements \(D(\alpha_{k})\) in the phase space and the measurements of the parity operator \(\mathcal{P}(a_{k})\). The QPD for any \(s\) contains full information about a given state \(\rho\), as implied by the formula \[\rho = \int\mathcal{W}^{(s)}(\{\alpha_{k}\})\,T^{(-s)}(\{\alpha_{k}\})\,\mathrm{d}^{2}\{\alpha_{k}\}, \tag{11}\] where \(\mathrm{d}^{2}\{\alpha_{k}\}=\mathrm{d}^{2}\alpha_{1}\cdots\mathrm{d}^{2}\alpha_{M}\). For numerical calculations of a QPD (practically for any \(s\), which is not too close to \(1\)), it is useful to use its Fock-state representation, \[\mathcal{W}^{(s)}(\{\alpha_{k}\})=\frac{1}{\pi^{M}}\sum_{\{m_{k}\}=0}^{N_{0}}\,\sum_{\{n_{k}\}=0}^{N_{0}}\langle\{n_{k}\}|\rho|\{m_{k}\}\rangle\prod_{k=1}^{M}\langle m_{k}|T^{(s)}(\alpha_{k})|n_{k}\rangle, \tag{13}\] where \[\langle n_{k}|T^{(s)}(\alpha_{k})|m_{k}\rangle=\sqrt{\frac{n_{k}!}{m_{k}!}}\left(\frac{-s_{m}}{s_{p}}\right)^{n_{k}}s_{m}^{\delta_{k}+1}\left(\alpha_{k}^{\star}\right)^{\delta_{k}}L_{n_{k}}^{(\delta_{k})}\left(s_{p}s_{m}|\alpha_{k}|^{2}\right)\exp\left(-s_{m}|\alpha_{k}|^{2}\right),\] for \(m_{k}\geq n_{k}\); the other elements can be found from the property \(\langle n_{k}|T^{(s)}(\alpha_{k})|m_{k}\rangle=\langle m_{k}|T^{(s)}(\alpha_{k}^{\star})|n_{k}\rangle\). 
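The Fock-state representation above is straightforward to implement. The following Python sketch evaluates the matrix elements with SciPy's generalized Laguerre polynomials and checks them against the closed forms quoted at the end of this appendix; the function name and test values are illustrative choices.

```python
import numpy as np
from math import factorial
from scipy.special import genlaguerre

def T_nm(n, m, alpha, s):
    # Matrix element <n|T^(s)(alpha)|m> for m >= n; the m < n case follows
    # from <n|T^(s)(alpha)|m> = <m|T^(s)(alpha*)|n>.
    if m < n:
        return T_nm(m, n, np.conj(alpha), s)
    s_p, s_m = 2 / (1 + s), 2 / (1 - s)
    d = m - n
    u = np.abs(alpha)**2
    return (np.sqrt(factorial(n) / factorial(m)) * (-s_m / s_p)**n
            * s_m**(d + 1) * np.conj(alpha)**d
            * genlaguerre(n, d)(s_p * s_m * u) * np.exp(-s_m * u))

# Consistency checks against the closed forms T_00, T_10, and T_11 below:
a, s = 0.4 + 0.3j, 0.5
u = abs(a)**2
assert np.isclose(T_nm(0, 0, a, s), 2/(1 - s) * np.exp(-2*u/(1 - s)))
assert np.isclose(T_nm(1, 0, a, s), 4*a/(1 - s)**2 * np.exp(-2*u/(1 - s)))
assert np.isclose(T_nm(1, 1, a, s),
                  2*(4*u + s**2 - 1)/(1 - s)**3 * np.exp(-2*u/(1 - s)))
```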
Here \(\delta_{k}=m_{k}-n_{k}\), \(s_{p}=2/(1+s)\), \(s_{m}=2/(1-s)\), and \(L_{n_{k}}^{(\delta_{k})}(x)\) are the associated Laguerre polynomials. To calculate the QPD for a given \(s<1\), we can directly apply the density matrices given in Eqs. (3) and (30) to Eq. (13). The formula in Eq. (13) can be applied even in the limit \(s\to 1\), but the limit should be taken very carefully. We note that for the VOPS states \(\sigma\) and the BS-transformed states \(\rho_{\mathrm{out}}\), it is enough to analyze the two special cases of the polynomials: \(L_{0}^{(\delta_{k})}(x)=1\) and \(L_{1}^{(\delta_{k})}(x)=1+\delta_{k}-x\), because \(N_{0}=1\). Thus, by denoting \(T_{nm}^{(s)}(\alpha)=\langle n|T^{(s)}(\alpha)|m\rangle\), we have \[T_{00}^{(s)}(\alpha) = \frac{2}{1-s}\exp\left(-\frac{2}{1-s}|\alpha|^{2}\right),\] \[T_{10}^{(s)}(\alpha) = [T_{01}^{(s)}(\alpha)]^{*}=\frac{4\alpha}{(1-s)^{2}}\exp\left(- \frac{2}{1-s}|\alpha|^{2}\right),\] \[T_{11}^{(s)}(\alpha) = \frac{2(4|\alpha|^{2}+s^{2}-1)}{(1-s)^{3}}\exp\left(-\frac{2}{1-s }|\alpha|^{2}\right). \tag{14}\]
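As a final consistency check, evaluating the \(s=1/2\) QPD of \(\sigma(0.5,0)\) on a phase-space grid reproduces the range \([-1.27,0.57]\) quoted in the caption of Fig. 6(a). A minimal sketch, using the closed forms above (the grid parameters are illustrative):

```python
import numpy as np

def T00(alpha, s):
    return 2/(1 - s) * np.exp(-2*np.abs(alpha)**2/(1 - s))

def T10(alpha, s):
    return 4*alpha/(1 - s)**2 * np.exp(-2*np.abs(alpha)**2/(1 - s))

def T11(alpha, s):
    return (2*(4*np.abs(alpha)**2 + s**2 - 1)/(1 - s)**3
            * np.exp(-2*np.abs(alpha)**2/(1 - s)))

def qpd_vops(alpha, p, x, s):
    # s-parametrized QPD of a VOPS state, cf. Eq. (48) of the main text;
    # T_01 = conj(T_10) for the same argument alpha.
    return np.real((1 - p)*T00(alpha, s) + p*T11(alpha, s)
                   + x*T10(alpha, s) + np.conj(x)*np.conj(T10(alpha, s))) / np.pi

xs = np.linspace(-2.5, 2.5, 501)
Xg, Yg = np.meshgrid(xs, xs)
W_half = qpd_vops(Xg + 1j*Yg, p=0.5, x=0.0, s=0.5)
print(W_half.min(), W_half.max())   # approx. -1.27 and 0.57, cf. Fig. 6(a)
```

The minimum \(-4/\pi\approx-1.27\) occurs at the phase-space origin, confirming the nonclassicality of \(\sigma(0.5,0)\) at \(s=1/2\) even though its Wigner function (\(s=0\)) is nonnegative.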
2303.17813
Complexity analysis of weakly noisy quantum states via quantum machine learning
Quantum computers capable of fault-tolerant operation are expected to provide provable advantages over classical computational models. However, the question of whether quantum advantages exist in the noisy intermediate-scale quantum era remains a fundamental and challenging problem. The root of this challenge lies in the difficulty of exploring and quantifying the power of noisy quantum states. In this work, we focus on the complexity of weakly noisy states, which we define as the size of the shortest quantum circuit required to prepare the noisy state. To analyze this complexity, we first establish a general relationship between circuit depth, noise model, and purity. Based on this necessary condition, we propose a quantum machine learning (QML) algorithm that exploits the intrinsic-connection property of structured quantum neural networks. The proposed QML algorithm enables efficiently predicting the complexity of weakly noisy states from measurement results, representing a paradigm shift in our ability to characterize the power of noisy quantum computation.
Yusen Wu, Bujiao Wu, Yanqi Song, Xiao Yuan, Jingbo B. Wang
2023-03-31T06:02:44Z
http://arxiv.org/abs/2303.17813v3
# Complexity analysis of weakly noisy quantum states via quantum machine learning ###### Abstract Quantum computers capable of fault-tolerant operation are expected to provide provable advantages over classical computational models. However, the question of whether quantum advantages exist in the noisy intermediate-scale quantum era remains a fundamental and challenging problem. The root of this challenge lies in the difficulty of exploring and quantifying the power of noisy quantum states. In this work, we focus on the complexity of weakly noisy states, which we define as the size of the shortest quantum circuit required to prepare the noisy state. To analyze this complexity, we first establish a general relationship between circuit depth, noise model, and purity. Based on this necessary condition, we propose a quantum machine learning (QML) algorithm that exploits the intrinsic-connection property of structured quantum neural networks. The proposed QML algorithm enables efficiently predicting the complexity of weakly noisy states from measurement results, representing a paradigm shift in our ability to characterize the power of noisy quantum computation. ## I Introduction The concept of quantum complexity has deep connections to high-energy physics, quantum many-body systems, and black-hole physics [1; 2; 3; 4; 5]. Within the context of quantum computation, the complexity of a quantum state characterizes the necessary quantum resources required to prepare a state that reveals the solution to a problem. We determine the complexity of an \(n\)-qubit quantum state by finding the minimum number of gates required to prepare it from the initial \(\ket{0^{n}}\bra{0^{n}}\) state [6; 7]. Brown and Susskind's conjecture, which is supported by the complexity geometry theory proposed by Nielsen et al. [8; 9], suggests that random quantum state complexity grows linearly before saturating at an exponential size [10; 11]. Recent works have connected quantum state complexity to unitary \(t\)-designs [12; 13] and the dimension of semi-algebraic sets [7], further supporting Brown and Susskind's conjecture. This implies that most random quantum circuits are challenging to compress and optimize. Furthermore, given the technical limitations of near-term quantum devices [14; 15], structured quantum circuits with a brickwall-type layout of gates [14; 7; 15] are preferred. It is believed that structured random quantum states are classically difficult to generate under the anticoncentration property in the average-case scenario [6; 16; 17; 18; 19; 20; 21; 22], implying that structured quantum circuits could provide quantum advantages for industrial applications, such as approximating the ground-state energy of physically relevant Hamiltonians [23; 24; 25; 26; 27; 28], solving combinatorial optimization problems [29; 30; 31], and addressing quantum chemistry problems [32; 33; 34; 35; 36; 37; 38; 39; 40]. Despite the remarkable progress in quantum hardware during the noisy intermediate-scale quantum (NISQ) era, limitations on gate fidelities still remain a hindrance to realizing the full computational potential of such devices [14; 15; 16; 41]. When an \(\Omega(\log n)\)-depth noisy circuit is subjected to a depolarizing channel, the resulting state is situated in the classically efficient 'Gibbs ball', as demonstrated in [42]. Furthermore, increasing the depth of the noisy circuit leads to a state that is close to the maximally mixed state in terms of trace distance, thus forfeiting any potential quantum advantages [43]. 
These results suggest that noisy channels may weaken the complexity of a quantum state. The effect of noise channels has practical implications in various fields, including physics research. For example, in the context of the anti-de-Sitter-space/conformal field theory (AdS/CFT) correspondence, the "complexity equals volume" conjecture [4] suggests that the boundary state of the correspondence has a complexity proportional to the volume behind the event horizon of a black hole in the bulk geometry. However, when measuring the boundary state, the interaction with the surrounding environment inevitably introduces a noise signal, causing the pure state to become a noisy state. This raises an essential problem: _how can the quantum state complexity of an unknown weakly noisy quantum state be predicted with only a few copies available?_ Here, we investigate this problem. The accumulation of noise caused by the noisy quantum gates significantly increases the challenge in estimating the dimension of the accessible space and constructing a Clifford high-rank point of the noisy state [7]. Prior studies have examined the convergence rate of random noisy states to the maximally mixed state, but these studies require the Haar-random assumption and a specific noise model [17; 19; 43; 44]. Therefore, it is difficult to apply these findings directly to predict the complexity of a particular unknown noisy state. More recently, the overlap tomography approach has been utilized to measure quantum state complexity [45], but the efficiency in generating unstructured "proof states" remains unclear. On the other hand, the expressive power [46; 47; 48], optimization [49; 50; 51; 52], and applications [53; 54; 55; 56] of quantum machine learning (QML) have been intensively studied. Furthermore, recent pioneering experiments conducted on quantum computer processors [66; 67] have shown significant quantum computational advantages, particularly in predicting linear properties of the density matrix. Hence, it is plausible to expect that QML can offer a novel perspective on predicting the quantum state complexity of noisy states. Due to the signal-weakening nature of noise channels, it is expected that a weakly noisy state can be approximated by a much shallower quantum circuit. Following this intuition, this article presents a quantum learning approach for investigating the Structured Complexity Prediction (SCP) problem. Specifically, the problem pertains to determining whether a quantum state \(\rho_{\text{un}}\), generated from a noisy quantum circuit with at most \(\mathcal{O}(\log n)\) depth and a weak noise strength, can be approximated by a shallower \(R\)-depth structured quantum neural network (QNN) state with \(\epsilon\) additive error in trace distance. The QNN represents a periodic structured quantum circuit where each two-qubit gate is determined by a classical tunable parameter (as illustrated in Fig. 1 (b)). The constraints on the noisy circuit depth and noise strength ensure a weakly noisy state, denoted as \(\Phi_{1}\) in Fig. 1 (c). In this paper, our first contribution is characterizing the relationship between circuit depth, general noise models, and purity, as claimed in Theorem 1. This necessary condition is suitable for a wide range of noise models and accurately defines a boundary for using a pure state to approximate a weakly noisy state. 
To connect the weakly noisy state complexity with the QNN, we leverage a significant property of QNNs, termed the intrinsic-connection property (Theorem 2). This property asserts that the behavior of an observable \(M\), generated by a quantum circuit in \(\mathcal{U}_{\mathcal{A}}(R)\), can be emulated by linear combinations of \(\text{poly}(R,n)\) random QNN states from \(\mathcal{U}_{\mathcal{A}}(R)\). Here, \(\mathcal{A}\) denotes a QNN architecture and \(\mathcal{U}_{\mathcal{A}}(R)\) represents a set of \(R\)-depth QNN circuits induced by \(\mathcal{A}\). Based on this property, we propose transforming the complexity prediction task into the recognition of a noisy state from a set of QNN states by utilizing a tunable measurement operator constrained to \(\mathcal{U}_{\mathcal{A}}(R)\). Finally, Theorem 3 indicates that a quantum machine learning algorithm can be efficiently trained to recognize the weakly noisy state from QNN states, and it provides a complexity prediction (the minimum \(R\)). Importantly, our proposed QML approach only requires the shadow tomography [68] (classical representation) of the related QNN states, instead of a fault-tolerant quantum computer. In addition to its implications for complexity analysis, the SCP problem and the QML method have broad applications. One potential application is the generation of mixed state approximations with a large overlap with the target noisy state (see Corollary 1). Moreover, recent work [69] suggests that an \(\Omega(\log n)\)-depth Haar random circuit with a local depolarizing channel can be efficiently simulated by a classical computer. However, the complexity of such circuits with depth \(o(\text{poly}\log(n))\) remains unclear. The SCP problem serves as a valuable tool in linking weakly noisy states to shallower, noiseless quantum states, and can thus be used to characterize potential quantum advantages in noisy state sampling problems [16; 17; 18; 19; 43]. Finally, the proposed QML method is expected to be applicable to classifying quantum topological phases of matter [70; 71; 72; 73; 74; 53; 75]. Topological phases can be distinguished by their ground state complexity [76], and the shadow tomography [68; 70] method provides a mixed state approximation to the ground state. Thus, the QML method can estimate the ground state complexity from the classical shadow and provide classification results based on this complexity. This paper is organized as follows. In Sec. II we introduce the setup and definitions that will be used in this work. In Sec. III, we present the main result. We provide the quantum learning algorithm in Sec. IV, and the corresponding theoretical performance analysis is provided in Sec. V. We finally discuss applications in Sec. VI. ## II Theoretical background To clearly demonstrate the motivations and contributions of this paper, we review the relevant theoretical background on quantum state complexity, noisy quantum states, and quantum machine learning, and then provide a brief discussion of hardness results in quantum state learning problems. ### Quantum State Complexity Here, we consider the quantum state complexity of an \(n\)-qubit quantum pure state \(\ket{\psi}\). The complexity of a quantum state is the minimal circuit size required to implement a measurement operator that suffices to distinguish \(\ket{\psi}\!\bra{\psi}\) from the maximally mixed state \(I_{n}/2^{n}\). 
Since any pure state \(\ket{\psi}\!\bra{\psi}\) satisfies \[d(\psi,\rho_{0})=\frac{1}{2}\left\||\psi\rangle\langle\psi|-I_{n}/2^{n}\right\|_{1}=1-\frac{1}{2^{n}}, \tag{1}\] which is achieved by the optimal measurement strategy \(M=\ket{\psi}\!\bra{\psi}\), this trace distance can be used to quantify the quantum state complexity. Let \(\mathbb{H}_{2^{n}}\) denote the space of \(2^{n}\times 2^{n}\) Hermitian matrices, and for fixed \(r\in\mathbb{N}\), we consider a class of measurement operators \(M_{r}(2^{n})\subset\mathbb{H}_{2^{n}}\) that can be constructed by at most \(r\)\(2\)-local gates. The maximal bias achievable with such a restricted set of measurements is defined as: \[\beta_{QS}(r,\ket{\psi})=\max_{M\in M_{r}(2^{n})}\left|\mathrm{Tr}\left(M(\ket{\psi}\!\bra{\psi}-I_{n}/2^{n})\right)\right|, \tag{2}\] where \(\ket{\psi}=V|0^{n}\rangle\) for some \(V\in\mathrm{SU}(2^{n})\). Note that the above metric \(\beta_{QS}(r,\ket{\psi})\) degenerates to \(1-2^{-n}\) when \(r\to\infty\). For example, if the quantum state \(\ket{\psi}\) can be easily prepared by a quantum computer (such as a computational basis state), \(\beta_{QS}(r,\ket{\psi})\) converges to \(1-2^{-n}\) rapidly as \(r\) increases, while in general \(\ket{\psi}\) requires an exponentially large \(r\) to converge to \(1-2^{-n}\). Using this property, the quantum state complexity can be defined as follows: Figure 1: (a) An efficient quantum-to-classical representation conversion method that leverages the classical shadow of a noisy state. By measuring only a few copies of the state, it can construct a classical representation that enables the prediction of the state’s properties with a rigorous performance guarantee. (b) A QNN model with \(R\) layers, where each layer corresponds to a causal slice, and the overall architecture follows the design of \(\mathcal{A}\). (c) Illustration of the relationship between an \(\tilde{R}\)-depth weakly noisy quantum state \(\Phi\) and its nearest pure state. All pure states reside on the surface of the \(n\)-qubit Bloch sphere, with the maximally mixed state \(\frac{I_{n}}{2^{n}}\) located at the center of the sphere. In the regime where \(\tilde{R}<\mathcal{O}(\mathrm{poly}\log n)\) and for a small noise strength \(p\), the weakly noisy state \(\Phi=\Phi_{1}\) is located near the surface of the sphere, and it could be approximated by a pure state \(U\ket{0^{n}}\bra{0^{n}}U^{\dagger}\) (refer to Theorem 1 and Lemma 7). However, if \(\Phi=\Phi_{2}\), it is located near the maximally mixed state, and it cannot be approximated by any pure state. 
A quantum circuit induced by the architecture \(\mathcal{A}\) is denoted as \(U_{\mathcal{A}}\)._ We define the single layer \(U(\vec{\theta})\) to be the unitary constructed from parametrized two-qubit gates with parameters \(\vec{\theta}\in[0,2\pi)^{L}\), where \(L\) is the number of two-qubit gates. Here, the number of parameters is set to correspond to Definition 4, although in practice a two-qubit gate is described by more than one parameter. **Definition 3** (Causal Slice).: _The circuit \(U_{\mathcal{A}}\) is a causal slice if there exists a qubit-reachable path between any two qubit-pairs, where the path only passes through vertices (gates) in the architecture \(\mathcal{A}\)._ The common causal slice layer \(U_{r}\) can take the form of either a brickwork or staircase pattern (an example of the latter is shown in Fig. 1(b)), where the minimal causal slice contains \(n-1\) gates linking the nearest qubits on a 1D qubit chain. However, due to imperfections in quantum hardware devices, two causal slices are separated by a quantum channel \(\mathcal{E}\). We assume that the noise channel \(\mathcal{E}\) is both gate-independent and time-invariant, and describe the output of the noisy quantum circuit using the quantum channel \(\Phi_{p,\tilde{R}}(\cdot)=\bigcirc_{r=1}^{\tilde{R}}\mathcal{E}\circ\mathcal{U}_{r}(\cdot)\), where \(\mathcal{U}_{r}(\rho)=U_{r}\rho U_{r}^{\dagger}\). In this paper, we consider \(\mathcal{E}\) to represent a general noise channel, such as the local-depolarizing channel, the global-depolarizing channel, the bit-flip channel, and other common noise models. **Definition 4** (Noisy Quantum State).: _We assume that the noise in the quantum device is modeled by a gate-independent noise channel \(\mathcal{E}\) with strength \(p\). Let \(\mathcal{U}\) be a causal slice with \(L\) parameters, and let \(\mathcal{E}\circ\mathcal{U}\) be the representation of a noisy gate. We define the \(\tilde{R}\)-depth noisy quantum state with noise strength \(p\) as_ \[\Phi_{p,\tilde{R}}(|0^{n}\rangle\langle 0^{n}|,\vec{\theta})=\mathcal{E}\circ\mathcal{U}_{\tilde{R}}(\vec{\theta}_{\tilde{R}})\circ\mathcal{E}\circ\mathcal{U}_{\tilde{R}-1}(\vec{\theta}_{\tilde{R}-1})\circ\cdots\circ\mathcal{E}\circ\mathcal{U}_{1}(\vec{\theta}_{1})(|0^{n}\rangle\langle 0^{n}|). \tag{4}\] _Here, the ideal circuit is given by \(\mathcal{U}_{\tilde{R}}(\vec{\theta}_{\tilde{R}})\circ\mathcal{U}_{\tilde{R}-1}(\vec{\theta}_{\tilde{R}-1})\circ\mathcal{U}_{1}(\vec{\theta}_{1})\), with parameters \(\vec{\theta}\in[0,2\pi]^{L\tilde{R}}\). We use the term "weakly noisy states" to refer to noisy states \(\Phi_{p,\tilde{R}}\) with \(\tilde{R}\leq\mathcal{O}(\mathrm{poly}\log n)\) and small \(p\)._ ### Quantum Neural Network State **Definition 5** (Quantum Neural Network (QNN) State).: _Suppose \(\mathcal{U}_{\mathcal{A}}(R)\) represents an \(n\)-qubit quantum neural network set induced by an \(R\)-layer brickwork periodic architecture \(\mathcal{A}\), the quantum neural network state \(|\Psi(\vec{\alpha})\rangle=U(\vec{\alpha})|0^{\otimes n}\rangle=\prod_{r=1}^{R}U_{r}(\vec{\alpha}_{r})|0^{\otimes n}\rangle\), where each layer \(U_{r}(\vec{\alpha}_{r})\) contains \(L\) two-qubit gates and \(U(\vec{\alpha})\in\mathcal{U}_{\mathcal{A}}(R)\). 
A set of \(R\)-layer QNN states with structure \(\mathcal{A}\) is denoted as \(\mathcal{S}_{\mathrm{QNN}}(R,\mathcal{A},N)=\{|\Psi_{i}\rangle=U(\vec{\alpha}_{i})|0^{n}\rangle\}_{i=1}^{N}\)._ **Definition 6** (Quantum State Learning).: _We say a QNN model \(U(\vec{\alpha})\) can learn an unknown quantum state \(\rho_{\mathrm{un}}\) with \(\epsilon\) error if there exists a parameter \(\vec{\alpha}^{*}\) enabling_ \[R_{s}(\vec{\alpha}^{*})=\mathbb{E}_{O_{\mathbf{x}}\sim\mathcal{P}_{n}}\left|\langle 0^{\otimes n}|U^{\dagger}(\vec{\alpha}^{*})O_{\mathbf{x}}U(\vec{\alpha}^{*})|0^{\otimes n}\rangle-\mathrm{Tr}(\rho_{\mathrm{un}}O_{\mathbf{x}})\right|\leq\epsilon, \tag{5}\] _where \(\mathcal{P}_{n}\) represents the \(n\)-qubit Pauli group and \(\epsilon=1/\mathrm{poly}(n)\)._ **Remark 1**.: _Suppose the weakly noisy state \(\rho_{\mathrm{un}}\) is provided, and we wish to estimate its complexity using quantum machine learning (QML) techniques. If we can efficiently train a shallow quantum neural network (SQNN) model to satisfy a certain condition (as expressed in Eq. 5), we can use the resulting QNN state to estimate the complexity of \(\rho_{\mathrm{un}}\). However, recent research suggests that finding the optimal parameters for an SQNN may be a difficult task, and even the optimization of a QNN's parameters is known to be an NP-hard problem in the worst-case scenario [77]. Nonetheless, in this paper, we demonstrate that it is still possible to quantify the complexity of a noisy quantum state using QML and the intrinsic-connection property of SQNNs, even in the face of these challenges._ **Remark 2**.: _Note that Eq. 5 can only be used in learning a pure state or a weakly noisy state, denoted as \(\Phi_{1}\) in Fig. 1 (c)._ ## III Quantifying weakly noisy state complexity via quantum learning algorithm This section is organized as follows. First, we formally define the learning problem in Task 1: _what is the complexity of an unknown weakly noisy state?_ Next, we present our main result. Theorem 1 characterizes the relationship between circuit depth, general noise models, and approximation error. This necessary condition accurately defines a boundary for using a pure state to approximate a weakly noisy state. Based on this condition, we demonstrate that quantum machine learning can analyze the complexity of weakly noisy states by utilizing the intrinsic-connection property in QNN models. We prove this claim in Theorems 2 and 3. ### Learning Task Statement It is important to note that Definition 1 relies on unstructured quantum circuits to characterize pure state complexity. This approach allows for the use of ancillary qubits to synthesize measurement operators. However, it is generally difficult to exclude shortcuts that could improve the efficiency of a computation. As a result, deriving quantum complexity measures for weakly noisy states can be challenging without additional assumptions. Recent results on the linear growth of random quantum state complexity suggest that even structured circuits are difficult to compress, either exactly or approximately [7, 78]. Consequently, structured circuits are employed in the study and benchmarking of weakly noisy state complexity. 
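To make Definitions 4-6 concrete, the following is a minimal density-matrix sketch in NumPy/SciPy. The one-parameter two-qubit gate \(\exp(-i\theta\,X\otimes X)\) and the local depolarizing channel in its standard Kraus form are illustrative assumptions (the text only requires one parameter per gate and a gate-independent channel \(\mathcal{E}\)), and the risk \(R_{s}\) of Eq. (5) is estimated by Monte-Carlo sampling of random Pauli strings.

```python
import numpy as np
from scipy.linalg import expm

n, L = 4, 3                  # qubits; L = n - 1 gates per causal slice
d = 2**n
P1 = [np.eye(2, dtype=complex),
      np.array([[0, 1], [1, 0]], dtype=complex),
      np.array([[0, -1j], [1j, 0]]),
      np.diag([1.0, -1.0]).astype(complex)]
XX = np.kron(P1[1], P1[1])

def embed(op, i, k):
    # Operator op acting on k adjacent qubits starting at qubit i (of n).
    return np.kron(np.kron(np.eye(2**i), op), np.eye(2**(n - i - k)))

def slice_unitary(thetas):
    # A staircase causal slice (Definition 3) of gates exp(-i th X(x)X).
    U = np.eye(d, dtype=complex)
    for i, th in enumerate(thetas):
        U = embed(expm(-1j * th * XX), i, 2) @ U
    return U

def depolarize(rho, p):
    # Local depolarizing channel on every qubit (standard Kraus form):
    # E_i(rho) = (1 - 3p/4) rho + (p/4) sum_{P=X,Y,Z} P_i rho P_i.
    for i in range(n):
        rho = ((1 - 3*p/4) * rho
               + (p/4) * sum(embed(P, i, 1) @ rho @ embed(P, i, 1)
                             for P in P1[1:]))
    return rho

def noisy_state(thetas, p):
    # Definition 4: Phi_{p, R~}(|0^n><0^n|) with R~ = len(thetas) layers.
    rho = np.zeros((d, d), dtype=complex); rho[0, 0] = 1.0
    for layer in thetas:
        U = slice_unitary(layer)
        rho = depolarize(U @ rho @ U.conj().T, p)
    return rho

def risk(alphas, rho_un, rng, samples=300):
    # Monte-Carlo estimate of R_s in Eq. (5): the QNN state (Definition 5)
    # is compared with rho_un on random n-qubit Pauli observables.
    psi = np.zeros(d, dtype=complex); psi[0] = 1.0
    for layer in alphas:
        psi = slice_unitary(layer) @ psi
    tot = 0.0
    for _ in range(samples):
        O = np.array([[1.0]], dtype=complex)
        for _ in range(n):
            O = np.kron(O, P1[rng.integers(4)])
        tot += abs(psi.conj() @ O @ psi - np.trace(rho_un @ O))
    return tot / samples

rng = np.random.default_rng(0)
thetas = rng.uniform(0, 2*np.pi, size=(3, L))   # a 3-layer target circuit
rho_un = noisy_state(thetas, p=0.02)
print("purity:", np.real(np.trace(rho_un @ rho_un)))  # decays with depth and p
print("risk:  ", risk(thetas, rho_un, rng))
```

Running the sketch shows the qualitative picture behind the SCP problem: for small \(p\) the purity stays close to one and the noiseless QNN parameters yield a small but nonzero risk, reflecting the noise-induced deviation of \(\rho_{\mathrm{un}}\) from any pure QNN state.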
**Definition 7** (Limited-Structured (LS) Complexity of Noisy State).: _Given an integer \(r\) and \(\epsilon\in(0,1)\), we say a weakly noisy state \(\rho\) has \(\epsilon\)-LS complexity at most \(r\) if and only if_ \[\max_{\begin{subarray}{c}M_{r}=U|0^{n}\rangle\langle 0^{n}|U^{\dagger}\\ U\in\mathcal{U}_{\mathcal{A}}([r/L])\end{subarray}}\left|\mathrm{Tr}\left(M_{r}(\rho-I_{n}/2^{n})\right)\right|\geq 1-\frac{1}{2^{n}}-\epsilon, \tag{6}\] _which is denoted as \(C_{\epsilon}^{\lim,\mathcal{A}}(\rho)\leq r\). The notation \(L\) represents the number of gates in each layer of \(U\)._ **Definition 8** (Structured State Approximation Property, \(\mathrm{SSAP}(\mathcal{A},\epsilon)\)).: _We say an \(n\)-qubit weakly noisy quantum state \(\rho_{\mathrm{un}}\) (defined as Eq. 4) satisfies \(\mathrm{SSAP}(\mathcal{A},\epsilon)\) if \(\rho_{\mathrm{un}}\) can be approximated by an \(R\)-depth (\(R<\log n\)) SQNN state with architecture \(\mathcal{A}\) within LS error \(\epsilon\)._ Here, we explain the pure-state boundary \(R<\log n\). If a noisy state is affected by a local depolarizing channel at depth \(\tilde{R}\geq\Omega(\log n)\), then the resulting output distribution can be efficiently simulated by a classical computer, so any possible quantum advantages are lost. This property is particularly relevant for weakly noisy states with a depth of at most \(\mathcal{O}(\log n)\); the natural weakening of the signal by the noise channel thus limits the boundary of the pure state approximation to at most \(\log n\). The \(\mathrm{SSAP}(\mathcal{A},\epsilon)\) property of a weakly noisy state \(\rho_{\mathrm{un}}\) implies the existence of a quantum circuit \(U\in\mathcal{U}_{\mathcal{A}}(R)\) such that \(\langle 0^{n}|U^{\dagger}\rho_{\mathrm{un}}U|0^{n}\rangle\geq 1-\epsilon\), or equivalently, \(\frac{1}{2}\|U|0^{n}\rangle\langle 0^{n}|U^{\dagger}-\rho_{\mathrm{un}}\|_{1}\leq\sqrt{\epsilon+2^{-n}}\). When \(\epsilon=1/n\), if sampling from \(U|0^{n}\rangle\) is classically hard, then sampling with probability \(p(i)=\langle i|\rho_{\mathrm{un}}|i\rangle\) is also hard for any classical algorithm, unless the Polynomial Hierarchy collapses [16, 17, 18, 19, 21]. Therefore, the \(\mathrm{SSAP}(\mathcal{A},\epsilon)\) property gives a non-trivial upper bound for \(C_{\epsilon}(\rho_{\mathrm{un}})\), that is, \[C_{\epsilon}(\rho_{\mathrm{un}})\leq C_{\epsilon}^{\lim,\mathcal{A}}(\rho_{\mathrm{un}})\leq\mathrm{size}(U).\] If \(C_{\epsilon}^{\lim,\mathcal{A}}(\rho_{\mathrm{un}})>L\log n\), then \(\rho_{\mathrm{un}}\) cannot be approximated by any pure state within \(\epsilon\)-LS error with architecture \(\mathcal{A}\) and depth \(R<\log n\). Here, the main task is formally defined as follows. **Task 1** (Structured Complexity Prediction (SCP(\(\mathcal{A}\)))).: _Given an architecture \(\mathcal{A}\), an \(n\)-qubit weakly noisy quantum state \(\rho_{\mathrm{un}}\) (defined as Eq. 4), and an approximation error \(\epsilon\), learn the unknown quantum state \(\rho_{\mathrm{un}}\) through a quantum machine learning algorithm \(\mathrm{QML}_{n}\) with a \(\text{poly}(n)\)-qubit ideal quantum device, such that the output satisfies:_ 1. _(Completeness/YES) If_ \(\rho_{\mathrm{un}}\) _satisfies_ \(\mathrm{SSAP}(\mathcal{A},\epsilon)\)_, then_ \(\mathrm{QML}_{n}\) _outputs the minimum_ \(R\) _with unit probability._ 2.
_(Soundness/NO)_ \(\mathrm{QML}_{n}\) _outputs NO with high probability otherwise._ **Remark 3**.: _Note that the structure of the weakly noisy circuit that generates \(\rho_{\mathrm{un}}\) is independent of the architecture \(\mathcal{A}\) in the SCP(\(\mathcal{A}\)) problem._ ### Main Results To ensure the \(\mathrm{SSAP}(\mathcal{A})\) property in our framework, it is necessary to present a condition that accounts for how far \(\mathcal{E}\) drives the noisy state away from the (generalized) Bloch sphere of pure states. **Theorem 1**.: _Suppose we are given a general noisy channel \(\mathcal{E}(\cdot)=\sum_{l=1}^{r}K_{l}(\cdot)K_{l}^{\dagger}\) and a noisy state \(\Phi_{p,\tilde{R}}(|0^{n}\rangle\langle 0^{n}|)=\bigcirc_{r=1}^{\tilde{R}}\mathcal{E}\circ\mathcal{U}_{r}(|0^{n}\rangle\langle 0^{n}|)\), where the \(\mathcal{U}_{r}\) are drawn independently from a unitary 2-design set \(\mathbb{U}\). Then for each noisy state \(\Phi_{p,\tilde{R}}(|0^{n}\rangle\langle 0^{n}|)\), there exists a corresponding \(\tilde{R}\)-depth pure state \(|\Psi_{\mathcal{U}_{1},\ldots,\mathcal{U}_{\tilde{R}}}\rangle\) such that_ \[\mathbb{E}_{\mathcal{U}_{1},\ldots,\mathcal{U}_{\tilde{R}}\sim\mathbb{U}}\langle\Psi_{\mathcal{U}_{1},\ldots,\mathcal{U}_{\tilde{R}}}|\left(\bigcirc_{r=1}^{\tilde{R}}\mathcal{E}\circ\mathcal{U}_{r}(|0^{n}\rangle\langle 0^{n}|)\right)|\Psi_{\mathcal{U}_{1},\ldots,\mathcal{U}_{\tilde{R}}}\rangle\geq\eta(\tilde{R}), \tag{7}\] _where_ \[\eta(\tilde{R})=\left(\frac{F-1}{d^{2}-1}\right)^{\tilde{R}-1}\frac{F-1}{d(d+1)}+\frac{1}{d}, \tag{8}\] \(d=2^{n}\) _and \(F=\sum_{l=1}^{r}\left|\mathrm{Tr}(K_{l})\right|^{2}\)._ A particularly noteworthy aspect of the presented theorem is its applicability to general noise models. This allows us to establish a precise relationship between the depth of a quantum circuit \(\tilde{R}\), the noise model \(\mathcal{E}\), and the quality of approximation (purity) \(\eta(\tilde{R})=1-\epsilon\) that can be achieved. In light of this, we observe that while depolarizing channels and more general Pauli channels with weak strength do lead to a decrease in \(F\), the decrease is not significant enough to preclude the possibility of approximating the noisy state with a pure state. For instance, when \(n=50\), a local depolarizing noise model \(\mathcal{E}_{i}(\cdot)=(1-p)(\cdot)+p\mathrm{Tr}_{i}(\cdot)I_{2}/d\) with \(p=10^{-3}\) gives rise to a depth bound \(\tilde{R}\leq 46\log(1/\eta)\). This finding provides further evidence for a connection between circuit entanglement and the rate at which noise spreads. We leave proof details in Appendix E.1. **Theorem 2** (Intrinsic-Connection Property in SQNN).: _Randomly select \(N\) unitaries \(\left\{U(\vec{\alpha}_{i})\right\}_{i=1}^{N}\) from the SQNN model \(\mathcal{U}_{\mathcal{A}}(R)\) to generate \(\mathcal{S}_{\mathrm{QNN}}(R,\mathcal{A},N)=\left\{U(\vec{\alpha}_{i})|0^{n}\rangle\right\}_{i=1}^{N}\), where each layer in \(U(\vec{\alpha}_{i})\) contains \(L\) variational gates.
Then for any \(n\)-qubit noisy state \(\rho\) and any observable \(M(\vec{\mathbf{x}})=U(\vec{\mathbf{x}})|0^{n}\rangle\langle 0^{n}|U(\vec{\mathbf{x}})^{\dagger}\) with \(U(\vec{\mathbf{x}})\in\mathcal{U}_{\mathcal{A}}(R)\), there exists a vector \(\vec{\mathbf{\beta}}(\vec{\mathbf{x}})\) that belongs to an \(N\)-dimensional compact set \(\mathcal{D}_{\beta}\), with \(\sum_{j=1}^{N}\vec{\mathbf{\beta}}_{j}(\vec{\mathbf{x}})=1\), such that_ \[\mathbb{E}_{\vec{\mathbf{x}}}\left|\sum_{j=1}^{N}\vec{\mathbf{\beta}}_{j}(\vec{\mathbf{x}})\langle 0^{n}|U^{\dagger}(\vec{\mathbf{\alpha}}_{j})\rho U(\vec{\mathbf{\alpha}}_{j})|0^{n}\rangle-\mathrm{Tr}\left(M(\vec{\mathbf{x}})\rho\right)\right|\leq\sqrt{\frac{LRn^{2}}{N}}. \tag{9}\] _Setting \(N=LRn^{2}/\epsilon^{2}\), the above approximation error is upper bounded by \(\epsilon\)._ The details of the proof and the explicit expression of the compact set \(\mathcal{D}_{\beta}\) can be found in Appendices E.2 and E.3. Theorem 2 shows that the observable \(M(\vec{\mathbf{x}})\) can be approximated by a linear combination of random SQNN circuits with the same depth. Using Theorem 2, we can state the following theorem. **Theorem 3**.: _Given \(\mathrm{poly}(n)\) copies of an \(n\)-qubit unknown weakly noisy state \(\rho_{\mathrm{un}}\) that is generated by a noisy quantum device with depth \(\tilde{R}=\mathcal{O}(\log n)\) (Def. 4) and a particular architecture \(\mathcal{A}\), there exists a QML algorithm with \(\mathrm{poly}(n,\tilde{R})\) quantum and classical cost which can efficiently solve the \(\mathrm{SCP}(\mathcal{A})\) problem._ The proof of Theorem 3 depends on evaluating the sample complexity of the QNN states and of the unknown noisy state, as well as the iteration complexity of training the QML model. We leave proof details in Sec. V. ## IV Quantum machine learning for \(\mathrm{SCP}\) problem _Outline of the learning algorithm._ We start by utilizing the intrinsic connection property of \(\mathcal{U}_{\mathcal{A}}\) to devise a parameterized function \(\mathcal{L}\) for distinguishing \(\rho_{\mathrm{un}}\) from SQNN states. The essential idea is that if the observable \(M\) can differentiate the weakly noisy state \(\rho_{\mathrm{un}}\) from \(\mathcal{S}_{\mathrm{QNN}}\), then \(\rho_{\mathrm{un}}\) cannot be approximated by any quantum circuit in \(\mathcal{U}_{\mathcal{A}}\). On the other hand, if \(\rho_{\mathrm{un}}\) can be approximated by some \(U\in\mathcal{U}_{\mathcal{A}}\), then it cannot be distinguished from \(\mathcal{S}_{\mathrm{QNN}}\) by \(M\). To construct the SQNN state set, we generate \(\mathcal{S}_{\mathrm{QNN}}(R,\mathcal{A},N)=\{|\Psi_{i}\rangle\}_{i=1}^{N}\) with \(R=\mathcal{O}(\mathrm{poly}(\log n))\) and \(N=\mathrm{poly}(n,R)\). We then optimize the objective function \(\mathcal{L}_{R}(\vec{\mathbf{q}},M)\) by tuning the distribution \(\vec{\mathbf{q}}=(\vec{\mathbf{q}}_{1},\dots,\vec{\mathbf{q}}_{N})\) and the observable \(M=V|0^{n}\rangle\langle 0^{n}|V^{\dagger}\) with \(V\in\mathcal{U}_{\mathcal{A}}(R)\). Although optimizing a general observable \(M\) is challenging, limiting \(M\) to \(\mathcal{U}_{\mathcal{A}}(R)\) and utilizing the intrinsic connection property allows for efficient optimization of \(M\). If \(\max_{\vec{\mathbf{q}},M}\mathcal{L}\leq\epsilon\), Lemma 1 implies that the noisy state \(\rho_{\mathrm{un}}\) can be approximated by some \(U\in\mathcal{U}_{\mathcal{A}}(R)\) within \(\epsilon\) error.
Conversely, if \(\min_{M}\mathcal{L}\) is greater than \(\epsilon+\tilde{\epsilon}\), Lemma 2 implies that \(\rho_{\mathrm{un}}\) cannot be approximated by any \(U\in\mathcal{U}_{\mathcal{A}}\) within \(\tilde{\epsilon}\) error. A QML\({}_{n}\) can be designed by combining the above approach with a binary search framework. The algorithm halts and outputs \(C_{\epsilon}^{\mathrm{lim},\mathcal{A}}(\rho_{\mathrm{un}})\leq LR\) and \(C_{\epsilon}^{\mathrm{lim},\mathcal{A}}(\rho_{\mathrm{un}})>L(R-1)\), or \(C_{\epsilon}^{\mathrm{lim},\mathcal{A}}(\rho_{\mathrm{un}})>L\log n\), after \(\mathcal{O}(\log\log n)\) iterations. ### Metric Construction We now present the technical details of our proposed quantum machine learning (QML) method. First, we randomly generate a set of \(N=\mathrm{poly}(R,n)\) quantum neural network states \(\mathcal{S}_{\mathrm{QNN}}(R,\mathcal{A},N)=\{U(\vec{\mathbf{\alpha}}_{i})|0^{n}\rangle\}_{i=1}^{N}\). We then design a variational observable based on the intrinsic-connection property stated in Theorem 2. This observable takes the form of \[M(\vec{\mathbf{\beta}})=\sum_{i=1}^{N}\beta_{i}U(\vec{\mathbf{\alpha}}_{i})|0^{n}\rangle\langle 0^{n}|U^{\dagger}(\vec{\mathbf{\alpha}}_{i}). \tag{10}\] Specifically, we use this variational observable to test whether there exists a low-depth quantum circuit \(U\sim\mathcal{U}_{\mathcal{A}}\) that can approximate the target state \(\rho_{\mathrm{un}}\). To assess the circuit complexity of \(\rho_{\mathrm{un}}\), we use the dataset \(\mathcal{S}_{\mathrm{QNN}}\) to maximize the metric \[\mathcal{L}_{R}(\vec{\mathbf{q}},\vec{\mathbf{\beta}})=\left|\sum_{i=1}^{N}\vec{\mathbf{q}}_{i}\mathrm{Tr}\left(M(\vec{\mathbf{\beta}})(|\Psi_{i}\rangle\langle\Psi_{i}|-\rho_{\mathrm{un}})\right)\right| \tag{11}\] over the set \(\mathcal{D}_{\mathbf{z}}=\{\mathbf{z}=(\vec{\mathbf{q}},\vec{\mathbf{\beta}})\}\), where \(\vec{\mathbf{q}}_{i}\in\mathbb{R}_{\geq 0}\) and \(\|\vec{\mathbf{q}}\|_{1}=1\). The \(N\)-dimensional parameter \(\vec{\mathbf{\beta}}\) is restricted to a compact set \(\mathcal{D}_{\beta}\) with \(\sum_{i=1}^{N}\vec{\mathbf{\beta}}_{i}=1\). In Appendix E.3, we provide a detailed description of an efficient method for estimating the compact set \(\mathcal{D}_{\beta}\). By the Heine-Borel theorem [79], the set \(\mathcal{D}_{\mathbf{z}}\) is compact, i.e., it is closed (contains all its limit points) and bounded. Given a specific parameter \((\vec{\mathbf{q}},\vec{\mathbf{\beta}})\), we can efficiently calculate the corresponding value of \(\mathcal{L}_{R}(\vec{\mathbf{q}},\vec{\mathbf{\beta}})\) using classical shadow techniques [83; 88; 89]. For the noisy state \(\rho_{\mathrm{un}}\) and each \(|\Psi_{i}\rangle\in\mathcal{S}_{\mathrm{QNN}}\), the corresponding shadow tomography is obtained by repeatedly performing a simple measurement procedure: apply a random unitary to rotate the state and perform a Pauli Z-basis measurement. On receiving the \(n\)-bit measurement outcomes, classical descriptions (stabilizers) of the target states \(\{\rho_{\mathrm{sh}}(|\Psi_{1}\rangle),...,\rho_{\mathrm{sh}}(|\Psi_{N}\rangle),\rho_{\mathrm{sh}}(\rho_{\mathrm{un}})\}\) can be efficiently stored in classical memory [84].
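As a minimal illustration of how the metric is assembled from these ingredients, the sketch below (an idealized stand-in of our own: exact statevector overlaps replace the shadow estimates of \(|\langle\Psi_{i}|\Psi_{j}\rangle|^{2}\) and \(\langle\Psi_{i}|\rho_{\mathrm{un}}|\Psi_{i}\rangle\)) evaluates Eq. 11 for a given \((\vec{\mathbf{q}},\vec{\mathbf{\beta}})\).

```python
import numpy as np

def metric_L(q, beta, psis, rho_un):
    """Idealized evaluation of Eq. 11, with exact overlaps in place of shadows."""
    # Gram matrix of fidelities |<Psi_i|Psi_j>|^2 ...
    G = np.abs(np.array([[np.vdot(pi, pj) for pj in psis] for pi in psis])) ** 2
    # ... and overlaps <Psi_i|rho_un|Psi_i>.
    w = np.array([np.vdot(p, rho_un @ p).real for p in psis])
    # L_R = | sum_i q_i Tr( M(beta) (|Psi_i><Psi_i| - rho_un) ) |,
    # with M(beta) = sum_j beta_j |Psi_j><Psi_j| and sum_i q_i = 1.
    return abs(q @ (G @ beta) - beta @ w)

rng = np.random.default_rng(0)
d, N = 4, 3                                       # 2 qubits, 3 QNN states
raw = rng.normal(size=(N, d)) + 1j * rng.normal(size=(N, d))
psis = [v / np.linalg.norm(v) for v in raw]       # random stand-ins for |Psi_i>
rho_un = np.eye(d) / d                            # maximally mixed stand-in
q = np.full(N, 1 / N); beta = np.full(N, 1 / N)
print(metric_L(q, beta, psis, rho_un))
```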
Recall the parameterized observable \(M(\vec{\mathbf{\beta}})=\sum_{i=1}^{N}\beta_{i}|\Psi_{i}\rangle\langle\Psi_{i}|\); the accumulated shadow tomography can then be used to estimate \(\left|\langle\Psi_{i}|\Psi_{j}\rangle\right|^{2}\) and \(\langle\Psi_{i}|\rho_{\mathrm{un}}|\Psi_{i}\rangle\), and finally yields \(\mathcal{L}_{R}(\vec{\mathbf{q}},\vec{\mathbf{\beta}})\). Before proposing the quantum learning algorithm, we need the following lemmas to support our method; the corresponding proof details are given in Appendices 5 and 6. **Lemma 1**.: _Consider a parameterized observable \(M(\vec{\mathbf{\beta}})\) defined as Eq. 10. If the relationship_ \[\max_{\vec{\mathbf{q}},M(\vec{\mathbf{\beta}})}\mathcal{L}_{R}(\vec{\mathbf{q}},M(\vec{\mathbf{\beta}}))\leq\epsilon \tag{12}\] _holds, then \(\rho_{\mathrm{un}}\) can be approximated by some state \(\rho=U\left|0^{n}\right\rangle\left\langle 0^{n}\right|U^{\dagger}\) such that \(U\in\mathcal{U}_{\mathcal{A}}(R)\), and \(C_{\epsilon}^{\mathrm{lim},\mathcal{A}}(\rho_{\mathrm{un}})\leq LR\), where \(C_{\epsilon}^{\mathrm{lim},\mathcal{A}}(\cdot)\) is defined in Def. 7._ **Lemma 2**.: _Let \(\mathcal{S}_{\mathrm{QNN}}(R,\mathcal{A},N)=\{|\Psi_{i}\rangle\}_{i=1}^{N}\) be a quantum neural network state set, where \(N=LRn^{2}\epsilon^{-2}\). If there exists a distribution \(\vec{\mathbf{q}}\) such that_ \[\min_{\vec{\mathbf{\beta}}}\mathcal{L}_{R}(\vec{\mathbf{q}},M(\vec{\mathbf{\beta}}))>\epsilon+\tilde{\epsilon}, \tag{13}\] _then with nearly unit probability, \(\rho_{\mathrm{un}}\) cannot be approximated by any \(U|0^{n}\rangle\langle 0^{n}|U^{\dagger}\), that is \(\langle 0^{n}|U^{\dagger}\rho_{\mathrm{un}}U|0^{n}\rangle<1-\tilde{\epsilon}\) for any \(U\in\mathcal{U}_{\mathcal{A}}(R)\)._ ```
Input : Noisy quantum state \(\rho_{\mathrm{un}}\), a quantum state set \(\mathcal{S}_{\mathrm{QNN}}(R,\mathcal{A},N)\),
        failure probability \(\delta\in(0,1)\), approximation error \(\epsilon\)
Output: True/False
1  Initialize \(\mu_{0}(\mathbf{z})=0\), \(\sigma_{0}\), and the covariance function \(k(\cdot,\cdot)\)
2  for \(t=1,2,\ldots,T\) do
3      Select \(\kappa_{t}=2N\log(t^{2}N)+2\log(t^{2}/\delta)\)
4      Choose \(\mathbf{z}^{(t)}=\arg\max_{\mathbf{z}\in\mathcal{D}_{\mathbf{z}}}\mu_{t-1}(\mathbf{z})+\sqrt{\kappa_{t}}\,\sigma_{t-1}(\mathbf{z})\)
5      Use the shadow tomography of \(\rho_{\mathrm{un}}\) and \(\mathcal{S}_{\mathrm{QNN}}\) to estimate \(\mathcal{L}_{R}(\mathbf{z}^{(t)})\)
6      Calculate \(y(\mathbf{z}^{(t)})=\mathcal{L}_{R}(\mathbf{z}^{(t)})+p_{t}\) for \(p_{t}\sim\mathcal{N}(0,1)\)
7      Update \(\mu_{t}\), \(\sigma_{t}^{2}\) as in Eq. 15
8  if \(\mathcal{L}_{R}(\mathbf{z}^{(T)})\leq\epsilon\) then
9      return True
10 else
11     return False
``` **Algorithm 1** Bayesian Maximize Subroutine, \(\text{BMaxS}(\rho_{\mathrm{un}},\mathcal{S}_{\mathrm{QNN}}(R,\mathcal{A},N),T,\epsilon)\) In the following, we show how to maximize the loss function \(\mathcal{L}_{R}(\mathbf{z})\) via Bayesian optimization on a compact set. Bayesian optimization is composed of two significant components: (\(i\)) a statistical model, in general a _Gaussian process_, which provides a posterior distribution over \(\mathcal{L}_{R}(\mathbf{z})\) conditioned on a prior distribution and a set of observations; (\(ii\)) an _acquisition function_, which determines the position of the next sample point based on the current posterior distribution over \(\mathcal{L}_{R}(\mathbf{z})\).
**Remark 4**.: _A more straightforward choice of the objective function seems to be \(\mathcal{L}(\vec{\mathbf{\beta}})=\sum_{i=1}^{N}\vec{\mathbf{\beta}}_{i}\langle\Psi_{i}|\rho_{\mathrm{un}}|\Psi_{i}\rangle\). However, under this construction, \(\max_{\vec{\mathbf{\beta}}}\mathcal{L}\geq 1-\epsilon\) may not directly imply that \(\rho_{\mathrm{un}}\) satisfies the SSAP property. In fact, in such a scenario, it is hard to utilize the intrinsic-connection property when using Eq. 6 to upper bound \(\max\mathcal{L}\)._ ### Optimization Subroutine A Gaussian process is a set of random variables, any finite subset of which forms a multivariate Gaussian distribution. For the optimization task described in Eq. 11, the random variables represent the value of the objective function \(\mathcal{L}_{R}(\mathbf{z})\) at the point \(\mathbf{z}=(\vec{\mathbf{q}},\vec{\mathbf{\beta}})\). As a distribution over \(\mathcal{L}_{R}(\mathbf{z})\), a Gaussian process is completely specified by its _mean function_ and _covariance function_ \[\begin{split}\mu(\mathbf{z})&=\mathbb{E}_{\mathbf{z}}[\mathcal{L}_{R}(\mathbf{z})]\\ k(\mathbf{z},\mathbf{z}^{\prime})&=\mathbb{E}_{\mathbf{z}}[(\mathcal{L}_{R}(\mathbf{z})-\mu(\mathbf{z}))(\mathcal{L}_{R}(\mathbf{z}^{\prime})-\mu(\mathbf{z}^{\prime}))],\end{split} \tag{14}\] and the Gaussian process is denoted as \(\mathcal{L}_{R}(\mathbf{z})\sim\mathcal{GP}(\mu(\mathbf{z}),k(\mathbf{z},\mathbf{z}^{\prime}))\). Without loss of generality, we assume that the prior mean function is \(\mu(\mathbf{z})=0\). In the \(t\)-th iteration step, suppose the observations \(\mathrm{Acc}(t)=\{(\mathbf{z}^{(1)},y(\mathbf{z}^{(1)})),\ldots,(\mathbf{z}^{(t)},y(\mathbf{z}^{(t)}))\}\) have been accumulated, where \(y(\mathbf{z}^{(i)})=\mathcal{L}_{R}(\mathbf{z}^{(i)})+\epsilon_{i}\) with Gaussian noise \(\epsilon_{i}\sim\mathcal{N}(0,\sigma_{\mathrm{noise}}^{2})\) for \(i\in[t]\); we set \(\sigma_{\mathrm{noise}}=1\) in our algorithm. Conditioned on the accumulated observations \(\mathrm{Acc}(t)\), the posterior distribution of \(\mathcal{L}_{R}(\mathbf{z})\) is a Gaussian process with _mean function_\(\mu_{t}(\mathbf{z})=\mathbb{E}_{\mathbf{z}}[\mathcal{L}_{R}(\mathbf{z})|\mathrm{Acc}(t)]\) and _covariance function_\(k_{t}(\mathbf{z},\mathbf{z}^{\prime})=\mathbb{E}_{\mathbf{z}}[(\mathcal{L}_{R}(\mathbf{z})-\mu(\mathbf{z}))(\mathcal{L}_{R}(\mathbf{z}^{\prime})-\mu(\mathbf{z}^{\prime}))|\mathrm{Acc}(t)]\), specified by \[\begin{split}\mu_{t}(\mathbf{z})&=\mathbf{k}_{t}^{\mathsf{T}}[\mathbf{K}_{t}+\sigma_{\mathrm{noise}}^{2}\mathbf{I}]^{-1}\mathbf{y}_{1:t}\\ k_{t}(\mathbf{z},\mathbf{z}^{\prime})&=k(\mathbf{z},\mathbf{z}^{\prime})-\mathbf{k}_{t}^{\mathsf{T}}[\mathbf{K}_{t}+\sigma_{\mathrm{noise}}^{2}\mathbf{I}]^{-1}\mathbf{k}_{t},\end{split} \tag{15}\] where \(\mathbf{k}_{t}=[k(\mathbf{z},\mathbf{z}^{(1)})\ \ldots\ \ k(\mathbf{z},\mathbf{z}^{(t)})]^{\mathsf{T}}\), the positive definite covariance matrix \(\mathbf{K}_{t}=[k(\mathbf{z},\mathbf{z}^{\prime})]_{\mathbf{z},\mathbf{z}^{\prime}\in\mathbf{z}_{1:t}}\) with \(\mathbf{z}_{1:t}=\{\mathbf{z}^{(1)},\ldots,\mathbf{z}^{(t)}\}\) and \(\mathbf{y}_{1:t}=[y(\mathbf{z}^{(1)}),\ldots,y(\mathbf{z}^{(t)})]^{\mathsf{T}}\). The posterior variance of \(\mathcal{L}_{R}(\mathbf{z})\) is denoted as \(\sigma_{t}^{2}(\mathbf{z})=k_{t}(\mathbf{z},\mathbf{z})\).
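The posterior update of Eq. 15, together with the acquisition rule discussed next, takes only a few lines of code. The sketch below is our own illustration; the RBF kernel, the toy objective standing in for \(\mathcal{L}_{R}(\mathbf{z})\), and the random candidate set used to maximize the acquisition function are assumptions not fixed by the paper.

```python
import numpy as np

def rbf_kernel(A, B, length=0.2):
    # k(z, z') = exp(-||z - z'||^2 / (2 length^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length ** 2))

def gp_posterior(Z_obs, y_obs, Z_query, sigma_noise=1.0):
    """Posterior mean/variance of Eq. (15), conditioned on Acc(t)."""
    K = rbf_kernel(Z_obs, Z_obs)
    Ks = rbf_kernel(Z_query, Z_obs)            # each row is k_t(z)^T
    A = np.linalg.solve(K + sigma_noise ** 2 * np.eye(len(Z_obs)),
                        np.eye(len(Z_obs)))    # [K_t + sigma^2 I]^{-1}
    mu = Ks @ A @ y_obs
    var = np.diag(rbf_kernel(Z_query, Z_query)) - np.einsum('ij,jk,ik->i', Ks, A, Ks)
    return mu, np.maximum(var, 0.0)

# One UCB acquisition step (Eq. 16) on a random candidate set.
rng = np.random.default_rng(0)
f = lambda Z: np.exp(-10 * ((Z - 0.3) ** 2).sum(-1))   # toy stand-in for L_R(z)
Z_obs = rng.random((5, 2)); y_obs = f(Z_obs) + rng.normal(0, 1e-2, 5)
cand = rng.random((500, 2))
t, N, delta = len(Z_obs) + 1, 2, 0.05
kappa = 2 * N * np.log(t ** 2 * N) + 2 * np.log(t ** 2 / delta)   # Theorem 4 schedule
mu, var = gp_posterior(Z_obs, y_obs, cand)
z_next = cand[np.argmax(mu + np.sqrt(kappa) * np.sqrt(var))]
```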
The mean function \(\mu_{t}(\mathbf{z})\) tracks the expected value of \(\mathcal{L}_{R}(\mathbf{z})\), while the covariance \(k_{t}\) quantifies the deviation of \(\mu_{t}(\mathbf{z})\) from the true value of \(\mathcal{L}_{R}(\mathbf{z})\). The prediction is obtained by conditioning the prior Gaussian process on the observations, which returns a posterior Gaussian process; using the Sherman-Morrison-Woodbury formula [85], the predictive distribution can be explicitly expressed as Eq. 15. In the \(t\)-th iteration of Bayesian optimization, the acquisition function \(\mathcal{A}(\mathbf{z})\) learns from the accumulated observations \(\mathrm{Acc}(t-1)\) and leads the search to the next point \(\mathbf{z}^{(t)}\), which is expected to gradually converge to the optimal parameters of \(\mathcal{L}_{R}(\mathbf{z})\). This procedure is achieved via maximizing \(\mathcal{A}(\mathbf{z})\). In detail, the design of the acquisition function should balance _exploration_ (exploring domains where \(\mathcal{L}_{R}(\mathbf{z})\) has high uncertainty) and _exploitation_ (exploring domains where \(\mathcal{L}_{R}(\mathbf{z})\) is expected to have a large value). The upper confidence bound is a widely used acquisition function, which is defined as \[\mathcal{A}_{\mathrm{UCB}}(\mathbf{z})=\mu_{t-1}(\mathbf{z})+\sqrt{\kappa_{t}}\sigma_{t-1}(\mathbf{z}), \tag{16}\] and the next point \(\mathbf{z}^{(t)}\) is decided by \(\mathbf{z}^{(t)}=\arg\max_{\mathbf{z}\in\mathcal{D}_{\mathrm{domain}}}\mathcal{A}_{\mathrm{UCB}}(\mathbf{z})\). Here, \(\kappa_{t}\) is a significant hyperparameter, and a suitable \(\kappa_{t}\) may lead \(\mathbf{z}^{(t)}\) to converge rapidly to \(\mathbf{z}_{\mathrm{opt}}\). In Theorem 4, a specific \(\kappa_{t}\) is provided. Details for maximizing \(\mathcal{L}_{R}(\mathbf{z})\) are shown in Alg. 1. ```
Input : Noisy quantum state \(\rho_{\mathrm{un}}\) and \(\epsilon\)
Output: The minimum depth \(R\) (\(R<\log n\)) such that
        \(\mathrm{BMaxS}(\rho_{\mathrm{un}},\mathcal{S}_{\mathrm{QNN}}(R,\mathcal{A},N),T=N^{2}n^{k},\epsilon)=\) True and
        \(\mathrm{BMaxS}(\rho_{\mathrm{un}},\mathcal{S}_{\mathrm{QNN}}(R-1,\mathcal{A},N),T=N^{2}n^{k},\epsilon)=\) False;
        or False if such \(R\) does not exist
1  Initialize \(R\leftarrow 1\), \(s\leftarrow\log(n)\)
2  while \(s-R>1\) do
3      Set \(N=LRn^{2}\epsilon^{-2}\) and \(T=N^{2}n^{k}\) such that \(k\log(n)<n^{k/2-1}\epsilon\) for large \(n\)
4      if \(\mathrm{BMaxS}(\rho_{\mathrm{un}},\mathcal{S}_{\mathrm{QNN}}([(R+s)/2],\mathcal{A},N),T,\epsilon)=\) True then
5          \(s\leftarrow(R+s)/2\)
6      else
7          \(R\leftarrow(R+s)/2\)
8  if \(\mathrm{BMaxS}(\rho_{\mathrm{un}},\mathcal{S}_{\mathrm{QNN}}(R,\mathcal{A},N),T,\epsilon)=\) True then
9      return \(C_{\epsilon}^{\lim,\mathcal{A}}(\rho_{\mathrm{un}})\leq LR\)
10 else
11     return \(C_{\epsilon}^{\lim,\mathcal{A}}(\rho_{\mathrm{un}})>L\log n\) (False)
``` **Algorithm 2** Quantum Machine Learning for Limited-Structured Complexity approximation ### Quantum Learning Algorithm for Limited-Structured Complexity Analysis Alg. 1 can efficiently verify whether \(\mathcal{L}_{R}(\mathbf{z})\) satisfies the condition of Lemma 1. Specifically, Alg. 1 outputs _True_ if there exists some \(U\in\mathcal{U}_{\mathcal{A}}(R)\) that can approximate \(\rho_{\text{un}}\) within \(\epsilon\) additive error, and otherwise outputs _False_. As a result,
Alg. 1 can be leveraged to design a \(\text{QML}_{n}\) for verifying whether the weakly noisy state \(\rho_{\text{un}}\) satisfies the \(\text{SSAP}(\mathcal{A})\) property, and thus can efficiently solve Task 1. Since \(\mathcal{U}_{\mathcal{A}}(R-1)\) is strictly contained in \(\mathcal{U}_{\mathcal{A}}(R)\)[7], the boolean function \[\mathcal{P}(R)=\text{BMaxS}(\rho_{\text{un}},\mathcal{S}_{\text{QNN}}(R,\mathcal{A},N),T,\epsilon)\] is a monotone predicate on \([1,\tilde{R}]\). Therefore, the \(\text{QML}_{n}\) can be designed as a binary search program in which Alg. 1 is packaged as an oracle. A monotone predicate \(\mathcal{P}\) is a boolean function defined on a totally ordered set with the property: if \(\mathcal{P}(R)=\textit{True}\), then \(\mathcal{P}(R^{\prime})=\textit{True}\) for all \(R^{\prime}\geq R\) in the domain. In our case, \(\mathcal{P}(R)\) returns _True_ at \(R\) but returns _False_ at \(R-1\) when the relationships \(\mathcal{L}_{R}(\mathbf{z}^{(T)})\leq\epsilon\) and \(\mathcal{L}_{R-1}(\mathbf{z}^{(T)})>\epsilon\) hold simultaneously. As a result, if the noisy state \(\rho_{\text{un}}\) satisfies the \(\text{SSAP}(\mathcal{A})\) property, the \(\text{QML}_{n}\) outputs the minimum \(R_{\text{min}}\in[1,\tilde{R}]\) such that \(C_{\epsilon}^{\text{lim},\mathcal{A}}(\rho_{\text{un}})\leq LR_{\text{min}}\) (_True_). Otherwise, the \(\text{QML}_{n}\) outputs \(C_{\epsilon}^{\text{lim},\mathcal{A}}(\rho_{\text{un}})>L\log n\) (_False_). Details are provided in Alg. 2. ## V Theoretical performance guarantee We will prove that a quantum ML algorithm can efficiently solve the SCP problem after learning from the unknown noisy state \(\rho_{\text{un}}\) (Theorem 3). Specifically, the complexity of the proposed quantum learning algorithm is determined by the required number of QNN samples \(N\) and the iteration complexity in Alg. 1 and Alg. 2. ### Sample complexity The sample complexity of QNN states is guaranteed by Theorem 2, where \(N=LRn^{2}/\epsilon^{2}\). To evaluate \(\mathcal{L}_{R}(\mathbf{z})\), the sample complexity of the unknown weakly noisy state as well as of the QNN states is at most \(\mathcal{O}(\log(1/\delta)/\epsilon_{1}^{2})\), where \(\epsilon_{1}=\epsilon/N\) and \(\delta\in(0,1)\) is the failure probability, by leveraging the classical shadow method [68]. ### Iteration complexity Here, we present the iteration complexity of Alg. 1 and Alg. 2. Denote the global maximizer \(\mathbf{z}^{*}=\arg\max_{\mathbf{z}\in\mathcal{D}_{\mathbf{z}}}\mathcal{L}_{R}(\mathbf{z})\); a natural performance metric of \(\text{BMaxS}(\rho_{\text{un}},\mathcal{S}_{\text{QNN}}(R,\mathcal{A},N),T,\sigma_{\text{noise}})\) is the _simple regret_ \(s_{T}\), the difference between the global maximum \(\mathcal{L}_{R}(\mathbf{z}^{*})\) and \(\mathcal{L}_{R}(\mathbf{z}^{(T)})\), that is, \(s_{T}=\mathcal{L}_{R}(\mathbf{z}^{*})-\mathcal{L}_{R}(\mathbf{z}^{(T)})\). The simple regret is non-negative and asymptotically decreases as the iteration complexity \(T\) increases. To build an explicit connection between \(s_{T}\) and \(T\), the _average regret_ \(R_{T}\) is introduced, that is, \[R_{T}=\frac{1}{T}\sum_{t=1}^{T}[\mathcal{L}_{R}(\mathbf{z}^{*})-\mathcal{L}_{R}(\mathbf{z}^{(t)})]. \tag{17}\] Note that the relationship \(s_{T}\leq R_{T}\) holds for any \(T\geq 1\). In the following, we show that \(R_{T}\) is upper bounded by \(\mathcal{O}(N\log T/\sqrt{T})\). Therefore, the simple regret \(s_{T}\leq R_{T}\to 0\) as \(T\) increases.
The following theorem derives the average regret bounds for \(\text{BMaxS}(\rho_{\text{un}},\mathcal{S}_{\text{QNN}}(R,\mathcal{A},N),T,\sigma_{\text{noise}})\). **Theorem 4**.: _Run Alg. 1 on the weakly noisy state \(\rho_{\text{un}}\) and \(\mathcal{S}_{\text{QNN}}(R,\mathcal{A},N)\). Pick the failure probability \(\delta\in(0,1)\) and let_ \[\kappa_{t}=2N\log(t^{2}N)+2\log(t^{2}/\delta) \tag{18}\] _in the \(t\)-th iteration step; then the average regret \(R_{T}\) is upper bounded by_ \[R_{T}\leq\mathcal{O}\left(\sqrt{\frac{4N^{2}\log^{2}T+2N\log T\log(\pi^{2}/(6\delta))}{T}}\right) \tag{19}\] _with success probability \(1-\delta\)._ Specifically, select an integer \(k\) such that \(k\log(n)<n^{k/2}\epsilon\) for all \(n>n_{0}\); then \(T=N^{2}n^{k}\) ensures that the simple regret \(s_{T}\) is upper bounded by \(\epsilon\), where \(N=LRn^{2}\epsilon^{-2}\) (Theorem 2). Proof details are given in Appendix E.8. Alg. 2 is essentially a binary search program on the interval \([1,\tilde{R}]\) using the oracle \(\text{BMaxS}(\rho_{\text{un}},\mathcal{S}_{\text{QNN}}(R,\mathcal{A},N),T,\epsilon)\). Therefore, Alg. 2 takes \(\mathcal{O}(\log(\tilde{R})\,T)\) iterations to answer the SCP problem. Putting it all together, the proposed quantum learning algorithm can efficiently learn from \(N=(L\tilde{R})n^{2}\epsilon^{-2}\) QNN states, and the sample complexity for the QNN states and the noisy state is at most \(N^{2}\epsilon^{-2}\log(1/\delta)\). This completes the proof of Theorem 3. ## VI Discussions The completeness answer (YES) of SCP provides an upper bound on the noisy-state complexity, which may help us understand the power of noisy quantum computation. In the following, we discuss potential applications of the studied SCP problem and QML method. ### Approximate Weakly Noisy State In the sections above, we focused only on predicting the LS complexity of Def. 7, which is inspired by the pure state complexity in [12], rather than discussing how to reproduce an unknown noisy state. Here, we point out that Lemma 1 can be used to generate a mixed-state approximation. **Corollary 1**.: _Consider using the \(n\)-qubit set \(\mathcal{S}_{\text{QNN}}(R,\mathcal{A},N)=\{|\Psi_{i}\rangle\}_{i=1}^{N}\) to generate a parameterized observable \(M(\vec{\mathbf{\beta}})\) defined as Eq. 10, where \(N=\mathrm{poly}(n,R)\). If \(\max_{\vec{\mathbf{q}},M(\vec{\mathbf{\beta}})}\mathcal{L}_{R}(\vec{\mathbf{q}},M(\vec{\mathbf{\beta}}))\leq\epsilon\) holds, and we denote \(\vec{\mathbf{q}}^{*},M(\vec{\mathbf{\beta}}^{*})=\arg\max_{\vec{\mathbf{q}},M(\vec{\mathbf{\beta}})}\mathcal{L}_{R}\), then we have \(\mathrm{Tr}(M(\vec{\mathbf{\beta}}^{*})\rho_{\text{un}})=\sum_{i=1}^{N}\vec{\mathbf{\beta}}^{*}_{i}\langle\Psi_{i}|\rho_{\text{un}}|\Psi_{i}\rangle\geq 1-\epsilon\)._ The above corollary provides a method for approximating a mixed state that has a large overlap with the weakly noisy state \(\rho_{\text{un}}\). Furthermore, the estimator \(\hat{\Phi}=M(\vec{\mathbf{\beta}}^{*})^{1/2}\rho_{\text{un}}M(\vec{\mathbf{\beta}}^{*})^{1/2}\) is an approximation to \(\rho_{\text{un}}\) such that \(\|\hat{\Phi}-\rho_{\text{un}}\|_{1}\leq 2\sqrt{\epsilon}\) by the gentle measurement result [86]. If \(M(\vec{\mathbf{\beta}}^{*})\) (a linear combination of classical shadows) is a low-rank matrix, the estimator \(\hat{\Phi}\) can be efficiently computed by dequantized algorithms [87]. We leave the proof details in Appendix E.5.
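A small numerical check of this estimator and the gentle-measurement bound is sketched below (our own illustration; the single projector standing in for \(M(\vec{\mathbf{\beta}}^{*})\) and the noise model for \(\rho_{\text{un}}\) are hypothetical choices).

```python
import numpy as np

def psd_sqrt(M):
    """Matrix square root of a PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

rng = np.random.default_rng(1)
d = 8                                              # 3 qubits
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
eps = 0.01                                         # small noise strength
rho_un = (1 - eps) * np.outer(psi, psi.conj()) + eps * np.eye(d) / d

M = np.outer(psi, psi.conj())                      # stand-in for M(beta*)
sqrtM = psd_sqrt(M)
Phi_hat = sqrtM @ rho_un @ sqrtM                   # \hat{Phi} = M^{1/2} rho M^{1/2}

# ||Phi_hat - rho_un||_1 <= 2 sqrt(1 - Tr(M rho_un)), the gentle-measurement bound.
tr_dist = np.abs(np.linalg.eigvalsh(Phi_hat - rho_un)).sum()
overlap = np.trace(M @ rho_un).real
print(tr_dist <= 2 * np.sqrt(1 - overlap))         # prints True
```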
### Characterizing Quantum Advantages Our work connects the learning algorithm to the noisy-state sampling complexity. The classical hardness results for noiseless quantum random states are clear for depth \(R=\Omega(\log n)\)[17; 19; 43; 44]; in addition, Napp et al. [88] proved that for shallow depth \(R\leq\mathcal{O}(1)\) on certain architectures, approximating the output probabilities of a noiseless quantum circuit to additive error \(2^{-n}\) is classically efficient. As a result, given an unknown noisy state \(\rho_{\text{un}}\) and running Alg. 2, if the LS-complexity satisfies \(C_{1/n}^{\text{lim},\mathcal{A}}(\rho_{\text{un}})\leq\mathcal{O}(1)\), then \(\rho_{\text{un}}\) can be efficiently simulated by a classical computer [88; 89]. On the other hand, recent work indicated that an \(\Omega(\log(n))\)-depth random circuit with local depolarizing noise may be efficiently simulated by a classical computer [69]; however, the classical hardness of noisy quantum states remains unclear at depths between constant and \(o(\log n)\)[43]. Our result works for noisy states with \(\tilde{R}\leq\mathcal{O}(\text{poly}\log n)\), so the proposed QML is expected to be a proxy for connecting an \(\tilde{R}=o(\log n)\)-depth noisy circuit to an ideal shallower quantum circuit, which may shed light on the noisy-state sampling problem at sublogarithmic depth. ## VII Conclusion In this paper, we investigate the capability of QML algorithms in predicting the complexity of weakly noisy quantum states, and then provide a circuit implementation for approximating such a weakly noisy state. Our proposed QML method exploits the intrinsic-connection property of SQNNs to build a learning model \(\mathcal{L}(\mathbf{z})\). The maximum value of \(\mathcal{L}(\mathbf{z})\) indicates whether the noisy state can be distinguished from the SQNN states, and reveals the limited-structured complexity. It is worth noting that optimizing a variational quantum circuit is NP-hard in the worst-case scenario [77]. However, the intrinsic-connection property allows us to construct \(\mathcal{L}(\mathbf{z})\) by linearly combining measurement results with polynomial quantum and classical costs. This enables us to train the QML model by tuning the combination coefficients on a compact domain. Moreover, we emphasize that the Bayesian optimization algorithm presented in Alg. 1 is not the only option. Other optimization algorithms may also work with similar iteration counts. This highlights the universality of the intrinsic-connection property in combination with optimization subroutines. The predicted complexity of the quantum state can be used to verify the existence of quantum advantages and classify quantum phases of matter. Furthermore, the intrinsic-connection property also suggests that shallow-depth QNN models are classically simulable, provided the data pairs \(\{(\mathbf{\vec{\alpha}}_{i},y_{i})\}_{i=1}^{N}\), where \(\mathbf{\vec{\alpha}}_{i}\) and \(y_{i}\) represent the variational parameters in the QNN and the expectation value under a Hermitian observable, respectively. We believe that the proposed QML method can help us deeply understand the computational power of NISQ devices. This work leaves room for further research on more general quantum state complexity problems. For example, our QML can provide an approximation to the limited-structured noisy state complexity \(C_{\epsilon}^{\text{lim},\mathcal{A}}(\rho_{\text{un}})\), and indirectly predict the noisy state complexity \(C_{\epsilon}(\rho_{\text{un}})\).
Whether there exists a learning approach that directly predicts \(C_{\epsilon}(\rho_{\text{un}})\) deserves further investigation. Additionally, the intrinsic-connection property indicates that representing a large-depth SQNN requires more training data than a shallower SQNN model. This observation motivates the exploration of expressivity through the intrinsic-connection property. Furthermore, while QML cannot solve NP-hard problems exactly, the intrinsic-connection property enables searching for nearly optimal solutions within a limited structure. This may stimulate further study of QNN and VQE methods. ###### Acknowledgements. We are grateful for the valuable suggestions provided by Jens Eisert and for the insightful discussions with Philippe Faist and Haihan Wu. This work is supported by the China Scholarship Council (Grant No. 202006470011), and the National Natural Science Foundation of China (Grants No. 12175003).
2309.05294
Non-invertible duality defect and non-commutative fusion algebra
We study non-invertible duality symmetries by gauging a diagonal subgroup of a non-anomalous U(1) $\times$ U(1) global symmetry. In particular, we employ the half-space gauging to $c=2$ bosonic torus conformal field theory (CFT) in two dimensions and pure U(1) $\times$ U(1) gauge theory in four dimensions. In $c=2$ bosonic torus CFT, we show that the non-invertible symmetry obtained from the diagonal gauging becomes emergent on an irrational CFT point. We also calculate the fusion rules concerning the duality defect. We find out that the fusion algebra is non-commutative. We also obtain a similar result in pure U(1) $\times$ U(1) gauge theory in four dimensions.
Yuta Nagoya, Soichiro Shimamori
2023-09-11T08:25:31Z
http://arxiv.org/abs/2309.05294v3
# Non-invertible duality defect and non-commutative fusion algebra ###### Abstract We study non-invertible duality symmetries by gauging a diagonal subgroup of a non-anomalous U(1)\(\times\)U(1) global symmetry. In particular, we employ the half-space gauging to \(c=2\) bosonic torus conformal field theory (CFT) in two dimensions and pure U(1)\(\times\)U(1) gauge theory in four dimensions. In \(c=2\) bosonic torus CFT, we show that the non-invertible symmetry obtained from the diagonal gauging becomes emergent on an _irrational_ CFT point. We also calculate the fusion rules concerning the duality defect. We find that the fusion algebra is _non-commutative_. We also obtain a similar result in pure U(1)\(\times\)U(1) gauge theory in four dimensions. ###### Contents * 1 Introduction and summary * 2 Non-invertible symmetry from half-space gauging \((\mathbb{Z}_{2N^{\prime}}^{[q]})_{\text{diag}}\) * 2.1 Gauging \((\mathbb{Z}_{2N^{\prime}}^{[q]})_{\text{diag}}\) * 2.2 Rotation and rescaling * 2.3 Duality * 3 Example in two dimensions: \(c=2\) bosonic torus CFT * 3.1 Self-duality condition * 3.2 Duality defect * 3.3 Non-commutative fusion algebra * 4 Example in four dimensions: pure U(1)\(\times\)U(1) gauge theory * 5 Conclusion and outlook * A Derivations of the selected fusion algebras in section 3.3 ## 1 Introduction and summary Background: Global symmetry has always been a pivotal concept in the analysis of quantum field theories (QFTs). One of the most prominent and successful applications of global symmetries is the 't Hooft anomaly matching [1], which aids in our comprehension of strongly coupled systems. Toward giving further insights into the non-perturbative dynamics of QFTs, the notion of global symmetry has been generalized in [2]. There, it has been revealed that a global symmetry is associated with the existence of a _topological_ defect, and the symmetry transformation can be realized as a boundary condition on the topological defect. Although various types of generalized global symmetries have been considered so far, the non-invertible symmetry has gained significant attention above all.1 Unlike ordinary symmetries, a non-invertible symmetry has no inverse operation; hence the resulting fusion algebra forms a fusion category rather than a group [29; 30; 31; 32; 33].
In recent years, numerous non-invertible symmetries have been discovered, offering new predictions for the dynamics of QFTs, e.g., constraints on renormalization group flows and on realistic QFTs, across diverse dimensions [34-96]. Half-space gauging: In these developments, the half-space gauging plays a crucial role in systematically constructing the non-invertible duality defects [45; 46].
To consider half-space gauging, we first split the spacetime manifold \(X\) into the left and right regions separated by a co-dimension one interface, as depicted in Figure 1. Figure 1: Pictorial representation of the half-space gauging. We gauge a non-anomalous discrete global symmetry \(H\) of a theory \(\mathcal{T}\) only in half of the spacetime and impose the Dirichlet boundary condition on the \(H\) gauge field at the interface. Then, for some special cases, the theory becomes invariant under gauging \(H\): \(\mathcal{T}/H\cong\mathcal{T}\), and the interface becomes a (topological) non-invertible defect \(\mathcal{N}\). Here, let us briefly summarize the non-invertible symmetries constructed from the half-space gauging in the \(c=1\) compact boson model [45, section 4.1]; \[\frac{R^{2}}{4\pi}\int_{X_{2}}d\phi\,\wedge\,\star d\phi\, \tag{1}\] where \(X_{2}\) is a two-dimensional orientable manifold, and \(\phi\) is the compact boson with the periodicity \(2\pi\). In this case, we gauge the discrete shift symmetry \(\mathbb{Z}_{N}\subset\)U(1)\({}^{\rm shift}\) only in the right region. By using T-duality, we can achieve \(\mathcal{T}/\mathbb{Z}_{N}\cong\mathcal{T}\) only if we tune the compactification radius such that \(R=\sqrt{N}\), which describes a rational conformal field theory (RCFT). Notably, the non-invertible duality defect \(\mathcal{N}\) can be expressed by the following action; \[\mathcal{N}\ :\ {\rm i}\frac{N}{2\pi}\int_{x=0}\phi_{\rm L}d\phi_{\rm R}\, \tag{2}\] where \(\phi_{\rm L}\) and \(\phi_{\rm R}\) are the compact boson fields that live in the left and right regions, respectively. The fusion algebra concerning the non-invertible duality defect \(\mathcal{N}\) and the \(\mathbb{Z}_{N}\) shift symmetry generator \(\eta\) are given by the following Tambara-Yamagami category [97]; \[\mathcal{N}\times\mathcal{N} =\mathcal{C}\, \tag{3}\] \[\eta\times\mathcal{N} =\mathcal{N}\times\eta=\mathcal{N}\,\] \[\eta^{N} =1\,\] where \(\mathcal{C}=1+\eta+\eta^{2}+\cdots+\eta^{N-1}\) is the projection operator of the \(\mathbb{Z}_{N}\) shift symmetry up to normalization. In summary, in the \(c=1\) compact boson CFT, the non-invertible symmetry obtained by the half-space gauging becomes emergent at \(R=\sqrt{N}\), namely the RCFT point, and the fusion algebra is given by the Tambara-Yamagami category (3).
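As a quick consistency check of (3), the following sketch (our own illustration, not from the original paper) encodes the Tambara-Yamagami fusion ring on the basis \(\{1,\eta,\ldots,\eta^{N-1},\mathcal{N}\}\) through its structure constants, and verifies that it is commutative and that \(\eta^{N}=1\). As discussed below, the analogous commutativity check fails for the dressed algebra (14) obtained from the diagonal gauging, where \(\widehat{\mathcal{D}}\times\widehat{\mathcal{C}}\) and \(\widehat{\mathcal{C}}\times\widehat{\mathcal{D}}\) carry different index assignments.

```python
import numpy as np

N = 4                      # Z_N shift symmetry
dim = N + 1                # basis: eta^0, ..., eta^{N-1}, duality defect N

def fuse(a, b):
    """Fusion of basis elements a x b as a coefficient vector (Eq. 3)."""
    out = np.zeros(dim)
    if a < N and b < N:                # eta^a x eta^b = eta^{(a+b) mod N}
        out[(a + b) % N] = 1
    elif a == N and b == N:            # N x N = C = 1 + eta + ... + eta^{N-1}
        out[:N] = 1
    else:                              # eta^a x N = N x eta^a = N
        out[N] = 1
    return out

# Commutativity: a x b == b x a for all basis pairs.
assert all(np.array_equal(fuse(a, b), fuse(b, a))
           for a in range(dim) for b in range(dim))

# eta^N = 1: fusing eta with itself N times returns the identity.
vec = np.zeros(dim); vec[0] = 1
for _ in range(N):
    vec = sum(c * fuse(1, k) for k, c in enumerate(vec))
assert vec[0] == 1 and vec[1:].sum() == 0
print("Tambara-Yamagami fusion algebra of Z_N is commutative")
```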
We also apply the half-space gauging to the pure U(1)\(\times\)U(1) gauge theory in four dimensions; \[\frac{1}{4\pi}\int_{M_{4}}\,\mathcal{G}_{IJ}\,dA^{I}\wedge\star dA^{J}\qquad,\qquad I,J=1\,,2\, \tag{1.5}\] and investigate the non-invertible structures of this theory. In the remainder of the Introduction, we present a concise summary of our work. Summary:The \(c=2\) bosonic torus CFT has the zero-form shift-symmetry \(\text{U}(1)^{\text{shift}}_{1}\times\text{U}(1)^{\text{shift}}_{2}\). Its charged operator is the vertex operator \(e^{\text{i}\vec{n}\cdot\vec{\phi}}\) which is characterized by two integers: \(\vec{n}\in\mathbb{Z}\times\mathbb{Z}\). Under \(\text{U}(1)^{\text{shift}}_{1}\times\text{U}(1)^{\text{shift}}_{2}\), the vertex operator is transformed in the following way; \[\text{U}(1)^{\text{shift}}_{1}\times\text{U}(1)^{\text{shift}}_{2}\ :\ e^{\text{i}\vec{n}\cdot\vec{\phi}}\mapsto e^{\text{i}\vec{\theta}\cdot\vec{n }}\,e^{\text{i}\vec{n}\cdot\vec{\phi}}\qquad,\qquad\theta^{1},\theta^{2}\in[0, 2\pi). \tag{1.6}\] As a discrete subgroup of the shift symmetry to be gauged, we choose the diagonal subgroup \((\mathbb{Z}^{[0]}_{2N^{\prime}})_{\text{diag}}\), whose generator is specified by \((\,e^{\text{i}\frac{1}{N^{\prime}}},e^{\text{i}\frac{1}{N^{\prime}}}\,)\). As a result of the gauging \((\mathbb{Z}^{[0]}_{2N^{\prime}})_{\text{diag}}\), the charge lattice of the original theory \(\mathbb{Z}\times\mathbb{Z}\) is reduced to its sublattice \(\Lambda_{2N^{\prime}}\) defined by; \[\begin{split}\Lambda_{2N^{\prime}}&\equiv\{\vec{n }\in\mathbb{Z}\times\mathbb{Z}\mid n_{1}+n_{2}=0\mod 2N^{\prime}\}\\ &=\text{Span}(\vec{\ell}_{1},\vec{\ell}_{2})\,\end{split} \tag{1.7}\] where the charge lattice \(\Lambda_{2N^{\prime}}\) is spanned by the two orthogonal vectors \(\vec{\ell}_{1}\) and \(\vec{\ell}_{2}\) (see the upper right lattice in Figure 2.); \[\vec{\ell}_{1}=\begin{pmatrix}N^{\prime}\\ N^{\prime}\end{pmatrix}\qquad,\qquad\vec{\ell}_{2}=\begin{pmatrix}-1\\ 1\end{pmatrix}. \tag{1.8}\] or later convenience, we put these two basis vectors \(\vec{\ell}_{1}\) and \(\vec{\ell}_{2}\) into the matrix \(K\) defined by;3 Footnote 3: Throughout this paper, we choose the charge matrix \(K\) such that \(\det K=+2N^{\prime}\). \[K\equiv(\vec{\ell}_{1},\vec{\ell}_{2})=\begin{pmatrix}N^{\prime}&-1\\ N^{\prime}&1\end{pmatrix}\, \tag{9}\] which clearly carries an information of the charge lattice \(\Lambda_{2N^{\prime}}\). In order for the theory to be invariant under the diagonal gauging \((\mathbb{Z}_{2N^{\prime}}^{[0]})_{\text{diag}}\), we must perform the rotation and rescaling on the charge lattice \(\Lambda_{2N^{\prime}}\) and bring it back to the original one \(\mathbb{Z}\times\mathbb{Z}\). The charge lattice transition under these operations is depicted in Figure 2. Note that the rotation is a peculiar operation to the \(c=2\) bosonic torus CFT. Moreover, by using the T-duality, we can perfectly restore the original theory \(\mathcal{T}/(\mathbb{Z}_{2N^{\prime}}^{[0]})_{\text{diag}}\cong\mathcal{T}\), if the kinetic matrix \(G_{IJ}\) is tuned to satisfy the Figure 2: The transition of the charged lattice corresponding to the gauging \((\mathbb{Z}_{4}^{[0]})_{\text{diag}}\). The horizontal/vertical axis denotes the U(1)\({}_{1}^{\text{shift}}\)/ U(1)\({}_{2}^{\text{shift}}\) charge of the vertex operator \(e^{\text{i}\vec{n}\cdot\vec{\phi}}\), respectively. The red points in each diagram mean the properly quantized charges and the green realm shows the unit cell. 
\[G=K^{\rm T}G^{-1}K. \tag{10}\] The solution \(G^{*}_{IJ}\) to the self-duality condition (10) is given by \[G^{*}=\sqrt{-\frac{N}{D}}\begin{pmatrix}2K_{11}&K_{12}+K_{21}\\ K_{12}+K_{21}&2K_{22}\end{pmatrix}\quad,\quad D\equiv(K_{12}+K_{21})^{2}-4K_{11}K_{22}\, \tag{11}\] and we can show that the solution \(G^{*}_{IJ}\) corresponds to a complex multiplication (CM) point4 [98], which belongs to a wider class containing the \(c=2\) bosonic torus RCFTs. Footnote 4: Throughout this paper, we speak of a CM point when either the complex structure modulus \(\tau\) _or_ the complexified Kähler modulus \(\rho\) (see (3.2) for their definitions) belongs to an imaginary quadratic field. If these two moduli are elements of the _same_ imaginary quadratic field, the CM point is enhanced to an RCFT. We also prove that the CM point can be promoted to the RCFT if and only if the charge matrix \(K\) is symmetric; \[K\ :\ {\rm symmetric}\quad\Longleftrightarrow\quad{\rm RCFT}. \tag{12}\] Hence, since the charge matrix \(K\) in the explicit formula (9) is not symmetric, we conclude that the non-invertible symmetry constructed from the half-space gauging of the diagonal subgroup becomes emergent on an _irrational_ CFT. Furthermore, we also show that the non-invertible symmetry defect \(\mathcal{D}\) associated with the gauging \((\mathbb{Z}_{2N^{\prime}}^{[0]})_{\rm diag}\) can be put into the following Lagrangian form; \[\mathcal{D}\ :\ \frac{{\rm i}}{2\pi}\int_{x=0}K_{IJ}\phi_{\rm L}^{I}\,d\phi_{\rm R}^{J}\, \tag{13}\] and derive the fusion algebra. We find that the resulting fusion algebra is _infinitely_ generated and _non-commutative_. We also discover a closed fusion subalgebra. To see this, we must put the various global symmetry generators on the duality defect \(\mathcal{D}\), and define the _dressed_ duality defect \(\widehat{\mathcal{D}}_{s_{1},s_{2}}\)\((s_{1},s_{2}=0,1,\cdots,2N^{\prime}-1)\). (See section 3.3 for the definition.) Thereby, the projection operator should also be replaced by the dressed one \(\widehat{\mathcal{C}}_{s_{1},s_{2}}\).
As a result of this dressing, the fusion algebra concerning \(\widehat{\mathcal{D}}_{s_{1},s_{2}}\,,\,\widehat{\mathcal{C}}_{s_{1},s_{2}}\,,\) and the \((\mathbb{Z}_{2N^{\prime}}^{[0]})_{\rm diag}\) shift symmetry generator \(\eta_{\vec{p}}\) can be summarized as follows; Non-commutative fusion subalgebra at the irrational CFT point \[\begin{split}&\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{D}}_{s_{3},s_{4}}=\widehat{\mathcal{C}}_{s_{2}+s_{3},s_{1}+s_{4}}\,\\ &\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\eta_{\vec{p}}=\eta_{\vec{p}}\times\widehat{\mathcal{D}}_{s_{1},s_{2}}=\widehat{\mathcal{D}}_{s_{1},s_{2}}\,\\ &\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{C}}_{s_{3},s_{4}}=2N^{\prime}\,\widehat{\mathcal{D}}_{s_{1}+s_{3},s_{2}+s_{4}}\,\\ &\widehat{\mathcal{C}}_{s_{1},s_{2}}\times\widehat{\mathcal{D}}_{s_{3},s_{4}}=2N^{\prime}\,\widehat{\mathcal{D}}_{s_{2}+s_{3},s_{1}+s_{4}}\,\\ &\widehat{\mathcal{C}}_{s_{1},s_{2}}\times\widehat{\mathcal{C}}_{s_{3},s_{4}}=2N^{\prime}\,\widehat{\mathcal{C}}_{s_{1}+s_{3},s_{2}+s_{4}}\,\\ &\eta_{\vec{p}}^{2N^{\prime}}=1\.\end{split}\] (14) We also consider the half-space gauging with respect to the product group \(\mathbb{Z}_{N_{1}}\times\mathbb{Z}_{N_{2}}\), instead of the diagonal one. In this case, the non-invertible symmetry arises on the RCFT point, and the fusion algebra is given by the standard Tambara-Yamagami category (3), since the \(c=2\) bosonic torus CFT reduces to two copies of the \(c=1\) compact boson CFT. We emphasize that this is perfectly consistent with the promoting condition (12), since the charge matrix \(K\) is diagonal. Our main results described above are summarized in Table 1. \begin{table} \begin{tabular}{c c c c} \hline \hline Gauging group & Charge matrix \(K\) & Emergent point & Fusion algebra \\ \hline Diagonal group & \(K^{\mathrm{T}}\neq K\) & Irrational CFT & Non-commutative \\ \((\mathbb{Z}_{2N^{\prime}}^{[0]})_{\mathrm{diag}}\) & & & (14) \\ \hline Product group & \(K^{\mathrm{T}}=K\) & RCFT & Tambara-Yamagami \\ \(\mathbb{Z}_{N_{1}}^{[0]}\times\mathbb{Z}_{N_{2}}^{[0]}\) & & & (3) \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the main results in \(c=2\) bosonic torus CFT. Finally, we apply the half-space gauging and explore the non-invertible symmetries in the pure \(\mathrm{U}(1)\times\mathrm{U}(1)\) gauge theory in four dimensions. This theory has the \(\mathrm{U}(1)^{\mathrm{ele}}_{1}\times\mathrm{U}(1)^{\mathrm{ele}}_{2}\) electric one-form symmetry, whose charged object is the Wilson loop. In a similar manner to the two-dimensional case, we construct the non-invertible symmetries from gauging the diagonal subgroup \((\mathbb{Z}_{N}^{[1]})_{\mathrm{diag}}\subset\mathrm{U}(1)^{\mathrm{ele}}_{1}\times\mathrm{U}(1)^{\mathrm{ele}}_{2}\). By utilizing the electric-magnetic duality transformation, we find the special gauge couplings at which the non-invertible symmetries appear. As in two dimensions, we construct the duality defect associated with the diagonal gauging and calculate the fusion rules concerning the duality defect. The resulting fusion algebra is again _infinitely_ generated and _non-commutative_. We also find a closed fusion subalgebra closely resembling (14). It remains an open question how to interpret the obtained non-commutative fusion algebra within the framework of higher categories [33]. The rest of the paper is organized as follows. In section 2, we describe our method to construct non-invertible symmetries from the half-space gauging in arbitrary even dimensions. In particular, we give a detailed explanation of each step from the diagonal gauging, to the rotation and the rescaling of the charge lattice, to the duality. In section 3, we discuss the non-invertible symmetries in \(c=2\) bosonic torus CFT. In section 3.1, we derive the self-duality condition (10), and show that the solution corresponds to the CM point. Also, we derive the condition (12) for promoting the CM point to the RCFT.
In section 3.2, we construct the duality defect action (13), and describe various aspects of the duality defect, e.g., the boundary condition on the defect, its topological property, and the orientation reversal. In section 3.3, we elucidate the precise definition of the dressed duality defect and discuss the fusion algebra. In section 4, we consider the pure U(1)\(\times\)U(1) gauge theory in four dimensions and explain that the non-invertible symmetries from the half-space gauging of the diagonal subgroup can be constructed in a very similar manner to the two-dimensional case. Furthermore, we show that the resulting fusion algebra is also non-commutative, and end by raising an open question on our fusion algebra. In section 5, we briefly summarize this paper and discuss future directions. In appendix A, we derive some selected fusion rules skipped in the main text. ## 2 Non-invertible symmetry from half-space gauging \((\mathbb{Z}_{2N^{\prime}}^{[q]})_{\bf diag}\) In this section, we describe the general method to construct the non-invertible symmetry of the theory \(\mathcal{T}_{g}\), whose collective couplings are symbolically denoted by \(g\). We assume that the global symmetry contains a non-anomalous \(q\)-form symmetry \(\text{U}(1)_{1}^{[q]}\times\text{U}(1)_{2}^{[q]}\) with \(q=\frac{d-2}{2}\), whose \(q\)-dimensional charged object is denoted by \(V_{\vec{n}}^{[q]}\). The charged operator \(V_{\vec{n}}\) transforms as; \[\text{U}(1)_{1}^{[q]}\times\text{U}(1)_{2}^{[q]}:\quad V_{\vec{n}}^{[q]}\mapsto e^{\text{i}\vec{\theta}\cdot\vec{n}}\,V_{\vec{n}}^{[q]}\,, \tag{1}\] where \(\vec{\theta}\equiv(\theta^{1},\theta^{2})\) are rotational angles. Importantly, in order for the \(2\pi\) rotation to be trivial, the \(\text{U}(1)_{1}^{[q]}\times\text{U}(1)_{2}^{[q]}\) charge \(\vec{n}\) must be quantized to be integers; \[\vec{n}\in\mathbb{Z}\times\mathbb{Z}. \tag{2}\] We refer to the set of properly quantized charges of \(V_{\vec{n}}^{[q]}\) as the _charge lattice_. In this sense, the original theory \(\mathcal{T}_{g}\) has the charge lattice \(\mathbb{Z}\times\mathbb{Z}\). (See Table 2 for the examples treated in this paper.) \begin{table} \begin{tabular}{c c c c c} \hline \hline & Theory & \(\text{U}(1)_{1}^{[q]}\times\text{U}(1)_{2}^{[q]}\) & Charged ops. \(V_{\vec{n}}^{[q]}\) & Duality \\ \hline \(d=2\) (\(q=0\)) & \(c=2\) bosonic torus CFT (section 3) & shift symmetry & vertex operator \(e^{\text{i}\vec{n}\cdot\vec{\phi}}\) & T-duality \\ \hline \(d=4\) (\(q=1\)) & U(1)\(\times\)U(1) gauge theory (section 4) & electric one-form symmetry & Wilson loop & electric-magnetic duality \\ \hline \hline \end{tabular} \end{table} Table 2: Examples treated in this paper and the corresponding notations of section 2. Detailed explanations for the two-dimensional and four-dimensional examples are deferred to sections 3 and 4, respectively.
Our construction of the non-invertible symmetry in the theory \({\cal T}_{g}\) can be schematically summarized by the following diagram;

\[{\cal T}_{g}\ \xrightarrow{\text{\tiny{Gauging }}(\mathbb{Z}_{2N^{\prime}}^{[q]})_{\text{\tiny{diag}}}}\ {\cal T}_{g}/(\mathbb{Z}_{2N^{\prime}}^{[q]})_{\text{\tiny{diag}}}\ \xrightarrow{\text{\tiny{${\cal R}_{-\theta}$ and ${\cal M}$}}}\ {\cal T}_{g^{\prime}}\ \xrightarrow{\text{\tiny{Duality}}}\ \widehat{\cal T}_{\widehat{g}^{\prime}}\cong{\cal T}_{g}. \tag{2.3}\]

In particular, the transition of the charge lattice at each step is shown below;

\[\mathbb{Z}\times\mathbb{Z}\ \xrightarrow{\text{\tiny{Gauging }}(\mathbb{Z}_{2N^{\prime}}^{[q]})_{\text{\tiny{diag}}}}\ \Lambda_{2N^{\prime}}\ \xrightarrow{\text{\tiny{${\cal R}_{-\theta}$ and ${\cal M}$}}}\ {\cal M}{\cal R}_{-\theta}\,\Lambda_{2N^{\prime}}\cong\mathbb{Z}\times\mathbb{Z}\ \xrightarrow{\text{\tiny{Duality}}}\ \widehat{\mathbb{Z}\times\mathbb{Z}}\cong\mathbb{Z}\times\mathbb{Z}. \tag{2.4}\]

In the rest of this section, we provide a detailed explanation of each step in (2.3) and (2.4).

### Gauging \((\mathbb{Z}_{2N^{\prime}}^{[q]})_{\text{\tiny{diag}}}\)

The story begins with gauging the diagonal discrete subgroup \((\mathbb{Z}_{2N^{\prime}}^{[q]})_{\text{\tiny{diag}}}\subset\text{U}(1)_{1}^{[q]}\times\text{U}(1)_{2}^{[q]}\). Here, the diagonal group \((\mathbb{Z}_{2N^{\prime}}^{[q]})_{\text{\tiny{diag}}}\) is generated by \((\,e^{\,\mathrm{i}\frac{\pi}{N^{\prime}}},e^{\,\mathrm{i}\frac{\pi}{N^{\prime}}}\,)\). From (2.1), the charged operator \(V_{\vec{n}}^{[q]}\) transforms under \((\mathbb{Z}_{2N^{\prime}}^{[q]})_{\text{\tiny{diag}}}\) as follows;

\[(\mathbb{Z}_{2N^{\prime}}^{[q]})_{\text{\tiny{diag}}}:\ V_{\vec{n}}^{[q]}\mapsto e^{\,\mathrm{i}\frac{\pi}{N^{\prime}}\vec{p}\cdot\vec{n}}\,V_{\vec{n}}^{[q]}\qquad,\qquad\vec{p}=(1,1)^{\text{\tiny{T}}}. \tag{2.5}\]

Figure 3: Picture of the charge lattice \(\Lambda_{2N^{\prime}}\). The elements of the charge lattice \(\Lambda_{2N^{\prime}}\) are depicted by red points, with the origin denoted by \(O\). The unit cell is shown as the green region, and its orthogonal basis vectors \(\vec{\ell}_{1}\) and \(\vec{\ell}_{2}\) are represented by two blue arrows. For later convenience, we also introduce the angle \(\theta\) between the first axis and the vector \(\vec{\ell}_{1}\).

Therefore, as a consequence of the diagonal gauging, the charge lattice of the original theory \(\mathbb{Z}\times\mathbb{Z}\) is projected onto its sublattice \(\Lambda_{2N^{\prime}}\) defined by;

\[\Lambda_{2N^{\prime}}\equiv\{\vec{n}\in\mathbb{Z}\times\mathbb{Z}\mid n_{1}+n_{2}=0\mod 2N^{\prime}\}\,, \tag{2.6}\]

which is depicted in Figure 3. The new charge lattice \(\Lambda_{2N^{\prime}}\) is spanned by the two orthogonal vectors \(\vec{\ell}_{1}\) and \(\vec{\ell}_{2}\);

\[\vec{\ell}_{1}=\begin{pmatrix}N^{\prime}\\ N^{\prime}\end{pmatrix}\qquad,\qquad\vec{\ell}_{2}=\begin{pmatrix}-1\\ 1\end{pmatrix}\,, \tag{2.7}\]

and we define the charge matrix \(K\) as follows;

\[K\equiv(\vec{\ell}_{1},\vec{\ell}_{2})=\begin{pmatrix}N^{\prime}&-1\\ N^{\prime}&1\end{pmatrix}. \tag{2.8}\]

### Rotation and rescaling

As a result of the diagonal gauging, the charge lattice \(\Lambda_{2N^{\prime}}\) clearly differs from the original one \(\mathbb{Z}\times\mathbb{Z}\). Therefore, in order to construct a symmetry, the charge lattice \(\Lambda_{2N^{\prime}}\) must be restored to the original one \(\mathbb{Z}\times\mathbb{Z}\). To achieve this, two operations are needed; see the numerical check just below before we describe them.
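The lattice data introduced above are easy to verify numerically. The following Python sketch is our own illustration (all variable names are ours, and the value \(N^{\prime}=3\) is a sample choice, not from the paper): it checks the defining condition (2.6) for integer combinations of \(\vec{\ell}_{1}\) and \(\vec{\ell}_{2}\), the unit-cell area \(\det K=2N^{\prime}\), and the factorization \(K=R_{\theta}M\) that appears below as (2.12).

```python
import numpy as np

Np = 3  # sample value of N' (our choice for illustration)
l1 = np.array([Np, Np])   # basis vector \ell_1 of (2.7)
l2 = np.array([-1, 1])    # basis vector \ell_2 of (2.7)
K = np.column_stack([l1, l2])  # charge matrix K of (2.8)

# Every lattice point a*l1 + b*l2 obeys n1 + n2 = 0 mod 2N', i.e. eq. (2.6).
for a in range(-4, 5):
    for b in range(-4, 5):
        n = a * l1 + b * l2
        assert (n[0] + n[1]) % (2 * Np) == 0

# The basis vectors are orthogonal, and the unit-cell area is det K = 2N'.
assert l1 @ l2 == 0
assert round(np.linalg.det(K)) == 2 * Np

# Factorization K = R_theta M with theta = pi/4, cf. (2.12) below.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
M = np.diag([np.linalg.norm(l1), np.linalg.norm(l2)])
assert np.allclose(R @ M, K)
print("charge-lattice data verified for N' =", Np)
```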
Firstly, the charge lattice \(\Lambda_{2N^{\prime}}\) must be rotated by the angle \(-\theta=-\pi/4\). Secondly, the rotated charge lattice must be rescaled so that its grid scale becomes one. We denote these operations by \(\mathcal{R}_{-\theta}\) and \(\mathcal{M}\), respectively. Under these operations, the basis vectors \(\vec{\ell}_{1}\) and \(\vec{\ell}_{2}\) are indeed transformed to \((1,0)^{\rm T}\) and \((0,1)^{\rm T}\), respectively;

\[\begin{split}\mathcal{MR}_{-\theta}\;:\;\vec{\ell}_{1}\xrightarrow{\mathcal{R}_{-\theta}}&R_{-\theta}\,\vec{\ell}_{1}=(\ell_{1},0)^{\rm T}\xrightarrow{\mathcal{M}}\ (M^{-1}R_{-\theta})\,\vec{\ell}_{1}=(1,0)^{\rm T}\,,\\ \mathcal{MR}_{-\theta}\;:\;\vec{\ell}_{2}\xrightarrow{\mathcal{R}_{-\theta}}&R_{-\theta}\,\vec{\ell}_{2}=(0,\ell_{2})^{\rm T}\ \xrightarrow{\mathcal{M}}\ (M^{-1}R_{-\theta})\,\vec{\ell}_{2}=(0,1)^{\rm T}\,,\end{split} \tag{2.9}\]

where \(\ell_{1}\equiv|\vec{\ell}_{1}|\), \(\ell_{2}\equiv|\vec{\ell}_{2}|\), and the matrices \(R_{-\theta}\) and \(M\) are defined as follows;

\[R_{-\theta}\equiv\begin{pmatrix}\cos\theta&\sin\theta\\ -\sin\theta&\cos\theta\end{pmatrix}\qquad,\qquad M\equiv\begin{pmatrix}\ell_{1}&0\\ 0&\ell_{2}\end{pmatrix}. \tag{2.10}\]

Therefore, we have succeeded in bringing the charge lattice \(\Lambda_{2N^{\prime}}\) back to the original one \(\mathbb{Z}\times\mathbb{Z}\);

\[\mathcal{MR}_{-\theta}\,\Lambda_{2N^{\prime}}\,\cong\,\mathbb{Z}\times\mathbb{Z}. \tag{2.11}\]

We refer the reader to Figure 2, where the above operations, from gauging to rotation to rescaling, are illustrated in the case of the \(c=2\) bosonic torus CFT. Finally, we notice that the charge matrix \(K\) defined by (2.8) can be written in terms of the rotation and rescaling matrices;

\[K=R_{\,\theta}\,M. \tag{2.12}\]

### Duality

While the charge lattice indeed comes back to the original one through the above steps, the theory has not yet been restored to the original one \({\cal T}_{g}\). This is because the coupling constants \(g^{\prime}\) of the theory \({\cal T}_{g^{\prime}}\) typically differ from the original ones \(g\). We can, however, make use of the duality transformation, which maps the theory \({\cal T}_{g^{\prime}}\) to the dual one \(\widehat{{\cal T}}_{\widehat{g}^{\prime}}\) and connects the values of the couplings \(g^{\prime}\) to the dual ones \(\widehat{g}^{\,\prime}\). Hence, if we tune the original couplings \(g\) such that they satisfy the following self-duality condition;

\[\widehat{g}^{\,\prime}=g\,, \tag{2.13}\]

then the dual theory \(\widehat{{\cal T}}_{\widehat{g}^{\prime}}\) becomes equivalent to the original one \({\cal T}_{g}^{\,5}\).

Footnote 5: Here, we assume that the charge lattice \(\widehat{\mathbb{Z}\times\mathbb{Z}}\) of the dual theory \(\widehat{{\cal T}}_{\widehat{g}^{\,\prime}}\) is equivalent to the original one \(\mathbb{Z}\times\mathbb{Z}\). Indeed, all examples treated in this paper satisfy this property.

Dualities, of course, depend on the theory, e.g., T-duality for the \(c=2\) bosonic torus CFT and electric-magnetic duality for the pure U(1)\(\times\)U(1) gauge theory. Hence, we relegate the details of the self-duality conditions to the subsequent sections. Following all the above steps, we can conclude that the theory \({\cal T}_{g}\) is invariant under the diagonal gauging: \({\cal T}_{g}/(\mathbb{Z}_{2N^{\prime}}^{[q]})_{\rm diag}\cong{\cal T}_{g}\). We then anticipate some (generalized) symmetry associated with the diagonal gauging, which will later be identified with the _non-invertible_ one.

## 3 Example in two dimensions: \(c=2\) bosonic torus CFT

In this section, we explore the non-invertible symmetries in the \(c=2\) bosonic torus CFT following the method described in section 2.
This theory can be described by the following action;

\[S[\phi^{1},\phi^{2}]=\frac{1}{4\pi}\int_{X_{2}}\,G_{IJ}\,d\phi^{I}\wedge\star d\phi^{J}\qquad,\qquad I,J=1\,,2\,, \tag{3.1}\]

where \(X_{2}\) is an orientable two-dimensional manifold, and \(\phi^{I}\) is a compact boson with periodicity \(2\pi\);

\[\phi^{I}\sim\phi^{I}+2\pi. \tag{3.2}\]

Also, \(G_{IJ}\) is the real kinetic matrix, which must satisfy the following stability condition;

\[G_{11}>0\quad,\quad G_{22}>0\quad,\quad\det G>0. \tag{3.3}\]

In this paper, we consider a particular element of the T-duality group \(O(2,2,\mathbb{Z})\), which maps the kinetic matrix to its inverse;

\[\text{T-duality}\ :\ G\mapsto G^{-1}. \tag{3.4}\]

The global symmetry of the \(c=2\) bosonic torus CFT contains the shift symmetry \(\mathrm{U}(1)^{\mathrm{shift}}_{1}\times\mathrm{U}(1)^{\mathrm{shift}}_{2}\), which acts on the vertex operator \(V^{[0]}_{\vec{n}}\equiv e^{\mathrm{i}\vec{n}\cdot\vec{\phi}}\) as follows;

\[\mathrm{U}(1)^{\mathrm{shift}}_{1}\times\mathrm{U}(1)^{\mathrm{shift}}_{2}\ :\ V^{[0]}_{\vec{n}}\mapsto e^{\mathrm{i}\vec{\theta}\cdot\vec{n}}\,V^{[0]}_{\vec{n}}. \tag{3.5}\]

As a first step toward realizing our program (2.3), we gauge the diagonal subgroup \((\mathbb{Z}^{[0]}_{2N^{\prime}})_{\mathrm{diag}}\subset\mathrm{U}(1)^{\mathrm{shift}}_{1}\times\mathrm{U}(1)^{\mathrm{shift}}_{2}\). Then, the original charge lattice \(\mathbb{Z}\times\mathbb{Z}\) is reduced to the sublattice defined by (2.6);

\[\mathbb{Z}\times\mathbb{Z}\longrightarrow\Lambda_{2N^{\prime}}. \tag{3.6}\]

As explained in section 2, we can bring the charge lattice \(\Lambda_{2N^{\prime}}\) back to the original one by performing the rotation and rescaling successively. What remains is the duality transformation, and we proceed to give its details below.

### Self-duality condition

In this subsection, we first derive the kinetic matrix after the series of operations (diagonal gauging, rotation, and rescaling), which typically differs from the original one. Next, we show that the T-duality makes the diagonal gauging \((\mathbb{Z}^{[0]}_{2N^{\prime}})_{\mathrm{diag}}\) a manifest symmetry when the kinetic matrix \(G_{IJ}\) satisfies the self-duality condition;

\[G=K^{\mathrm{T}}G^{-1}K. \tag{3.7}\]

First of all, under the diagonal gauging and the rotation \(\mathcal{R}_{-\theta}\), the compact boson \(\vec{\phi}\) transforms as follows;

\[\vec{\phi}\mapsto\vec{\phi}^{\prime}=R_{-\theta}\,\vec{\phi}\,, \tag{3.8}\]

where \(R_{-\theta}\) is defined by (2.10), and the periodicity condition for \(\vec{\phi}^{\prime}\) reads;

\[\phi^{\prime 1}\sim\phi^{\prime 1}+\frac{2\pi}{\ell_{1}}\qquad,\qquad\phi^{\prime 2}\sim\phi^{\prime 2}+\frac{2\pi}{\ell_{2}}. \tag{3.9}\]

We should note that the periodicity of \(\vec{\phi}^{\prime}\) is not \(2\pi\); this corresponds to the fact that the grid scale of the charge lattice \(\Lambda_{2N^{\prime}}\) is not one. (See also the flow from the upper left lattice to the upper right one to the bottom middle one in Figure 2.) Therefore, to restore the original theory, we must make the periodicity of the compact boson \(2\pi\).
This can be done by the following rescaling transformation;

\[\vec{\phi}^{\,\prime}\mapsto\vec{\phi}^{\prime\prime}=M\,\vec{\phi}^{\prime}\,, \tag{3.10}\]

where \(M\) is the rescaling matrix defined in (2.10). We should notice that the periodicity of \(\vec{\phi}^{\prime\prime}\) is \(2\pi\); this restoration can be seen in the flow from the bottom middle lattice to the upper left one in Figure 2. In summary, the diagonal gauging \((\mathbb{Z}^{[0]}_{2N^{\prime}})_{\rm diag}\), rotation, and rescaling transform the compact boson field \(\vec{\phi}\) as follows;

\[\begin{split}\vec{\phi}\mapsto\vec{\phi}^{\prime\prime}&=MR_{-\theta}\,\vec{\phi}\\ &=K^{\rm T}\vec{\phi}\,,\end{split} \tag{3.11}\]

where we used (2.12). Since the periodicities of both compact bosons \(\vec{\phi}\) and \(\vec{\phi}^{\prime\prime}\) are \(2\pi\), the above map (3.11) can be rephrased as a transformation law for the kinetic matrix \(G_{IJ}\);

\[G\,\mapsto\,K^{-1}\,G\,(K^{\rm T})^{-1}. \tag{3.12}\]

As stressed earlier, the kinetic matrix after this series of operations takes different values from the original one. However, by making full use of the T-duality, we can bring the theory back to the original one. The T-duality transformation (3.4) indeed maps the deformed kinetic matrix \(K^{-1}G(K^{\rm T})^{-1}\) to its inverse;

\[\text{T-duality}:\,K^{-1}G(K^{\rm T})^{-1}\mapsto K^{\rm T}G^{-1}K. \tag{3.13}\]

Therefore, if we choose the kinetic matrix \(G_{IJ}\) such that it satisfies the following self-duality condition;

\[G=K^{\rm T}G^{-1}K\,, \tag{3.14}\]

the diagonal gauging \((\mathbb{Z}^{[0]}_{2N^{\prime}})_{\rm diag}\) becomes a true symmetry. In the following, we denote the solution to (3.14) by \(G^{*}\). By noticing \(\det G=2N^{\prime}\), we easily obtain the solution to the self-duality condition;

\[G^{*}=\sqrt{-\frac{2N^{\prime}}{D}}\begin{pmatrix}2K_{11}&K_{12}+K_{21}\\ K_{12}+K_{21}&2K_{22}\end{pmatrix}\,, \tag{3.15}\]

where \(D\) is defined by

\[D\equiv(K_{12}+K_{21})^{2}-4K_{11}K_{22}. \tag{3.16}\]

We should note that the kinetic matrix \(G_{IJ}\) must be real in a physical theory, hence \(D\) takes a negative value;

\[D<0. \tag{3.17}\]

Interestingly, the self-dual solution (3.15) coincides with what is known as a _complex multiplication (CM) point_, which is a more generic point than an RCFT one in the \(c=2\) bosonic torus CFT\({}^{7}\) [98]. To see this, we package the kinetic matrix into the complex structure modulus \(\tau\) defined by;

Footnote 7: We thank Justin Kaidi for plentiful discussions on this point.

\[\tau\equiv\frac{G_{12}}{G_{22}}+{\rm i}\frac{\sqrt{\det G}}{G_{22}}\,, \tag{3.18}\]

then the complex structure modulus at the self-dual point, \(\tau^{*}\), satisfies the following quadratic equation;

\[K_{22}\,(\tau^{*})^{2}-(K_{12}+K_{21})\tau^{*}+K_{11}=0. \tag{3.19}\]

Importantly, the discriminant of the above quadratic equation (3.19) is precisely the same as \(D\) defined in (3.16), and its negativity \(D<0\) ensures that \(\tau^{*}\) belongs to the imaginary quadratic number field \(\mathbb{Q}(\sqrt{D})\) [98];

\[\tau^{*}\in\mathbb{Q}(\sqrt{D}). \tag{3.20}\]

Since elliptic curves with a modular parameter \(\tau^{*}\) satisfying (3.20) are known to have complex multiplication properties, such a modulus is called a CM point. Then, the following natural question arises: when can the CM point be lifted up to an RCFT? We can show that this promotion occurs only if the charge matrix \(K\) is symmetric, namely \(K^{\rm T}=K\). The proof is as follows.
In order for this lifting to be achieved, it is sufficient to show that the complexified Kähler modulus \(\rho\) also belongs to the _same_ imaginary quadratic number field \(\mathbb{Q}(\sqrt{D})\) [98]. Now, the complexified Kähler modulus \(\rho\) is purely imaginary due to the absence of the B-field;

\[\rho\equiv{\rm i}\sqrt{\det G}\,, \tag{3.21}\]

and at the self-duality point, the modulus \(\rho\) becomes \(\rho^{*}={\rm i}\sqrt{2N^{\prime}}\). Hence, when there exist integers \(\alpha,\beta\) and \(\gamma\) such that;

\[\alpha(\rho^{*})^{2}+\beta\rho^{*}+\gamma=0\qquad\text{and}\qquad\beta^{2}-4\alpha\gamma=D\,, \tag{3.22}\]

the CM point can get promoted to an RCFT one. We first note that \(\beta=0\) because \(\rho^{*}\) is purely imaginary, which then gives \(\gamma=2N^{\prime}\alpha\). Next, we can rewrite the discriminant \(D\) given in (3.16) as follows;

\[D=(K_{12}-K_{21})^{2}-8N^{\prime}\,, \tag{3.23}\]

where the formula \(\det K=2N^{\prime}\) is used. Therefore, to realize \(\rho^{*}\in\mathbb{Q}(\sqrt{D})\), there must exist some integer \(\alpha\) such that

\[\alpha^{2}=1-\frac{(K_{12}-K_{21})^{2}}{8N^{\prime}}. \tag{3.24}\]

Since \(\alpha^{2}\) must be a positive integer, \(K_{12}\) must equal \(K_{21}\), which completes the proof. Note that we can never find a symmetric charge matrix \(K\) in the case of the diagonal gauging\({}^{8}\), as is clear from (2.8). Hence, we arrive at the following conclusion;

Footnote 8: One may notice that the charge matrix \(K\) can be made symmetric by exchanging the two basis vectors \(\vec{\ell}_{1}\) and \(\vec{\ell}_{2}\). In that case, however, we can never obtain the integer \(\alpha\) because \(\det K=-2N^{\prime}\).

A (generalized) symmetry associated with the diagonal gauging \((\mathbb{Z}_{2N^{\prime}}^{[0]})_{\rm diag}\) becomes emergent at the _irrational_ CFT point.

**Emergent \(\mathbb{Z}_{2}\) symmetry at the self-dual point.** Finally, we close this subsection by mentioning a non-trivial emergent \(\mathbb{Z}_{2}\) symmetry at the irrational CFT point. We should note that the self-duality condition (3.14) is invariant under the replacement of the charge matrix \(K\) with its transpose \(K^{\rm T}\);

\[K^{\rm T}G^{-1}K=G\qquad\Longleftrightarrow\qquad KG^{-1}K^{\rm T}=G. \tag{3.25}\]

This implies some emergent \(\mathbb{Z}_{2}\) symmetry at the self-dual point, and we find that the mapping \(K\mapsto K^{\rm T}\) can be realized by the following transformation of the compact boson field;

\[\phi^{I}\mapsto\phi^{\prime I}=\mathsf{S}_{IJ}\,\phi^{J}\qquad,\qquad\mathsf{S}=\begin{pmatrix}1&0\\ 1-N^{\prime}&-1\end{pmatrix}. \tag{3.26}\]

Indeed, the matrix \(\mathsf{S}\) satisfies the following properties;

\[\mathsf{S}^{2}=1_{2\times 2}\qquad,\qquad\mathsf{S}^{\rm T}\,G^{*}\,\mathsf{S}=G^{*}\qquad,\qquad\mathsf{S}^{\rm T}\,K\,\mathsf{S}=K^{\rm T}\,, \tag{3.27}\]

and we can easily check that the theory at the self-dual point is invariant under the \(\mathbb{Z}_{2}\) transformation (3.26). This emergent \(\mathbb{Z}_{2}\) symmetry plays a crucial role in the discussion of the fusion algebra, and we denote the topological defect associated with it by \(\mathcal{S}\).

### Duality defect

In this subsection, following the spirit of [45; 46], we derive the duality defect associated with the diagonal gauging \((\mathbb{Z}_{2N^{\prime}}^{[0]})_{\rm diag}\).
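Before constructing the defect, it is worth a quick numerical sanity check of the self-dual data from section 3.1 on which the construction rests. The sketch below is our own illustration, not from the paper; it takes the sample value \(N^{\prime}=3\), for which \(D=N^{\prime 2}-6N^{\prime}+1<0\), and verifies the self-duality condition (3.14) for \(G^{*}\) of (3.15), \(\det G^{*}=2N^{\prime}\), and the properties (3.27) of the \(\mathbb{Z}_{2}\) matrix \(\mathsf{S}\).

```python
import numpy as np

Np = 3  # sample value of N'; note D = Np**2 - 6*Np + 1 is negative for Np = 1..5
K = np.array([[Np, -1], [Np, 1]])                     # charge matrix (2.8)
D = (K[0, 1] + K[1, 0])**2 - 4 * K[0, 0] * K[1, 1]    # discriminant, eq. (3.16)
assert D < 0  # required for a real kinetic matrix, eq. (3.17)

# Self-dual kinetic matrix G* of (3.15).
G = np.sqrt(-2 * Np / D) * np.array([[2 * K[0, 0], K[0, 1] + K[1, 0]],
                                     [K[0, 1] + K[1, 0], 2 * K[1, 1]]])
assert np.allclose(K.T @ np.linalg.inv(G) @ K, G)     # self-duality (3.14)
assert np.isclose(np.linalg.det(G), 2 * Np)           # det G* = 2N'

# Emergent Z_2 matrix S of (3.26) and its properties (3.27).
S = np.array([[1, 0], [1 - Np, -1]])
assert np.allclose(S @ S, np.eye(2))   # S^2 = 1
assert np.allclose(S.T @ G @ S, G)     # S^T G* S = G*
assert np.allclose(S.T @ K @ S, K.T)   # S^T K S = K^T
print("self-dual point and emergent Z_2 data verified for N' =", Np)
```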
First of all, we divide the ambient spacetime into the left and right regions separated by a co-dimension one defect residing at \(x=0\). Then, we propose that the duality defect \(\mathcal{D}\) can be expressed by the following Lagrangian;

\[\mathcal{D}\ :\ \frac{\mathrm{i}}{2\pi}\int_{x=0}K_{IJ}\,\phi_{\rm L}^{I}\,d\phi_{\rm R}^{J}\,, \tag{3.28}\]

where \(\phi_{\rm L}\) and \(\phi_{\rm R}\) are the compact boson fields located in the bulk regions \(x<0\) and \(x>0\), respectively. We should note that the duality defect (3.28) is gauge-invariant since the charge matrix \(K\) is an integer matrix. In the following, we show explicitly that only when the bulk kinetic matrix is tuned to the self-dual one \(G^{*}\) does the duality defect \(\mathcal{D}\) correctly reflect the sequence of operations in (2.3), as seen from the boundary conditions of the left and right fields. Finally, we give some comments on the topological property of the duality defect and on its orientation reversal.

The combined system of the bulk theory and the duality defect is described by the following action (see Figure 4);

\[\frac{1}{4\pi}\int_{\rm L}G^{*}_{IJ}\,d\phi^{I}_{\rm L}\wedge\star d\phi^{J}_{\rm L}+\frac{1}{4\pi}\int_{\rm R}G^{*}_{IJ}\,d\phi^{I}_{\rm R}\wedge\star d\phi^{J}_{\rm R}+\frac{{\rm i}}{2\pi}\int_{x=0}K_{IJ}\,\phi^{I}_{\rm L}\,d\phi^{J}_{\rm R}\,, \tag{3.29}\]

and the variations of the left and right compact boson fields give rise to the following boundary conditions at \(x=0\);

\[x=0\ :\ {\rm i}\,G^{*}_{IJ}\star d\phi^{J}_{\rm L}=K_{IJ}\,d\phi^{J}_{\rm R}\,, \tag{3.30}\]
\[x=0\ :\ {\rm i}\,G^{*}_{IJ}\star d\phi^{J}_{\rm R}=K_{JI}\,d\phi^{J}_{\rm L}. \tag{3.31}\]

We can readily check the equivalence of these two conditions: we obtain the latter boundary condition (3.31) from the former one (3.30) by acting with the Hodge star \(\star\) on both sides of (3.30) and using the self-duality condition (3.14), and vice versa. Furthermore, by rewriting the matrix \(K\) in terms of the rotation and rescaling matrices, the boundary condition becomes;

\[x=0\ :\ {\rm i}\,G^{*}_{IJ}\,d\phi^{J}_{\rm R}=\star\left((MR_{-\theta})_{IJ}\,d\phi^{J}_{\rm L}\right). \tag{3.32}\]

This can be interpreted as first performing the \((\mathbb{Z}^{[0]}_{2N^{\prime}})_{\rm diag}\) gauging, then rotating the compact boson by the angle \(-\theta\), then rescaling by the matrix \(M\), and finally performing the T-duality transformation. This observation corroborates that our construction (2.3) is realized by inserting the duality defect \(\mathcal{D}\) defined by (3.28) into the spacetime.

Figure 4: Pictorial representation of the duality defect \(\mathcal{D}\).

In addition, we claim that the duality defect \(\mathcal{D}\) becomes topological when \(G=G^{*}\). Although this is clear from the viewpoint of the half-space gauging [45; 46], we provide another proof in the spirit of [53; 99]. To show this topological property, it is enough to show that the energy-momentum tensors satisfy the following matching condition;

\[x=0\ :\ n_{\mu}(T_{\rm L}^{\mu\nu}-T_{\rm R}^{\mu\nu})=0. \tag{3.33}\]

Here, \(T_{\rm L}^{\mu\nu}\) and \(T_{\rm R}^{\mu\nu}\) are the energy-momentum tensors in the left and right regions, given by

\[\begin{split}T_{\rm L}^{\mu\nu}&=\frac{1}{4\pi}G_{IJ}\,\partial_{\alpha}\phi_{\rm L}^{I}\partial_{\beta}\phi_{\rm L}^{J}\left(\frac{1}{2}\delta^{\alpha\beta}\delta^{\mu\nu}-\delta^{\mu\alpha}\delta^{\nu\beta}\right)\,,\\ T_{\rm R}^{\mu\nu}&=\frac{1}{4\pi}G_{IJ}\,\partial_{\alpha}\phi_{\rm R}^{I}\partial_{\beta}\phi_{\rm R}^{J}\left(\frac{1}{2}\delta^{\alpha\beta}\delta^{\mu\nu}-\delta^{\mu\alpha}\delta^{\nu\beta}\right)\,,\end{split} \tag{3.34}\]

respectively, and \(n^{\mu}\) is the normal vector to the duality defect \(\mathcal{D}\). We can easily prove that the matching condition (3.33) is achieved by using (3.14) and (3.30). As a result, we conclude that the duality defect \(\mathcal{D}\) is topological.

Finally, we comment on the orientation reversal of the duality defect \(\mathcal{D}\). Its orientation reversal \(\overline{\mathcal{D}}(M)\) is defined by [47; 52];

\[\overline{\mathcal{D}}(M)=\mathcal{D}(\overline{M})\,, \tag{3.35}\]

where \(M\) is the support manifold of the duality defect, namely \(x=0\), and \(\overline{M}\) denotes the orientation reversal of \(M\). The defect action of \(\overline{\mathcal{D}}\) is obtained by swapping \(\phi_{\rm L}\) with \(\phi_{\rm R}\) and flipping the overall sign stemming from the orientation reversal of \(M\);

\[\overline{\mathcal{D}}(M)\ :\ \frac{\mathrm{i}}{2\pi}\int_{x=0}K_{JI}\,\phi_{\rm L}^{I}\,d\phi_{\rm R}^{J}. \tag{3.36}\]

Note that we can also obtain \(\overline{\mathcal{D}}\) simply by replacing the charge matrix \(K\) with its transpose in the duality defect \(\mathcal{D}\). We can realize this replacement by utilizing the emergent \(\mathbb{Z}_{2}\) symmetry discussed in section 3.1, and write the orientation-reversed duality defect \(\overline{\mathcal{D}}\) in terms of \(\mathcal{D}\) and \(\mathcal{S}\);

\[\overline{\mathcal{D}}=\mathcal{S}\times\mathcal{D}\times\mathcal{S}\,. \tag{3.37}\]

We should notice that the orientation-reversed duality defect \(\overline{\mathcal{D}}\) can be interpreted as the duality defect obtained by gauging the \(\mathbb{Z}_{2N^{\prime}}^{[0]}\) symmetry generated by \(\eta_{\mathcal{S}\vec{p}}\);

\[\eta_{\mathcal{S}\vec{p}}=\mathcal{S}\times\eta_{\vec{p}}\times\mathcal{S}\,. \tag{3.38}\]

This is because, after gauging this \(\mathbb{Z}_{2N^{\prime}}^{[0]}\) symmetry, the charge matrix is given by the transposed matrix \(K^{\rm T}\).

### Non-commutative fusion algebra

In this subsection, we describe various fusion rules involving the duality defect \(\mathcal{D}\) introduced in section 3.2. Since the derivations of the fusion algebra require somewhat technical calculations, we only summarize the results here; readers interested in the detailed calculations are referred to appendix A, where the omitted derivations are presented. First of all, we consider the fusion rules between the duality defect \(\mathcal{D}\) and the \((\mathbb{Z}_{2N^{\prime}}^{[0]})_{\text{diag}}\) shift symmetry generator \(\eta_{\vec{p}}\,(\Sigma)\) defined by;

\[\eta_{\vec{p}}\,(\Sigma)\equiv\exp\left[-\frac{p^{I}}{2N^{\prime}}\int_{\Sigma}G_{IJ}\star d\phi^{J}\right]\qquad,\qquad\vec{p}\equiv(1,1)^{\text{T}}\,, \tag{3.39}\]

where \(\Sigma\) is a line parallel to the duality defect \(\mathcal{D}\).
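The asymmetry in the fusion rules derived next boils down to a small integer computation: the row combination \(p^{I}K_{IJ}\) vanishes mod \(2N^{\prime}\), while the column combination \(K_{IJ}p^{J}\) generically does not. A quick check of ours (sample values of \(N^{\prime}\), not from the paper):

```python
import numpy as np

for Np in range(1, 6):                # sample values of N'
    K = np.array([[Np, -1], [Np, 1]])
    p = np.array([1, 1])
    # p^I K_{IJ}: the term generated when eta_p approaches D from the left,
    # cf. (3.40); it is trivial since p^T K = (2N', 0) = 0 mod 2N'.
    assert np.all((p @ K) % (2 * Np) == 0)
    # K_{IJ} p^J: the term generated when eta_p approaches D from the right,
    # cf. (3.43); K p = (N'-1, N'+1) is nonzero mod 2N' for N' > 1,
    # so a winding symmetry generator survives on the left side.
    if Np > 1:
        assert np.any((K @ p) % (2 * Np) != 0)
print("p^T K = 0 mod 2N', while K p != 0 mod 2N' in general")
```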
Interestingly, unlike ordinary fusion rules of duality defects, \(\eta_{\vec{p}}\times\mathcal{D}\) and \(\mathcal{D}\times\eta_{\vec{p}}\) do not give the same result in general. If we bring the \((\mathbb{Z}_{2N^{\prime}}^{[0]})_{\text{diag}}\) shift symmetry generator \(\eta_{\vec{p}}\) closer to the duality defect \(\mathcal{D}\) from the left side, the fusion rule \(\eta_{\vec{p}}\times\mathcal{D}\) reads;

\[\eta_{\vec{p}}\times\mathcal{D}\ :\ \frac{\text{i}}{2\pi}\int_{x=0}\,K_{IJ}\,\phi_{\text{L}}^{I}\,d\phi_{\text{R}}^{J}+\frac{\text{i}p^{I}K_{IJ}}{2N^{\prime}}\int_{x=0}\,d\phi_{\text{R}}^{J}. \tag{3.40}\]

Here, we should recall \(p^{I}K_{IJ}=0\mod 2N^{\prime}\), hence the last term in (3.40) becomes trivial and can be dropped. This implies that the symmetry generator \(\eta_{\vec{p}}\) is absorbed into the duality defect \(\mathcal{D}\), and the fusion rule \(\eta_{\vec{p}}\times\mathcal{D}\) becomes as follows;

\[\eta_{\vec{p}}\times\mathcal{D}=\mathcal{D}. \tag{3.41}\]

From this, it turns out that the duality defect \(\mathcal{D}\) is non-invertible\({}^{9}\).

Footnote 9: This can be readily checked as follows. Suppose that the duality defect \(\mathcal{D}\) were invertible, i.e., \(\mathcal{D}\times\mathcal{D}^{-1}=1\); this would contradict the fusion rule derived in (3.41);
\[\mathcal{D}\times\mathcal{D}^{-1}=\eta_{\vec{p}}\times\mathcal{D}\times\mathcal{D}^{-1}=\eta_{\vec{p}}\neq 1. \tag{3.42}\]

On the other hand, if we bring the \((\mathbb{Z}_{2N^{\prime}}^{[0]})_{\text{diag}}\) shift symmetry generator \(\eta_{\vec{p}}\) to the duality defect \(\mathcal{D}\) from the right, a _non-trivial_ \(\mathbb{Z}_{2N^{\prime}}^{[0]}\) winding symmetry generator emerges on the left side of the duality defect;

\[\mathcal{D}\times\eta_{\vec{p}}\ :\ \frac{\text{i}}{2\pi}\int_{x=0}\,K_{IJ}\,\phi_{\text{L}}^{I}\,d\phi_{\text{R}}^{J}+\frac{\text{i}p^{J}K_{IJ}}{2N^{\prime}}\int_{x=0}\,d\phi_{\text{L}}^{I}\,, \tag{3.43}\]

since \(p^{J}K_{IJ}\neq 0\mod 2N^{\prime}\) in general. We denote this emergent \(\mathbb{Z}_{2N^{\prime}}^{[0]}\) winding symmetry generator by \(\widetilde{\eta}_{K\vec{p}}\); then the fusion rule \(\mathcal{D}\times\eta_{\vec{p}}\) becomes as follows;

\[\mathcal{D}\times\eta_{\vec{p}}=\widetilde{\eta}_{K\vec{p}}\times\mathcal{D}. \tag{3.44}\]

By comparing the above results (3.41) and (3.44), we conclude that the obtained fusion rules are non-commutative, namely \(\eta_{\vec{p}}\times{\cal D}\neq{\cal D}\times\eta_{\vec{p}}\).

What happens if we bring the \(\mathbb{Z}_{2N^{\prime}}^{[0]}\) winding symmetry generator \(\widetilde{\eta}_{K\vec{p}}\) to the right side of the duality defect \({\cal D}\)? The fusion rule \({\cal D}\times\widetilde{\eta}_{K\vec{p}}\) can be calculated as follows;

\[{\cal D}\times\widetilde{\eta}_{K\vec{p}}\ :\ \frac{\mathrm{i}}{2\pi}\int_{x=0}\,K_{IJ}\,\phi_{\mathrm{L}}^{I}\,d\phi_{\mathrm{R}}^{J}+\frac{(\vec{p}^{\,\mathrm{T}}\,K^{\mathrm{T}}\,K^{-1})^{I}}{2N^{\prime}}\int_{x=0}\,G_{IJ}\,\star d\phi_{\mathrm{L}}^{J}\,. \tag{3.45}\]

From (3.39), the last term in (3.45) is none other than the shift symmetry generator \(\eta_{MK\vec{p}}\);

\[\eta_{MK\vec{p}}\,(\Sigma)\equiv\exp\left[-\frac{(\vec{p}^{\,\mathrm{T}}K^{\mathrm{T}}M^{\mathrm{T}})^{I}}{2N^{\prime}}\int_{\Sigma}G_{IJ}\star d\phi^{J}\right]\qquad,\qquad M\equiv(K^{\mathrm{T}})^{-1}\,, \tag{3.46}\]

and the fusion rule \({\cal D}\times\widetilde{\eta}_{K\vec{p}}\) becomes as follows;

\[{\cal D}\times\widetilde{\eta}_{K\vec{p}}=\eta_{MK\vec{p}}\times{\cal D}. \tag{3.47}\]

We should notice that \(\eta_{MK\vec{p}}\) is the symmetry generator associated with a \(\mathbb{Z}_{(2N^{\prime})^{2}}^{[0]}\) shift symmetry. If we further bring this shift symmetry generator \(\eta_{MK\vec{p}}\) to \({\cal D}\) from the right, a \(\mathbb{Z}_{(2N^{\prime})^{2}}^{[0]}\) winding symmetry generator \(\widetilde{\eta}_{KMK\vec{p}}\) appears on the left side;

\[{\cal D}\times\eta_{MK\vec{p}}=\widetilde{\eta}_{KMK\vec{p}}\times{\cal D}. \tag{3.48}\]

We can straightforwardly continue the above discussion, and eventually obtain the following fusion rules:

\[{\cal D}\times\eta_{(MK)^{i}\,\vec{p}}=\widetilde{\eta}_{K(MK)^{i}\,\vec{p}}\times{\cal D}\quad,\quad{\cal D}\times\widetilde{\eta}_{K(MK)^{i}\,\vec{p}}=\eta_{(MK)^{i+1}\,\vec{p}}\times{\cal D}\quad,\quad i=0,1,2,\cdots\,, \tag{3.49}\]

where \(\eta_{(MK)^{i}\,\vec{p}}\) and \(\widetilde{\eta}_{K(MK)^{i}\,\vec{p}}\) are \(\mathbb{Z}_{(2N^{\prime})^{i+1}}^{[0]}\) shift and winding symmetry generators, respectively. This implies that the fusion algebra involving the non-invertible duality defect constructed from the diagonal gauging is _infinitely_ generated. Notably, this is consistent with the general property of irrational CFTs, where the number of topological defect lines is expected to be infinite.

Interestingly, we find a closed subalgebra of the infinitely generated fusion algebra described above. To see this, we first redefine the duality defect \({\cal D}\) by dressing it with the \(\mathbb{Z}_{2N^{\prime}}^{[0]}\) winding symmetry generator;

\[{\cal D}_{s}(\Sigma)\ :\ \int_{\Sigma}\,\left(\frac{\mathrm{i}}{2\pi}K_{IJ}\,\phi_{\mathrm{L}}^{I}\,d\phi_{\mathrm{R}}^{J}+\frac{\mathrm{i}s\,p^{J}K_{IJ}}{2N^{\prime}}\,d\phi_{\mathrm{L}}^{I}\right)\,, \tag{3.50}\]

which is labelled by the \(\mathbb{Z}_{2N^{\prime}}\) element \(s=0,1,\cdots,2N^{\prime}-1\). The fusion rules between the dressed duality defect \({\cal D}_{s}\) and the \((\mathbb{Z}_{2N^{\prime}}^{[0]})_{\mathrm{diag}}\) shift symmetry generator \(\eta_{\vec{p}}\) become;

\[\eta_{\vec{p}}\times{\cal D}_{s}={\cal D}_{s}\,, \tag{3.51}\]
\[{\cal D}_{s}\times\eta_{\vec{p}}={\cal D}_{s+1}. \tag{3.52}\]

However, as it stands, the fusion algebra is not closed, which can be seen from the direct calculation of \({\cal D}_{s_{1}}\times{\cal D}_{s_{2}}\);

\[{\cal D}_{s_{1}}\times{\cal D}_{s_{2}}\ :\ \frac{{\rm i}}{2\pi}\int_{x=0}(K_{IJ}\phi^{I}_{\rm L}-K_{JI}\phi^{I}_{\rm R})d\phi^{J}_{\rm M}+\frac{{\rm i}s_{2}\,p^{J}K_{IJ}}{2N^{\prime}}\int_{x=0}d\phi^{I}_{\rm M}. \tag{3.53}\]

Despite our best efforts, we cannot write the above result in a closed form using the known topological defects. Hence, a further modification of the duality defect is needed to close the fusion algebra. After some trial and error, we find that the following combination works well for the closure of the fusion algebra;\({}^{10}\)

\[\widehat{\mathcal{D}}_{s_{1},s_{2}}(\Sigma)\equiv(\eta_{\mathsf{S}\vec{p}})^{s_{1}}\times\mathcal{D}_{s_{2}}(\Sigma)\times\mathcal{S}\,. \tag{3.54}\]

Footnote 10: We note that the orientation-reversed dressed duality defect \(\overline{\widehat{\cal D}}_{s_{1},s_{2}}(M)\equiv\widehat{\cal D}_{s_{1},s_{2}}(\overline{M})\) can be expressed in terms of the dressed duality defects as \(\overline{\widehat{\cal D}}_{s_{1},s_{2}}=(\eta_{\mathsf{S}\vec{p}})^{-s_{2}}\times\mathcal{D}_{-s_{1}}\times\mathcal{S}=\widehat{\cal D}_{-s_{2},-s_{1}}\).

Accordingly, we must also dress the projection (condensation) operator as follows;

\[\begin{split}\widehat{\mathcal{C}}_{s_{1},s_{2}}(\Sigma)\equiv&\exp\left[-\frac{\mathrm{i}s_{1}p^{J}K_{IJ}}{2N^{\prime}}\int_{\Sigma}d\phi_{\mathrm{L}}^{I}\right]\\ &\times\int\mathcal{D}\varphi\exp\left[-\frac{\mathrm{i}}{2\pi}\int_{\Sigma}K_{IJ}\left(\phi_{\mathrm{L}}^{I}-\phi_{\mathrm{R}}^{I}-\frac{2\pi s_{2}}{2N^{\prime}}\mathsf{S}^{IK}p_{K}\right)d\varphi^{J}\right]\,.\end{split} \tag{3.55}\]

Then, the fusion algebra concerning \(\widehat{\mathcal{D}}_{s_{1},s_{2}}\), \(\widehat{\mathcal{C}}_{s_{1},s_{2}}\), and the \((\mathbb{Z}_{2N^{\prime}}^{[0]})_{\mathrm{diag}}\) shift symmetry generator \(\eta_{\vec{p}}\) closes as follows;

\[\begin{split}&\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{D}}_{s_{3},s_{4}}=\widehat{\mathcal{C}}_{s_{2}+s_{3},s_{1}+s_{4}}\,,\\ &\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\eta_{\vec{p}}=\eta_{\vec{p}}\times\widehat{\mathcal{D}}_{s_{1},s_{2}}=\widehat{\mathcal{D}}_{s_{1},s_{2}}\,,\\ &\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{C}}_{s_{3},s_{4}}=2N^{\prime}\,\widehat{\mathcal{D}}_{s_{1}+s_{3},s_{2}+s_{4}}\,,\\ &\widehat{\mathcal{C}}_{s_{1},s_{2}}\times\widehat{\mathcal{D}}_{s_{3},s_{4}}=2N^{\prime}\,\widehat{\mathcal{D}}_{s_{2}+s_{3},s_{1}+s_{4}}\,,\\ &\widehat{\mathcal{C}}_{s_{1},s_{2}}\times\widehat{\mathcal{C}}_{s_{3},s_{4}}=2N^{\prime}\,\widehat{\mathcal{C}}_{s_{1}+s_{3},s_{2}+s_{4}}\,,\\ &\eta_{\vec{p}}^{2N^{\prime}}=1\,.\end{split} \tag{3.56}\]

We can easily check that the above fusion algebra satisfies the associativity condition. For instance, the fusion rule \((\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{D}}_{s_{3},s_{4}})\times\widehat{\mathcal{D}}_{s_{5},s_{6}}\) becomes;

\[\begin{split}(\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{D}}_{s_{3},s_{4}})\times\widehat{\mathcal{D}}_{s_{5},s_{6}}&=\widehat{\mathcal{C}}_{s_{2}+s_{3},s_{1}+s_{4}}\times\widehat{\mathcal{D}}_{s_{5},s_{6}}\\ &=2N^{\prime}\widehat{\mathcal{D}}_{s_{1}+s_{4}+s_{5},s_{2}+s_{3}+s_{6}}\,.\end{split} \tag{3.58}\]

On the other hand, the fusion rule \(\widehat{\mathcal{D}}_{s_{1},s_{2}}\times(\widehat{\mathcal{D}}_{s_{3},s_{4}}\times\widehat{\mathcal{D}}_{s_{5},s_{6}})\) is evaluated as follows;

\[\begin{split}\widehat{\mathcal{D}}_{s_{1},s_{2}}\times(\widehat{\mathcal{D}}_{s_{3},s_{4}}\times\widehat{\mathcal{D}}_{s_{5},s_{6}})&=\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{C}}_{s_{4}+s_{5},s_{3}+s_{6}}\\ &=2N^{\prime}\widehat{\mathcal{D}}_{s_{1}+s_{4}+s_{5},s_{2}+s_{3}+s_{6}}\,.\end{split} \tag{3.59}\]

From (3.58) and (3.59), the associativity condition indeed holds;

\[(\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{D}}_{s_{3},s_{4}})\times\widehat{\mathcal{D}}_{s_{5},s_{6}}=\widehat{\mathcal{D}}_{s_{1},s_{2}}\times(\widehat{\mathcal{D}}_{s_{3},s_{4}}\times\widehat{\mathcal{D}}_{s_{5},s_{6}}). \tag{3.60}\]

For the other defects, associativity can be shown in a similar manner.

Here, it is instructive to compare the above fusion algebra (3.56) with the one obtained by gauging the product subgroup \(\mathbb{Z}_{N_{1}}\times\mathbb{Z}_{N_{2}}\subset\)U(1)\({}_{1}\times\)U(1)\({}_{2}\). In this case, the charge matrix takes the following diagonal form;

\[K=\begin{pmatrix}N_{1}&0\\ 0&N_{2}\end{pmatrix}\,, \tag{3.61}\]

and the self-dual kinetic matrix is then also diagonal;

\[G=\begin{pmatrix}N_{1}&0\\ 0&N_{2}\end{pmatrix}. \tag{3.62}\]

The above result (3.62) shows that we can split the \(c=2\) bosonic torus CFT into two \(c=1\) compact boson CFTs, whose radii are given by \(\sqrt{N_{1}}\) and \(\sqrt{N_{2}}\). By recalling the known facts about non-invertible symmetries in the \(c=1\) compact boson CFT [45; 46], we see that the non-invertible symmetry from the gauging \(\mathbb{Z}_{N_{1}}\times\mathbb{Z}_{N_{2}}\) becomes emergent at an RCFT point, and the resulting fusion algebra is given by the Tambara-Yamagami category (1.3). This observation is consistent with the fact that, in this case, all the dressing by global symmetry generators can be undone for the dressed duality defect \(\widehat{\mathcal{D}}_{s_{1},s_{2}}\), upon which the fusion algebra (3.56) reduces to the Tambara-Yamagami category.
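The closed subalgebra (3.56) can also be checked mechanically: representing \(\widehat{\mathcal{D}}_{s_{1},s_{2}}\) and \(\widehat{\mathcal{C}}_{s_{1},s_{2}}\) by labels with \(\mathbb{Z}_{2N^{\prime}}\)-valued indices and a positive integer multiplicity, one can brute-force the associativity (3.60) over all index choices in the \(\widehat{\mathcal{D}}/\widehat{\mathcal{C}}\) sector (the invertible generator \(\eta_{\vec{p}}\) acts trivially on \(\widehat{\mathcal{D}}\) and is omitted). A small Python sketch of ours:

```python
from itertools import product

Np = 2          # sample value of N' (our choice)
mod = 2 * Np    # indices live in Z_{2N'}

def fuse(x, y):
    """Fusion rules of (3.56); an element is (kind, s1, s2, coefficient)."""
    kx, a1, a2, cx = x
    ky, b1, b2, cy = y
    c = cx * cy
    if kx == 'D' and ky == 'D':    # D x D = C_{s2+s3, s1+s4}
        return ('C', (a2 + b1) % mod, (a1 + b2) % mod, c)
    if kx == 'D' and ky == 'C':    # D x C = 2N' D_{s1+s3, s2+s4}
        return ('D', (a1 + b1) % mod, (a2 + b2) % mod, mod * c)
    if kx == 'C' and ky == 'D':    # C x D = 2N' D_{s2+s3, s1+s4}
        return ('D', (a2 + b1) % mod, (a1 + b2) % mod, mod * c)
    return ('C', (a1 + b1) % mod, (a2 + b2) % mod, mod * c)  # C x C

elems = [(k, s1, s2, 1) for k in 'DC'
         for s1 in range(mod) for s2 in range(mod)]
for x, y, z in product(elems, repeat=3):
    assert fuse(fuse(x, y), z) == fuse(x, fuse(y, z))
print("associativity of (3.56) holds for 2N' =", mod)
```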
## 4 Example in four dimensions: pure U(1)\(\times\)U(1) gauge theory

In this section, we provide a four-dimensional example that has a non-invertible symmetry obtained from the diagonal gauging. In particular, we consider the pure U(1)\(\times\)U(1) gauge theory;

\[S[A^{1},A^{2}]=\frac{1}{4\pi}\int_{M_{4}}\,\mathcal{G}_{IJ}\,dA^{I}\wedge\star dA^{J}\qquad,\qquad I,J=1\,,2\,, \tag{4.1}\]

where \(A^{1}\) and \(A^{2}\) are U(1) gauge fields. Note that this model is the higher-dimensional analog of the \(c=2\) bosonic torus CFT, and has the same symmetry structure as that theory. (See Table 2.) Therefore, we can discuss the non-invertible symmetry in this model in parallel with the analysis of section 3. This theory has the electric one-form symmetry \(\mathrm{U}(1)^{\mathrm{ele}}_{1}\times\mathrm{U}(1)^{\mathrm{ele}}_{2}\) [2] and the electric-magnetic duality: \({\cal G}\mapsto{\cal G}^{-1}\).

First, we gauge the diagonal subgroup \((\mathbb{Z}_{2N^{\prime}}^{[1]})_{\rm diag}\subset\mathrm{U}(1)^{\mathrm{ele}}_{1}\times\mathrm{U}(1)^{\mathrm{ele}}_{2}\), generated by;

\[\eta_{\vec{p}}\,(\Sigma_{2})=\exp\left[-\frac{p^{I}}{2N^{\prime}}\int_{\Sigma_{2}}{\cal G}_{IJ}\star dA^{J}\right]\quad,\quad\vec{p}\equiv(1,1)\,, \tag{4.2}\]

where \(\Sigma_{2}\) is a two-dimensional closed manifold\({}^{11}\). When the kinetic matrix \({\cal G}_{IJ}\) satisfies the self-duality condition \(K^{\rm T}{\cal G}^{-1}K={\cal G}\), whose solution is given by

Footnote 11: Here, we assume that the two-dimensional manifold \(\Sigma_{2}\) is stretched in the Euclidean time direction.

\[{\cal G}^{*}=\sqrt{-\frac{2N^{\prime}}{D}}\begin{pmatrix}2K_{11}&K_{12}+K_{21}\\ K_{12}+K_{21}&2K_{22}\end{pmatrix}\quad,\quad D=(K_{12}+K_{21})^{2}-4K_{11}K_{22}\,, \tag{4.3}\]

the diagonal gauging \((\mathbb{Z}_{2N^{\prime}}^{[1]})_{\rm diag}\) becomes a non-invertible symmetry. Furthermore, the (topological) duality defect \({\cal D}\) and its orientation reversal \(\overline{\cal D}\) are obtained by the same procedure as discussed in section 2, and their defect actions can be written as follows;

\[{\cal D}\ :\ \frac{{\rm i}}{2\pi}\int_{M_{3}}K_{IJ}\,A^{I}_{\rm L}\,dA^{J}_{\rm R}\,, \tag{4.4}\]
\[\overline{\cal D}\ :\ -\frac{{\rm i}}{2\pi}\int_{M_{3}}K_{JI}\,A^{I}_{\rm L}\,dA^{J}_{\rm R}\,, \tag{4.5}\]

where \(M_{3}\) is a three-dimensional manifold, and \(A_{\rm L}\) and \(A_{\rm R}\) are the gauge fields living in the left and right regions, respectively. As in two dimensions (cf. (3.37)), we can rewrite \(\overline{\cal D}\) in the following way;

\[\overline{\cal D}={\cal S}\times{\cal D}\times{\cal S}\times{\cal U}\,, \tag{4.6}\]

where \({\cal S}\) and \({\cal U}\) are \(\mathbb{Z}_{2}\) symmetry defects which act on the gauge fields \(A^{I}\) as follows;

\[{\cal S}\ :\ A^{I}\mapsto{\sf S}_{IJ}\,A^{J}\qquad,\qquad{\cal U}\ :\ A^{I}\mapsto-A^{I}. \tag{4.7}\]

Here, the matrix \({\sf S}\) is defined in (3.26).
The resulting fusion algebra is again infinitely generated and non-commutative, and in a similar manner to the two-dimensional case, we can find a closed subalgebra by defining the dressed duality defect \(\widehat{\cal D}_{{\boldsymbol t}_{1},{\boldsymbol t}_{2}}\) as follows;

\[\widehat{\cal D}_{{\boldsymbol t}_{1},{\boldsymbol t}_{2}}(M_{3})\equiv\eta_{{\sf S}\vec{p}}\,({\boldsymbol t}_{1})\times{\cal D}_{{\boldsymbol t}_{2}}(M_{3})\times{\cal S}\,, \tag{4.8}\]

where \(\boldsymbol{t}_{1}\) and \(\boldsymbol{t}_{2}\) denote homology cycles belonging to \(H_{2}(M_{3},\mathbb{Z}_{2N^{\prime}})\), and

\[\eta_{\mathsf{S}\vec{p}}\left(\boldsymbol{t}_{1}\right)=\exp\left[-\frac{p^{K}\,\mathsf{S}_{IK}}{2N^{\prime}}\int_{\boldsymbol{t}_{1}}\mathcal{G}_{IJ}\star dA^{J}\right]\,, \tag{4.9}\]
\[\mathcal{D}_{\boldsymbol{t}_{2}}(M_{3})\equiv\exp\left[-\frac{\mathrm{i}}{2\pi}\int_{M_{3}}\,K_{IJ}\,A_{\mathrm{L}}^{I}\,dA_{\mathrm{R}}^{J}+\frac{\mathrm{i}\,p^{J}K_{IJ}}{2N^{\prime}}\,\int_{\boldsymbol{t}_{2}}dA_{\mathrm{L}}^{I}\right]. \tag{4.10}\]

We also have the orientation-reversed duality defect \(\overline{\widehat{\mathcal{D}}}_{\boldsymbol{t}_{1},\boldsymbol{t}_{2}}(M)\equiv\widehat{\mathcal{D}}_{\boldsymbol{t}_{1},\boldsymbol{t}_{2}}(\overline{M})\), which satisfies;

\[\overline{\widehat{\mathcal{D}}}_{\boldsymbol{t}_{1},\boldsymbol{t}_{2}}=\widehat{\mathcal{D}}_{-\boldsymbol{t}_{2},\boldsymbol{t}_{1}}\times\mathcal{U}=\mathcal{U}\times\widehat{\mathcal{D}}_{\boldsymbol{t}_{2},-\boldsymbol{t}_{1}}\,. \tag{4.11}\]

Accordingly, we must also dress the condensation defect [100; 101; 102; 32; 103; 104] as follows;

\[\begin{split}\widehat{\mathcal{C}}_{\boldsymbol{t}_{1},\boldsymbol{t}_{2}}(M_{3})\equiv&\exp\left[-\frac{\mathrm{i}p^{J}K_{IJ}}{2N^{\prime}}\int_{\boldsymbol{t}_{1}}dA_{\mathrm{L}}^{I}-\frac{p^{K}\mathsf{S}_{JK}}{2N^{\prime}}\int_{\boldsymbol{t}_{2}}\mathcal{G}_{JM}\star dA_{\mathrm{L}}^{M}\right]\\ &\times\int\mathcal{D}a\exp\left[-\frac{\mathrm{i}}{2\pi}\int_{M_{3}}K_{IJ}(A_{\mathrm{L}}^{I}-A_{\mathrm{R}}^{I})da^{J}\right]\,.\end{split} \tag{4.12}\]

Then, we obtain a _non-commutative_ fusion subalgebra concerning \(\widehat{\mathcal{D}}_{\boldsymbol{t}_{1},\boldsymbol{t}_{2}}\), \(\widehat{\mathcal{C}}_{\boldsymbol{t}_{1},\boldsymbol{t}_{2}}\), and the \((\mathbb{Z}_{2N^{\prime}}^{[1]})_{\mathrm{diag}}\) symmetry generator \(\eta_{\vec{p}}\) as follows;\({}^{12}\)

Footnote 12: We have checked that the obtained fusion subalgebra satisfies the associativity condition.

Figure 5: The result of the fusion rule \(\mathcal{D}\times\eta_{\vec{p}}\). Here, we imagine that the duality defect \(\mathcal{D}\) (orange plane) is sitting on this paper. In the left (right) diagram, the electric (magnetic) symmetry defect \(\eta_{\vec{p}}\) (\(\widetilde{\eta}_{K\vec{p}}\equiv\exp(\frac{\mathrm{i}\,p^{J}K_{IJ}}{2N^{\prime}}\int dA^{I})\)) lives on the front (back) side of the duality defect. Both symmetry defects act like 1-morphisms, mapping the lower duality defect \(\mathcal{D}\) to the upper one.

Non-commutative fusion subalgebra in pure U(1)\(\times\)U(1) gauge theory

\[\begin{split}&\widehat{\mathcal{D}}_{\boldsymbol{t}_{1},\boldsymbol{t}_{2}}\times\overline{\widehat{\mathcal{D}}}_{\boldsymbol{t}_{3},\boldsymbol{t}_{4}}=\widehat{\mathcal{C}}_{-\boldsymbol{t}_{2}+\boldsymbol{t}_{4},\boldsymbol{t}_{1}-\boldsymbol{t}_{3}}\,,\\ &\overline{\widehat{\mathcal{D}}}_{\boldsymbol{t}_{1},\boldsymbol{t}_{2}}\times\widehat{\mathcal{D}}_{\boldsymbol{t}_{3},\boldsymbol{t}_{4}}=\widehat{\mathcal{C}}_{-\boldsymbol{t}_{1}+\boldsymbol{t}_{3},-\boldsymbol{t}_{2}+\boldsymbol{t}_{4}}\,,\\ &\overline{\widehat{\mathcal{D}}}_{\boldsymbol{t}_{1},\boldsymbol{t}_{2}}=\widehat{\mathcal{D}}_{-\boldsymbol{t}_{2},\boldsymbol{t}_{1}}\times\mathcal{U}=\mathcal{U}\times\widehat{\mathcal{D}}_{\boldsymbol{t}_{2},-\boldsymbol{t}_{1}}\,,\\ &\widehat{\mathcal{D}}_{\boldsymbol{t}_{1},\boldsymbol{t}_{2}}\times\eta_{\vec{p}}=\eta_{\vec{p}}\times\widehat{\mathcal{D}}_{\boldsymbol{t}_{1},\boldsymbol{t}_{2}}=\widehat{\mathcal{D}}_{\boldsymbol{t}_{1},\boldsymbol{t}_{2}}\,,\\ &\widehat{\mathcal{D}}_{\boldsymbol{t}_{1},\boldsymbol{t}_{2}}\times\widehat{\mathcal{C}}_{\boldsymbol{t}_{3},\boldsymbol{t}_{4}}=\mathcal{Z}\,\widehat{\mathcal{D}}_{\boldsymbol{t}_{1}+\boldsymbol{t}_{3},\boldsymbol{t}_{2}+\boldsymbol{t}_{4}}\,,\\ &\widehat{\mathcal{C}}_{\boldsymbol{t}_{1},\boldsymbol{t}_{2}}\times\widehat{\mathcal{D}}_{\boldsymbol{t}_{3},\boldsymbol{t}_{4}}=\mathcal{Z}\,\widehat{\mathcal{D}}_{\boldsymbol{t}_{2}+\boldsymbol{t}_{3},-\boldsymbol{t}_{1}+\boldsymbol{t}_{4}}\,,\\ &\widehat{\mathcal{C}}_{\boldsymbol{t}_{1},\boldsymbol{t}_{2}}\times\widehat{\mathcal{C}}_{\boldsymbol{t}_{3},\boldsymbol{t}_{4}}=\mathcal{Z}\,\widehat{\mathcal{C}}_{\boldsymbol{t}_{1}+\boldsymbol{t}_{3},\boldsymbol{t}_{2}+\boldsymbol{t}_{4}}\,,\\ &\eta_{\vec{p}}^{2N^{\prime}}=1\,,\end{split} \tag{4.13}\]

where \(\mathcal{Z}\) is the decoupled topological field theory defined by;

\[\mathcal{Z}(M_{3})\equiv\int\mathcal{D}a\mathcal{D}b\exp\left[-\frac{\mathrm{i}}{2\pi}\int_{M_{3}}K_{IJ}\,a^{I}db^{J}\right]. \tag{4.14}\]

Here, \(a\) and \(b\) are dynamical U(1) gauge fields living on \(M_{3}\), decoupled from the bulk theory.

Finally, we give some comments on the fusion rule between the duality defect \(\mathcal{D}\) and the \((\mathbb{Z}_{2N^{\prime}}^{[1]})_{\rm diag}\) symmetry generator \(\eta_{\vec{p}}\). The fusion rule \(\mathcal{D}\times\eta_{\vec{p}}\) can be evaluated as follows;

\[\begin{split}\mathcal{D}(M_{3})\times\eta_{\vec{p}}(\Sigma_{2})&=\exp\left(-\frac{p^{I}}{2N^{\prime}}\int_{\Sigma_{2}}\mathcal{G}_{IJ}\star dA_{\rm R}^{J}-\frac{\mathrm{i}}{2\pi}\int_{M_{3}}\,K_{IJ}\,A_{\rm L}^{I}\,dA_{\rm R}^{J}\right)\\ &=\exp\left(-\frac{\mathrm{i}\,p^{J}K_{IJ}}{2N^{\prime}}\int_{\Sigma_{2}}\,dA_{\rm L}^{I}-\frac{\mathrm{i}}{2\pi}\int_{M_{3}}\,K_{IJ}\,A_{\rm L}^{I}\,dA_{\rm R}^{J}\right)\,.\end{split} \tag{4.15}\]

From the right-hand side, the electric and magnetic one-form symmetry defects look like 1-morphisms which map the duality defect \(\mathcal{D}\) to itself in the context of the higher category [33, section 2] (see Figure 5). Our "1-morphism" is, however, different from the standard 1-morphism in the higher category, because the two different 1-morphisms, namely the electric and magnetic one-form symmetry defects, appear on the left and right sides of the duality defect \(\mathcal{D}\). It may be an interesting open question to explore the mathematical structure of our "1-morphism".
## 5 Conclusion and outlook

In this paper, we explored non-invertible symmetries obtained from the half-space gauging associated with the diagonal subgroup \((\mathbb{Z}_{2N^{\prime}}^{[q]})_{\rm diag}\). In the \(c=2\) bosonic torus CFT, we showed that the diagonal gauging produces a non-invertible symmetry at the irrational CFT point, and derived the fusion algebra. In order for the algebra to close, we needed to dress the duality defect with various global symmetry generators, and the resulting fusion algebra is non-commutative. We also applied the half-space gauging to the pure U(1)\(\times\)U(1) gauge theory in four dimensions and discussed the fusion algebra in a very similar manner to the \(c=2\) bosonic torus CFT. The resulting fusion algebra in four dimensions is also non-commutative.

We conclude this paper by mentioning some future directions;

* In this paper, we mainly focused on the diagonal gauging, yet we can consider other gaugings. For instance, we can consider the gauging condition \(p^{1}n_{1}+p^{2}n_{2}=0\mod N\), which is more general than the diagonal gauging. Also, gauging both shift and winding symmetries in two dimensions (correspondingly, electric and magnetic one-form symmetries in four dimensions) typically produces an 't Hooft anomaly, hence in this case we cannot naively implement the half-space gauging. Even in that case, however, if we choose a suitable discrete subgroup, we are free from 't Hooft anomalies and can proceed with the half-space gauging\({}^{13}\) [87]. Half-space gauging via these other gaugings may result in new non-invertible symmetries which are not captured in this paper.

Footnote 13: We thank Kantaro Ohmori for pointing out this possibility.

* More generally, we can add topological terms by turning on the B-field and the theta angle in two and four dimensions, respectively. In this paper, we only considered the case where the charge lattice after gauging is of rectangular type. However, if we include such topological terms in the actions, the charge lattice is not limited to the rectangular one, because the B-field or theta angle tilts the axes of the charge lattice. It would be interesting to investigate this generalization.

Addressing these future directions would help complete the landscape of non-invertible symmetries in the \(c=2\) bosonic torus CFT and the pure U(1)\(\times\)U(1) gauge theory, and we leave them as intriguing avenues for future work.

###### Acknowledgments.

We are particularly grateful to Justin Kaidi, Kantaro Ohmori and Satoshi Yamaguchi for many enlightening comments and discussions. We are also grateful to Takamasa Ando, Yuma Furuta, Yui Hayashi, Hiroki Imai, Hayato Kanno, Kohki Kawabata, Ryutaro Matsudo, and Tatsuma Nishioka for valuable discussions. Discussions during the YITP workshop on "Strings and Fields 2023" were useful in completing this work. The work of Y. N. was supported by JST SPRING, Grant Number JPMJSP2138. The work of S. S. was supported by a JSPS fellowship for young students, Grant Number 23KJ1533.

## Appendix A Derivations of the selected fusion algebras in section 3.3

In this appendix, we provide concrete calculations, mainly focusing on the fusion algebras, which are omitted in section 3.3. Our methodology for deriving the fusion rules closely follows [52, section 6].
* \(\eta_{\vec{p}}\times\mathcal{D}=\mathcal{D}\) (3.41)

In order to derive the fusion rule \(\eta_{\vec{p}}\times\mathcal{D}\), it is sufficient to consider the left bulk and defect actions, which are given by

\[\frac{1}{4\pi}\int_{x<0}G^{*}_{IJ}\,d\phi^{I}_{\rm L}\wedge\star d\phi^{J}_{\rm L}+\frac{\rm i}{2\pi}\int_{x=0}K_{IJ}\,\phi^{I}_{\rm L}\,d\phi^{J}_{\rm R}+\frac{p^{I}}{2N^{\prime}}\int_{x=0}\,G_{IJ}\star d\phi^{J}_{\rm L}\,.\] (A.1)

By changing the path integral variable \(\phi^{J}_{\rm L}\);

\[\phi^{J}_{\rm L}=\phi^{\prime J}_{\rm L}-\frac{2\pi p^{J}}{2N^{\prime}}\,\theta(-x)\,,\] (A.2)

where \(\theta(x)\) is the step function defined by \(\theta(x)=0\) for \(x\leq 0\) and \(\theta(x)=1\) for \(x>0\), the combined action (A.1) becomes;

\[\frac{1}{4\pi}\int_{x<0}G^{*}_{IJ}\,d\phi^{\prime I}_{\rm L}\wedge\star d\phi^{\prime J}_{\rm L}+\frac{\rm i}{2\pi}\int_{x=0}\,K_{IJ}\,\phi^{\prime I}_{\rm L}\,d\phi^{J}_{\rm R}-\frac{{\rm i}p^{I}K_{IJ}}{2N^{\prime}}\int_{x=0}\,d\phi^{J}_{\rm R}\,.\] (A.3)

Note that \(p^{I}K_{IJ}=0\mod 2N^{\prime}\), therefore the last term can be dropped from the action. Then, we get the following fusion rule;

\[\eta_{\vec{p}}\times\mathcal{D}=\mathcal{D}\,.\] (A.4)

* \(\mathcal{D}\times\eta_{\vec{p}}\) (3.43)

As stressed in the main text, fusing the \((\mathbb{Z}_{2N^{\prime}}^{[0]})_{\rm diag}\) shift symmetry defect \(\eta_{\vec{p}}\) onto the duality defect \(\mathcal{D}\) from the right shows a different behavior from the case of \(\eta_{\vec{p}}\times\mathcal{D}\). To see this, we only have to consider the right bulk and defect actions;

\[\frac{1}{4\pi}\int_{0<x}G^{*}_{IJ}\,d\phi^{I}_{\rm R}\wedge\star d\phi^{J}_{\rm R}+\frac{\rm i}{2\pi}\int_{x=0}\,K_{IJ}\,\phi^{I}_{\rm L}\,d\phi^{J}_{\rm R}+\frac{p^{I}}{2N^{\prime}}\int_{x=0}G_{IJ}\star\,d\phi^{J}_{\rm R}\,.\] (A.5)

We perform the following field redefinition:

\[\phi^{J}_{\rm R}=\phi^{\prime J}_{\rm R}+\frac{2\pi p^{J}}{2N^{\prime}}\theta(x)\,.\] (A.6)

Then, the composite action (A.5) becomes;

\[\frac{1}{4\pi}\int_{0<x}G^{*}_{IJ}\,d\phi^{\prime I}_{\rm R}\wedge\star d\phi^{\prime J}_{\rm R}+\frac{\rm i}{2\pi}\int_{x=0}\,K_{IJ}\,\phi^{I}_{\rm L}\,d\phi^{\prime J}_{\rm R}-\frac{{\rm i}p^{J}K_{IJ}}{2N^{\prime}}\int_{x=0}\,d\phi^{I}_{\rm L}\,,\] (A.7)

which shows that a non-trivial \(\mathbb{Z}_{2N^{\prime}}\) winding symmetry generator appears on the left side of the duality defect \(\mathcal{D}\), due to \(p^{J}K_{IJ}\neq 0\mod 2N^{\prime}\).

The following fusion rules are related to our main result (3.56).
* \(\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{D}}_{s_{3},s_{4}}=\widehat{\mathcal{C}}_{s_{2}+s_{3},s_{1}+s_{4}}\)

By using the associativity of the symmetry generators, the fusion rule \(\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{D}}_{s_{3},s_{4}}\) can be reduced as follows;

\[\begin{split}\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{D}}_{s_{3},s_{4}}&=(\eta_{\mathsf{S}\vec{p}})^{s_{1}}\times(\mathcal{D}_{s_{2}}\times\mathcal{S})\times(\eta_{\mathsf{S}\vec{p}})^{s_{3}}\times(\mathcal{D}\times\mathcal{S})\times(\eta_{\mathsf{S}\vec{p}})^{s_{4}}\\ &=(\eta_{\mathsf{S}\vec{p}})^{s_{1}}\times\mathcal{D}_{s_{2}+s_{3}}\times(\mathcal{S}\times\mathcal{D}\times\mathcal{S})\times(\eta_{\mathsf{S}\vec{p}})^{s_{4}}\\ &=(\eta_{\mathsf{S}\vec{p}})^{s_{1}}\times\mathcal{D}_{s_{2}+s_{3}}\times\overline{\mathcal{D}}\times(\eta_{\mathsf{S}\vec{p}})^{s_{4}}\,.\end{split}\] (A.8)

In the first line, we used the definition of the dressed duality defect \(\widehat{\mathcal{D}}_{s_{1},s_{2}}\) and (3.38). We also made use of (3.38) and (3.37) in the second and final lines, respectively. As can be easily checked, the fused defect action reads;

\[\mathcal{D}_{s_{2}+s_{3}}\times\overline{\mathcal{D}}\ :\ \frac{\mathrm{i}(s_{2}+s_{3})p^{J}K_{IJ}}{2N^{\prime}}\int_{x=0}d\phi_{\mathrm{L}}^{I}+\frac{\mathrm{i}}{2\pi}\int_{x=0}\ K_{IJ}\left(\phi_{\mathrm{L}}^{I}-\phi_{\mathrm{R}}^{I}\right)d\varphi^{J}\,.\] (A.9)

The first term is nothing but the \(\mathbb{Z}_{2N^{\prime}}\) winding symmetry generator, which originates from the dressed duality defects \(\mathcal{D}_{s_{2}}\) and \(\mathcal{D}_{s_{3}}\). The second term is the projection operator associated with the \(\mathbb{Z}_{2N^{\prime}}\) shift symmetry. To see this, we decompose the charge matrix \(K\) into the Smith normal form;

\[\begin{pmatrix}N^{\prime}&-1\\ N^{\prime}&1\end{pmatrix}=\begin{pmatrix}1&-1\\ 0&1\end{pmatrix}\begin{pmatrix}2N^{\prime}&0\\ 0&1\end{pmatrix}\begin{pmatrix}1&0\\ N^{\prime}&1\end{pmatrix}.\] (A.10)

This shows that the second term can be split into the \(\mathbb{Z}_{2N^{\prime}}\) BF theory and a trivial one, so it can be written as a sum of \(\mathbb{Z}_{2N^{\prime}}\) generators (see [53, Appendix E] for the derivation).
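The decomposition (A.10) can be checked mechanically. A small numerical verification of ours follows; since both outer factors are unimodular, the \(\mathbb{Z}_{2N^{\prime}}\) structure of the BF theory can be read off the diagonal factor.

```python
import numpy as np

for Np in range(1, 6):                       # sample values of N'
    K = np.array([[Np, -1], [Np, 1]])        # charge matrix (2.8)
    U = np.array([[1, -1], [0, 1]])          # left unimodular factor
    Dm = np.array([[2 * Np, 0], [0, 1]])     # diagonal factor: Z_{2N'} x trivial
    V = np.array([[1, 0], [Np, 1]])          # right unimodular factor
    assert round(np.linalg.det(U)) in (1, -1)
    assert round(np.linalg.det(V)) in (1, -1)
    assert np.array_equal(U @ Dm @ V, K)     # eq. (A.10)
print("Smith-type decomposition of K verified for N' = 1..5")
```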
Also, as we attached the \(\mathbb{Z}_{2N^{\prime}}\) element to the duality defect, we define the dressed projection operator \(\widehat{\mathcal{C}}_{s_{1},s_{2}}\) (\(s_{1},s_{2}=0,1,\cdots,2N^{\prime}-1\)) as follows;

\[\begin{split}\widehat{\mathcal{C}}_{s_{1},s_{2}}(\Sigma)\equiv&\exp\left[-\frac{\mathrm{i}s_{1}p^{J}K_{IJ}}{2N^{\prime}}\int_{\Sigma}d\phi_{\mathrm{L}}^{I}\right]\\ &\times\int\mathcal{D}\varphi\exp\left[-\frac{\mathrm{i}}{2\pi}\int_{\Sigma}K_{IJ}\left(\phi_{\mathrm{L}}^{I}-\phi_{\mathrm{R}}^{I}-\frac{2\pi s_{2}}{2N^{\prime}}\mathsf{S}^{IK}p_{K}\right)d\varphi^{J}\right]\,.\end{split}\] (A.11)

By using this dressed projection operator, the fusion rule (A.8) can be evaluated as follows;

\[\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{D}}_{s_{3},s_{4}}=(\eta_{\mathsf{S}\vec{p}})^{s_{1}}\times\widehat{\mathcal{C}}_{s_{2}+s_{3},0}\times(\eta_{\mathsf{S}\vec{p}})^{s_{4}}=\widehat{\mathcal{C}}_{s_{2}+s_{3},s_{1}+s_{4}}\,.\] (A.12)

* \(\eta_{\vec{p}}\times\widehat{\mathcal{D}}_{s_{1},s_{2}}=\widehat{\mathcal{D}}_{s_{1},s_{2}}\) and \(\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\eta_{\vec{p}}=\widehat{\mathcal{D}}_{s_{1},s_{2}}\)

Here, we derive the fusion rules \(\eta_{\vec{p}}\times\widehat{\mathcal{D}}_{s_{1},s_{2}}\) and \(\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\eta_{\vec{p}}\) by using the associativity. The fusion \(\eta_{\vec{p}}\times\widehat{\mathcal{D}}_{s_{1},s_{2}}\) is easily evaluated as follows;

\[\begin{split}\eta_{\vec{p}}\times\widehat{\mathcal{D}}_{s_{1},s_{2}}&=\eta_{\vec{p}}\times(\eta_{\mathsf{S}\vec{p}})^{s_{1}}\times\mathcal{D}_{s_{2}}\times\mathcal{S}\\ &=(\eta_{\mathsf{S}\vec{p}})^{s_{1}}\times\eta_{\vec{p}}\times\mathcal{D}_{s_{2}}\times\mathcal{S}\\ &=(\eta_{\mathsf{S}\vec{p}})^{s_{1}}\times\mathcal{D}_{s_{2}}\times\mathcal{S}\\ &=\widehat{\mathcal{D}}_{s_{1},s_{2}}\,,\end{split}\] (A.13)

where in the third line we used the fusion rule (A.4). Likewise, we can derive the fusion rule \(\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\eta_{\vec{p}}\) as follows;

\[\begin{split}\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\eta_{\vec{p}}&=(\eta_{\mathsf{S}\vec{p}})^{s_{1}}\times\mathcal{D}\times\eta_{\vec{p}}^{s_{2}}\times\mathcal{S}\times\eta_{\vec{p}}\\ &=(\eta_{\mathsf{S}\vec{p}})^{s_{1}}\times\mathcal{D}\times\eta_{\vec{p}}^{s_{2}}\times\eta_{\mathsf{S}\vec{p}}\times\mathcal{S}\\ &=(\eta_{\mathsf{S}\vec{p}})^{s_{1}}\times\mathcal{D}\times\eta_{\mathsf{S}\vec{p}}\times\eta_{\vec{p}}^{s_{2}}\times\mathcal{S}\\ &=(\eta_{\mathsf{S}\vec{p}})^{s_{1}}\times\mathcal{D}\times\eta_{\vec{p}}^{s_{2}}\times\mathcal{S}\\ &=\widehat{\mathcal{D}}_{s_{1},s_{2}}\,.\end{split}\] (A.14)

In the fourth line, we used the following formula;

\[\mathcal{D}\times\eta_{\mathsf{S}\vec{p}}=\mathcal{D}\,,\] (A.15)

which can be derived in a similar way to (A.4) and (A.7).
* \(\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{C}}_{s_{3},s_{4}}\)

We can rewrite \(\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{C}}_{s_{3},s_{4}}\) as follows;

\[\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{C}}_{s_{3},s_{4}}=(\eta_{\mathsf{S}\vec{p}})^{s_{1}}\times\mathcal{D}\times\mathcal{S}\times(\eta_{\mathsf{S}\vec{p}})^{s_{2}}\times\widehat{\mathcal{C}}_{s_{3},s_{4}}\] (A.16)
\[=(\eta_{\mathsf{S}\vec{p}})^{s_{1}}\times\mathcal{S}\times\overline{\mathcal{D}}\times\widehat{\mathcal{C}}_{s_{3},0}\times(\eta_{\mathsf{S}\vec{p}})^{s_{2}+s_{4}}\,.\] (A.17)

In the last line, we used the following relation;

\[(\eta_{\mathsf{S}\vec{p}})^{s_{2}}\times\widehat{\mathcal{C}}_{s_{3},s_{4}}=\widehat{\mathcal{C}}_{s_{3},s_{2}+s_{4}}=\widehat{\mathcal{C}}_{s_{3},0}\times(\eta_{\mathsf{S}\vec{p}})^{s_{2}+s_{4}}\,.\] (A.18)

Therefore, in order to calculate the fusion rule \(\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{C}}_{s_{3},s_{4}}\), we first need to derive the fusion rule \(\overline{\mathcal{D}}\times\widehat{\mathcal{C}}_{s_{3},0}\). The defect action is given by

\[\overline{\mathcal{D}}\times\widehat{\mathcal{C}}_{s_{3},0}\,:\,\frac{{\rm i}s_{3}p^{J}}{2N^{\prime}}\int_{x=0}K_{IJ}d\phi^{I}_{\rm M}+\frac{{\rm i}}{2\pi}\int_{x=0}K_{IJ}\,(\phi^{I}_{\rm M}-\phi^{I}_{\rm R})d\varphi^{J}+\frac{{\rm i}}{2\pi}\int_{x=0}K_{JI}\,\phi^{I}_{\rm L}d\phi^{J}_{\rm M}\,.\] (A.19)

By changing the path integral variables as follows:

\[\phi^{\prime I}_{\rm M}=\phi^{I}_{\rm M}-\phi^{I}_{\rm R}\qquad,\qquad\varphi^{\prime I}=\varphi^{I}-\phi^{I}_{\rm L}\,,\] (A.20)

the defect action becomes;

\[\frac{\mathrm{i}}{2\pi}\int_{x=0}K_{IJ}\,\phi^{\prime I}_{\mathrm{M}}d\varphi^{\prime J}+\frac{\mathrm{i}s_{3}p^{J}}{2N^{\prime}}\int_{x=0}K_{IJ}\,d\phi^{\prime I}_{\mathrm{M}}+\frac{\mathrm{i}}{2\pi}\int_{x=0}K_{JI}\,\phi^{I}_{\mathrm{L}}d\phi^{J}_{\mathrm{R}}+\frac{\mathrm{i}s_{3}p^{J}}{2N^{\prime}}\int_{x=0}\ K_{IJ}d\phi^{I}_{\mathrm{R}}\,.\] (A.21)

The first two terms represent the decoupled TQFT \(\mathcal{Z}\), which can be written as a sum of \(\mathbb{Z}_{2N^{\prime}}\) symmetry generators; hence we have \(\mathcal{Z}=2N^{\prime}\). Also, the last two terms are the defect action of \(\eta^{s_{3}}_{\vec{p}}\times\overline{\mathcal{D}}\), since the winding symmetry generator is changed into the shift one across the duality defect. Combining the above results, the fusion rule \(\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{C}}_{s_{3},s_{4}}\) can be derived as follows;

\[\begin{split}\widehat{\mathcal{D}}_{s_{1},s_{2}}\times\widehat{\mathcal{C}}_{s_{3},s_{4}}&=2N^{\prime}(\eta_{\mathsf{S}\vec{p}})^{s_{1}}\times\mathcal{S}\times\eta^{s_{3}}_{\vec{p}}\times\overline{\mathcal{D}}\times(\eta_{\mathsf{S}\vec{p}})^{s_{2}+s_{4}}\\ &=2N^{\prime}(\eta_{\mathsf{S}\vec{p}})^{s_{1}+s_{3}}\times\mathcal{D}\times\mathcal{S}\times(\eta_{\mathsf{S}\vec{p}})^{s_{2}+s_{4}}\\ &=2N^{\prime}(\eta_{\mathsf{S}\vec{p}})^{s_{1}+s_{3}}\times\mathcal{D}\times\eta^{s_{2}+s_{4}}_{\vec{p}}\times\mathcal{S}\\ &=2N^{\prime}\,\widehat{\mathcal{D}}_{s_{1}+s_{3},s_{2}+s_{4}}\,.\end{split}\] (A.22)

We can also calculate the fusion rule \(\widehat{\mathcal{C}}_{s_{1},s_{2}}\times\widehat{\mathcal{D}}_{s_{3},s_{4}}\) in a similar manner to the above derivation.
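The factor \(2N^{\prime}\) that appears when the decoupled TQFT \(\mathcal{Z}\) is evaluated above traces back to the discrete Fourier identity behind the projection operator: summing a \(\mathbb{Z}_{2N^{\prime}}\) generator over all its powers gives \(2N^{\prime}\) on invariant charges and zero otherwise. A one-line check of ours (with the sample value \(N^{\prime}=3\)):

```python
import numpy as np

Np = 3
mod = 2 * Np
for n in range(-mod, mod + 1):  # n plays the role of a Z_{2N'} charge
    s = sum(np.exp(2j * np.pi * k * n / mod) for k in range(mod))
    expected = mod if n % mod == 0 else 0.0
    assert abs(s - expected) < 1e-9
print("sum over Z_{2N'} phases projects with weight 2N' =", mod)
```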
2309.15087
Privacy-preserving and Privacy-attacking Approaches for Speech and Audio -- A Survey
In contemporary society, voice-controlled devices, such as smartphones and home assistants, have become pervasive due to their advanced capabilities and functionality. The always-on nature of their microphones offers users the convenience of readily accessing these devices. However, recent research and events have revealed that such voice-controlled devices are prone to various forms of malicious attacks, hence making it a growing concern for both users and researchers to safeguard against such attacks. Despite the numerous studies that have investigated adversarial attacks and privacy preservation for images, a conclusive study of this nature has not been conducted for the audio domain. Therefore, this paper aims to examine existing approaches for privacy-preserving and privacy-attacking strategies for audio and speech. To achieve this goal, we classify the attack and defense scenarios into several categories and provide detailed analysis of each approach. We also interpret the dissimilarities between the various approaches, highlight their contributions, and examine their limitations. Our investigation reveals that voice-controlled devices based on neural networks are inherently susceptible to specific types of attacks. Although it is possible to enhance the robustness of such models to certain forms of attack, more sophisticated approaches are required to comprehensively safeguard user privacy.
Yuchen Liu, Apu Kapadia, Donald Williamson
2023-09-26T17:31:35Z
http://arxiv.org/abs/2309.15087v1
# Privacy-preserving and Privacy-attacking Approaches for Speech and Audio - A Survey

###### Abstract.

In contemporary society, voice-controlled devices, such as smartphones and home assistants, have become pervasive due to their advanced capabilities and functionality. The always-on nature of their microphones offers users the convenience of readily accessing these devices. However, recent research and events have revealed that such voice-controlled devices are prone to various forms of malicious attacks, hence making it a growing concern for both users and researchers to safeguard against such attacks. Despite the numerous studies that have investigated adversarial attacks and privacy preservation for images, a conclusive study of this nature has not been conducted for the audio domain. Therefore, this paper aims to examine existing approaches for privacy-preserving and privacy-attacking strategies for audio and speech. To achieve this goal, we classify the attack and defense scenarios into several categories and provide detailed analysis of each approach. We also interpret the dissimilarities between the various approaches, highlight their contributions, and examine their limitations. Our investigation reveals that voice-controlled devices based on neural networks are inherently susceptible to specific types of attacks. Although it is possible to enhance the robustness of such models to certain forms of attack, more sophisticated approaches are required to comprehensively safeguard user privacy.

privacy; audio; speech; attacks; defenses; machine learning; signal processing
This line of work reveals greater privacy risks than previously seen, and supports these claims with extensive experimentation and discussion of privacy implications.

With the rise in the number of voice-controlled device users, safeguarding audio privacy has become a major concern. It is important that users, researchers, and the general public are aware of the attack and defense mechanisms that exist today. Therefore, this paper presents a comprehensive survey of recent techniques that aim to protect or attack speech and audio privacy. Our focus is on four main attack categories: impersonation attacks, operating system attacks, ultrasonic attacks, and adversarial attacks. In addition, we classify defense mechanisms as either detect-only or complete defenses. Notably, different stages of a voice-controlled system are susceptible to different types of attacks and defenses. The subsequent parts of this paper are structured as follows: Firstly, we will provide an overview of the threat model and taxonomy, and define important terminology related to privacy-protecting and privacy-attacking strategies. Next, we will delve into each attack and defense mechanism, discussing them in detail. Lastly, we will conclude the paper with a discussion on the topic and suggest potential directions for future research.

## 2. The Threat Model

The basic flow of a voice-controlled system is illustrated in Fig. 1. Speech is first captured using a microphone and then converted into a digital signal before being provided to a deep learning model. The model performs automatic speech recognition and translates the signal into a computer-readable command, which is then executed. The attack and defense strategies can be implemented at any point in this process, and we have categorized them accordingly.

Figure 1. The flow path of a typical voice-controlled system.

We first define the threat model that describes the goals of the attacker and defender. We assume malicious attackers are interested in attacking a voice-controlled device of a target user. The attacker can target the device at any stage during the user's interaction, from when the user starts speaking to when the device executes the command. The goal of the attacker, also known as the adversary, is to confuse the original voice-controlled system and make the system execute a malicious command without the target user noticing. The defenders are aware of possible attacks, so they try to either detect the incoming attack or reinforce the system to disable the attack. The basic assumptions about attackers and defenders can be refined into several categories as follows:

* **Attacker's Knowledge**: This category specifies how much an attacker knows about the system. _White-box attacks_ assume the attacker has full knowledge of the target system, including its setup and parameters, and can therefore replicate the model setup and parameters. In contrast, _black-box attacks_ assume that the attacker does not possess any information about the system, and must develop a general attack that can affect all types of systems.
* **Attacker's Goal**: The goal of the attacker depends on the type of attack. A _targeted attack_, also known as a source-targeted attack (Kumar et al., 2017), aims to misclassify the input into a specific label or category. For instance, in the context of speaker verification, the attacker may want the system to recognize their voice as belonging to the target user.
An _untargeted attack_, on the other hand, does not require a specific label output; the adversary's aim is to misclassify the input into any incorrect class.
* **Physical/Logical Attacks**: A _logical attack_ involves the attacker injecting perturbations into the input speech in a simulated manner, such as through additive manipulation in software. This type of attack poses a limited real-world threat because the perturbation cannot be played over the air. In contrast, a _physical attack_ occurs away from the device and the perturbation is played over the air, which means that these attackers must consider room acoustics.
* **Input-specific/Universal Attacks**: An _input-specific attack_ depends on the audio and must be crafted for each audio input individually. Recently, _universal attacks_ have emerged (Zhao et al., 2018), whereby a single attack or perturbation can be applied to all inputs to the voice-controlled system. These attacks are potent because the attacker does not need any prior information about what the user is saying, and the attack occurs in real time.
* **Defender's Goal**: From the defender's perspective, defense mechanisms can be classified into detection defenses or complete defenses. A _detection defense_ entails developing a classifier that detects whether the input has been modified or not, alerting the user when a modification is found. On the other hand, a _complete defense_ not only detects the attack but also disables it by reducing its effectiveness.
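As a compact summary of this taxonomy, the sketch below records the threat-model axes in a small Python data structure; the field and enum names are our own illustrative labels, and the example entry mirrors the C&W attack row of Table 2 discussed later.

```python
# A compact encoding of the threat-model axes defined above; the names are
# illustrative labels only. The example entry mirrors the C&W attack row
# of Table 2.
from dataclasses import dataclass
from enum import Enum

class Knowledge(Enum):
    WHITE_BOX = "white-box"    # full knowledge of the target system
    BLACK_BOX = "black-box"    # no knowledge of the target system

class Goal(Enum):
    TARGETED = "targeted"      # force a specific (malicious) label
    UNTARGETED = "untargeted"  # any incorrect label suffices

class Access(Enum):
    LOGICAL = "logical"        # perturbation injected in software
    PHYSICAL = "physical"      # perturbation played over the air

class Generality(Enum):
    INPUT_SPECIFIC = "input-specific"
    UNIVERSAL = "universal"    # one perturbation works for all inputs

@dataclass
class ThreatModel:
    name: str
    knowledge: Knowledge
    goal: Goal
    access: Access
    generality: Generality

cw_attack = ThreatModel("C&W attack", Knowledge.WHITE_BOX, Goal.TARGETED,
                        Access.LOGICAL, Generality.INPUT_SPECIFIC)
print(cw_attack)
```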
In the third stage, attacks typically utilize the non-linearity characteristic of the speech signal (115; 125; 158). See section 3.3 for more information. The final stage in the system involves feeding the signal into a deep learning model. In 2013, Szegedy discovered that certain adversarial attacks (128), which had previously been effective in other domains (50; 105; 106), could also be employed to target deep learning models. Based on these findings, targeted audio adversarial attacks (23) have been developed, which are particularly potent because human listeners cannot distinguish between the real audio and the adversarial example. Goodfellow argues that these adversarial examples exist due to the excessive non-linearity present in deep learning models (50). Please refer to section 3.4 for more information on deep-learning-based attacks. ### Impersonation Attacks Impersonation attacks, also referred to as spoofing attacks (135), are the most fundamental type of attack on a voice-controlled system. In such attacks, the attacker creates a voice command that resembles the voice of the user of the smart voice assistant. Impersonation attacks can be classified into three types: synthetic speech attacks, converted speech attacks, and replayed speech attacks. #### 3.1.1. Replay attacks The replay speech attack is the most common form of attack. Attackers use recorded speech of the target user to mimic their voice. For instance, the attacker can easily download the user's voice from their social media page 7. Alternatively, the attacker can create a spam call that tricks the target user into saying a particular word or phrase that they desire. They can then use the recording of this phrase to launch an attack on the voice-controlled system 8. To execute a replay speech attack, the attacker must obtain a large amount of speech data from the target user. Footnote 8: [https://www.varnumlaw.com/newsroom-publications-recording-](https://www.varnumlaw.com/newsroom-publications-recording-) conversations-with-your-cellphone-with-great-power-comes-potential #### 3.1.2. Synthetic speech attacks Synthetic speech attacks use text-to-speech synthesis (TTS) to create simulated human voice commands that sound as if they originated from the target user (12; 44; 95; 110; 126; 129; 142). Traditional TTS techniques primarily focus on concatenative synthesis (19; 58) and parametric speech synthesis (136; 157). Once the attacker obtains enough recordings, they can extract the victim's acoustic model (90). By using the acoustic model, the attacker can reconstruct any desired commands through speech synthesis techniques9. However, the resulting audio clips often sound artificial and unnatural due to the noise and reverberation present in the recorded speech. Footnote 9: [https://www.gro-tools-expert.com/home-page/2016/11/16/adobe-voco-should-we-affald](https://www.gro-tools-expert.com/home-page/2016/11/16/adobe-voco-should-we-affald) Modern TTS methods use conventional source-filter vocoders (70; 100) or a WaveNet-based vocoder (104) to produce more natural-sounding speech. A vocoder is an electronic device or software that is used to analyze and synthesize speech or other sounds. It works by breaking down the incoming sound signal into its spectral components and then re-synthesizing it using a carrier signal to produce an output that sounds like the original sound but with different characteristics. 
In Arik _et al._ (12), a real-time text-to-speech system called 'Deep Voice' is proposed that uses a WaveNet-based vocoder. The detailed model structure is shown in Fig. 2. In this system, the text data is first converted into phonetic information, which is then fed into a segmentation model that identifies where each phoneme begins and ends. The duration model predicts the duration of each phoneme, and the fundamental frequency module predicts whether a phoneme is voiced or not. The audio synthesis model combines the outputs of each module with the vocoder to generate a synthesized audio signal. This audio can be used for a synthetic speech attack.

Figure 2. Model Structure of Deep Voice from Baidu Inc. (Kumar et al., 2017)

More recent approaches focus on end-to-end TTS conversion, as demonstrated in Ping _et al._ (110). End-to-end TTS uses an encoder to convert text into an internal latent representation, and a decoder decodes this representation into an audio spectrogram. A vocoder then transforms the predicted features into a speech signal. Similar architectures are also found in Tacotron (142) and Tacotron 2 (124). End-to-end TTS models differ from traditional approaches, in which each module is trained separately and each component requires prior knowledge about the text to make speech synthesis possible.

Table 1. A comprehensive list of privacy-attacking and privacy-preserving papers, organized by category and type.

The synthetic speech attack is considered to be a black-box attack: the attacker does not need any information or knowledge from the user. However, this type of attack may be disabled if the voice-controlled system has a speaker verification (SV) system that can detect whether the speech is from the original user or not.

#### 3.1.3. Converted speech attack

Converted speech attacks are similar to synthetic speech attacks in that they aim to generate speech that mimics the target user's voice. However, converted speech attacks differ in that they use voice conversion (VC) techniques to make the resulting speech sound like that of the target user (Kumar et al., 2017). Many techniques for performing VC have been proposed in recent years. In multiple studies (Zhao et al., 2018; Li et al., 2019; Li et al., 2019), the authors use Gaussian-mixture models (GMMs) within the VC system to capture the statistical properties of the acoustic features for both source and target speakers. More recent approaches use neural networks (Zhao et al., 2018; Li et al., 2019). An overview of a VC system can be found in Mohammadi _et al._ (Mohammadi et al., 2019). VC systems typically use an encoder-decoder architecture, similar to that used in text-to-speech systems. For example, Jia et al. (Jia et al., 2019) proposed a VC system that generates speech for a specific speaker using only five seconds of their speech. Fig. 3 shows the model structure of the Google VC system. The system uses a separately trained speaker encoder to extract a speaker encoding from a reference utterance of the speaker; the encoder of the synthesizer then extracts speaker-independent information from the original speech and concatenates it with the speaker encoding to form a speaker-dependent audio representation.
The audio representation is then fed into the decoder to generate a log-mel spectrogram feature, which is transformed into an audio utterance by a vocoder; the resulting audio sounds like the target user and contains the malicious command. Converted speech attacks are considered the strongest type of impersonation attack. Additionally, some VC systems can even simulate the accent of the speaker, making such attacks more difficult to detect and defend against (Zhao et al., 2018; Li et al., 2019).

Figure 3. Model Structure of Google Voice Conversion system (Jia et al., 2019)

### Operating System Attacks

Operating system (OS) attacks, as the name implies, exploit vulnerabilities within the OS to execute attacks. These attacks are self-triggered and more difficult to detect compared to impersonation attacks. In this section, we discuss two notable OS attacks: _A11y attacks_ (Zhao et al., 2018) and _Google Voice Search (GVS) attacks_ (Zhao et al., 2018).

\begin{table}
\begin{tabular}{l|l|l|l|l|l}
\hline
Attack Name & Attack Type & Attacker Knowledge & Attack Goal & Physical/Logical Attack & Attack Generality \\
\hline
Replayed speech (Kumar et al., 2017) & Impersonation Attack & Black Box & Targeted & Physical/Logical & Specific \\
\hline
Synthetic speech (Kumar et al., 2017) & Impersonation Attack & Black Box & Targeted & Physical/Logical & Specific \\
\hline
Converted speech (Kumar et al., 2017) & Impersonation Attack & Black Box & Targeted & Physical/Logical & Specific \\
\hline
A11y attack (Zhao et al., 2018) & Operating System Attack & White Box & Targeted & Physical/Logical & Specific \\
\hline
GVS attack (Zhao et al., 2018) & Operating System Attack & White Box & Targeted & Physical & Specific \\
\hline
Dolphin attack (Domin et al., 2018) & Ultrasonic Attack & White Box & Targeted & Physical & Specific \\
\hline
Inaudible attack (Kumar et al., 2017) & Ultrasonic Attack & White Box & Targeted & Physical & Specific \\
\hline
C\&W attack (Kumar et al., 2017) & Adversarial Example & White Box & Targeted & Logical & Specific \\
\hline
CommanderSong attack (Domin et al., 2018) & Adversarial Example & White Box & Targeted & Physical/Logical & Specific \\
\hline
Imperceptible attack (Kumar et al., 2017) & Adversarial Example & White Box & Targeted & Physical/Logical & Specific \\
\hline
Robust physical attack (Kumar et al., 2017) & Adversarial Example & White Box & Targeted & Physical & Specific \\
\hline
Black box adversarial attack (Kumar et al., 2017) & Adversarial Example & Black Box & Targeted & Logical & Specific \\
\hline
Universal audio attack (Kumar et al., 2017) & Adversarial Example & White Box & Untargeted & Logical & Universal \\
\hline
\end{tabular}
\end{table}
Table 2. Threat model of representative privacy-attacking techniques.

#### 3.2.1. A11y attacks

In 1998, the Rehabilitation Act of 1973 was amended by the United States Congress with the objective of making it easier for individuals with disabilities to use electronic devices and information technology. As a result, recent operating system developers have integrated various accessibility features, such as voice commands, speech recognizers, and on-screen keyboards, among others, into their OS. However, these accessibility technologies have also brought security concerns. In Jang _et al._ (Jang et al., 2017), the authors introduced malware that can be used to exploit commonly-used operating systems.
In total, Jang presented 12 different OS attack approaches that leverage different accessibility tools. These include Windows attacks: (1) privilege escalation through Speech Recognition; (2) privilege escalation with Explorer.exe; (3) stealing passwords using Password Eye and a screenshot; (4) stealing sudoer passwords from authentication dialogs. Ubuntu Linux attacks: (5) bypassing the security boundaries of Ubuntu. iOS attacks: (6) bypassing the passcode lock using Siri; (7) bypassing the iOS sandbox; (8) privilege escalation with remote view; (9) bypassing password protection on iOS. Android attacks: (10) bypassing Touchless Control's voice authentication; (11) bypassing Android sandboxing; and (12) a keylogger on Android. In this section, we will discuss three speech- and audio-related approaches that are used in voice-based attacks.

Footnote 11: [https://www.google.com/sites/accessibility.html](https://www.google.com/sites/accessibility.html)

In the case of the Windows OS, attackers can take advantage of the speech recognition accessibility feature to obtain higher privileges. The speech recognition system in Windows always runs with administrative privileges at a high integrity level (High IL). The attack scenario is illustrated in Fig. 4. First, the attacker uses CreateProcess() with the argument sapisvr.exe -SpeechUX to launch the speech recognition accessibility tool. Then, the malware is employed to open the msconfig.exe application via CreateProcess(); by default, this application runs at High IL. Next, the attacker can issue a voice command with the transcript "Tools, Page down, Command prompt, Launch" to open the command shell in Windows. Finally, the opened command shell inherits the High IL, providing command-line access with administrative privileges, which allows the attacker to execute any desired command.

Accessibility features in mobile operating systems, such as iOS and Android, can also be exploited for attacks. In the case of iOS, an attacker can use Siri to bypass the password lock screen and access user-sensitive data or perform security-related commands, even when the screen is locked 14. Since iOS allows Siri to retrieve user-sensitive data and issue security-related commands even when the screen is locked, this can be done without any knowledge of the user's password. Similarly, for Android devices, an attacker can use malware running as a background service to record the user's voice constantly. When the phrase "OK Google Now" is detected, the attacker can perform a replay speech attack using the device's microphone. Following this, the attacker can use a synthetic speech attack and TTS speech commands to deceive the Google Now based voice-controlled system.

Footnote 14: [https://www.businessinsider.com/password-security-flaw-in-ios-7-lets-siri-control-your-iphone-2013-9](https://www.businessinsider.com/password-security-flaw-in-ios-7-lets-siri-control-your-iphone-2013-9)

#### 3.2.2. Google Voice Search (GVS) attack

In Diao _et al._ (Diao et al., 2017), the authors generated malware using a zero-permission Android application called VoicEmployer to attack the Android built-in voice assistant tool, Google Voice Search. Based on Diao's research, Google Voice Search has two different modes: voice dialer mode and velvet mode. The authors show that a third-party app using the Bluetooth module can pass an ACTION_VOICE_COMMAND based intent to the Android operating system and trigger the voice dialer mode of Google Voice Search even when the device is locked.
Fig. 5 shows the inter-application communication of the GVS attack. The malware VoicEmployer first uses the speaker in the device to produce the attack audio that activates Google Voice Search in voice dialer mode. VoicEmployer then continuously analyzes the environment and launches the attack when the user is not nearby or is sleeping. The malware then plays a low-volume impersonation attack at less than 55 dB so that the user cannot notice the attack (Kumar et al., 2017).

Figure 4. Windows A11y attack using the speech recognition commander accessibility tool (Jang et al., 2017)

Figure 5. GVS attack using VoicEmployer malware (Diao et al., 2017)

### Ultrasonic Attack

The upper frequency bound of human hearing is 20 kHz, and human speech occurs at frequencies much lower than this. As a result, most voice-controlled systems use low-pass filters to remove signal components above 20 kHz (Kumar et al., 2017), since the spoken command will be contained in the lower frequencies. Unfortunately, attackers have developed sophisticated workarounds that enable them to generate commands in frequency ranges above 20 kHz. In particular, ultrasonic sounds have frequencies higher than 20 kHz, so they are inaudible to humans. In ultrasonic attacks, the attackers leverage the non-linearity of the microphone and speaker in the voice-controlled system and use it to hide inaudible commands in the ultrasonic frequency range, so that the target device still receives the commands. These attacks can be particularly damaging as they are difficult to detect. Next, we introduce two representative ultrasonic attacks: _dolphin attacks_ (Kumar et al., 2017) and _inaudible voice command attacks_ (Kumar et al., 2017).

#### 3.3.1. Dolphin attacks

In Zhang _et al._ (Zhang et al., 2017), the authors exploit the non-linearity of Micro Electro Mechanical System (MEMS) microphones and Electret Condenser Microphones (ECMs) to generate inaudible ultrasonic signals that can carry malicious commands. The experimental device setup is shown in Fig. 6, and the detailed model architecture for a Dolphin attack is shown in Fig. 7. The attack starts with voice command generation, which consists of two parts: activation command generation and general control command generation. Activation command generation refers to the process of generating a specific phrase or word that triggers the voice-controlled device to start listening for a user's command. General control commands refer to the voice commands given by the user to control the device's various functions. Since the activation command needs to pass a speaker verification process and the general control command does not, the attacker generates the activation command using a concatenative synthesis technique that builds the wake words by concatenating phonemes from other recordings of the target user, similar to the impersonation attack mentioned earlier. For example, the wake-up words "Hey Siri" can be generated from "he is a boy", "eat a cake", "in the city", and "read after me". The general control command can simply be generated by any state-of-the-art TTS system. After the voice command is successfully generated, the authors use amplitude modulation (AM) to generate the ultrasonic signal. Amplitude modulation is a technique used to transmit information through a carrier wave by varying the amplitude of the carrier wave in accordance with the information to be transmitted.
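The sketch below illustrates this step numerically, under toy assumptions of ours (a 400 Hz sinusoid as the "command", a 30 kHz carrier, and a square-law term as a crude model of the microphone's non-linearity): the AM signal itself sits entirely in the ultrasonic band, yet the non-linearity followed by low-pass filtering recovers a baseband component.

```python
# Sketch of the AM step behind an ultrasonic attack: modulate a baseband
# "command" onto an ultrasonic carrier, then show that a square-law term
# (a toy model of microphone non-linearity) plus low-pass filtering
# re-creates a baseband component correlated with the hidden command.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 192_000                        # sample rate high enough for ultrasound
t = np.arange(0, 0.05, 1 / fs)
x = np.sin(2 * np.pi * 400 * t)     # baseband stand-in for a voice command
m, f_c = 0.8, 30_000                # example modulation depth and carrier (Hz)

s = (1 + m * x) * np.cos(2 * np.pi * f_c * t)      # amplitude-modulated signal

y = s ** 2                                         # square-law non-linearity
b_lp, a_lp = butter(4, 8_000, btype="low", fs=fs)  # front-end low-pass filter
recovered = filtfilt(b_lp, a_lp, y)                # contains a term ~ x

corr = np.corrcoef(x, recovered)[0, 1]
print(f"correlation between command and demodulated signal: {corr:.3f}")
```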
An ultrasonic carrier is chosen based on the modulation depth \(m\) and carrier frequency \(f_{\text{c}}\). These two parameters are hardware dependent, and they determine the amplitude and frequency of the final ultrasonic signal. After the ultrasonic signal has been generated, a powerful transmitter transmits the signal to the target voice-controlled device. The results show that the longest attack distance with an inaudible signal is about 175 cm.

Figure 6. Device setup for Dolphin attack (Zhao et al., 2017)

Figure 7. Model Architecture for Dolphin attack (Zhao et al., 2017)

#### 3.3.2. Inaudible Voice Commands attack

In the dolphin attack, the attack range is limited to approximately 175 cm. If we want to increase the attack range by using a more powerful ultrasonic transmitter, audio leakage may occur that makes the signal audible to the target user, i.e., part of the ultrasonic signal is leaked into the human-audible frequency range. In Roy _et al._ (Roy et al., 2017), the authors propose an approach that can mount a long-range ultrasonic attack without ultrasonic leakage. This kind of attack is more dangerous than the previous one (Zhao et al., 2017), since the attack can occur from outside of a person's home. To solve this problem, Roy _et al._ found that not only the microphone but also the loudspeaker has a non-linearity characteristic. The authors therefore utilize the loudspeaker's non-linearity and formulate the question as an optimization problem: hide the audible audio leakage spectrum, \(L(f)\), below the human hearing threshold, \(T(f)\), as Fig. 8 shows, so that a human listener will not hear the signal. The approach uses an ultrasonic loudspeaker array with 61 loudspeakers, as shown in Fig. 9. Each loudspeaker helps to segment the input signal into a small ultrasonic signal piece. The results show that the attack can be successfully mounted when the loudspeaker is 12 ft away from the device. The authors also propose several detection-based defense mechanisms, and these techniques are introduced in section 4.1.

Figure 8. Main idea of Roy _et al._ (Roy et al., 2018) to minimize the gap between the human hearing threshold and speaker leakage.

Figure 9. (a) Device setup in Roy _et al._ (Roy et al., 2018). (b) The ultrasonic speaker array for the attack.

Ultrasonic attacks are strong, but they also have drawbacks. These attacks require specific ultrasonic transducers to produce the ultrasonic signal. Furthermore, for long-range attacks, a large and powerful ultrasonic speaker is needed to attack from outside of the target user's home. Hence, the attacker generally needs to be close to the device in order for this attack to occur.

### Adversarial Attacks

Modern voice-controlled systems are often equipped with a state-of-the-art automatic speech recognition (ASR) system, such as Deep Speech (Zeng et al., 2017), Lingvo (Li et al., 2018), or Kaldi (Kaldi, 2018), to name a few. These deep learning based approaches achieve great performance on recognition tasks, with word error rates (WER) of about 5%. However, it has been shown that DNN models have vulnerabilities, as discussed in Szegedy _et al._ (Szegedy et al., 2015). Fig. 10 shows the basic idea of an adversarial attack in both the image and audio domains. This type of attack aims to change the original label to a different label by adding a certain perturbation to the input. For images, this may result in the classifier incorrectly identifying the object in the image. For speech recognition, the resulting audio signal may sound (and look) like the original input, but the ASR system may transcribe the audio incorrectly, so that it is transcribed as a malicious command of the attacker.

Figure 10. Adversarial example in image and audio domain, Gong _et al._ (Gong et al., 2018).
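To make the idea of Fig. 10 concrete, here is a minimal gradient-sign (FGSM-style) perturbation of a toy logistic model; it is the generic recipe only, with random weights of our choosing, and not any of the audio-domain attacks surveyed below.

```python
# Minimal illustration of the adversarial-perturbation idea sketched in
# Fig. 10: nudge the input along the sign of the loss gradient so that a toy
# logistic "classifier" flips its decision.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=64), 0.1              # random toy model
x = rng.normal(size=64)                      # toy input

def prob(v):                                 # P(label = 1 | v)
    return 1.0 / (1.0 + np.exp(-(w @ v + b)))

y = float(prob(x) > 0.5)                     # attack the current decision
grad = (prob(x) - y) * w                     # analytic d(logistic loss)/dx
eps = 0.25
x_adv = x + eps * np.sign(grad)              # small, bounded perturbation

print("clean decision:      ", prob(x) > 0.5)
print("adversarial decision:", prob(x_adv) > 0.5)
```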
Cocaine Noodles (Luo et al., 2018) and Hidden Voice Command (Luo et al., 2018) are the first two approaches that discuss the vulnerabilities associated with automatic speech recognition. They found that ASR systems are highly reliant on acoustic features, such as Mel-frequency cepstral coefficients (MFCCs). Exploiting this dependence, they successfully generated adversarial examples that contain enough of these acoustic features that the ASR system accepts them. However, these attack approaches do have limitations. The output speech, after the adversarial noise is added, is not understandable, so the user may notice the attack and take effective defense measures. Recent adversarial approaches, however, successfully produce utterances that are still fully understandable by humans but that are misclassified by state-of-the-art ASR systems. In the following subsections, we introduce different adversarial example approaches based on the threat model of the attackers.

#### 3.4.1. C&W attack

In 2018, Carlini and Wagner (Carlini and Wagner, 2018) first successfully mounted an end-to-end targeted adversarial attack on the Deepspeech ASR system (Carlini and Wagner, 2018). They found that for any input \(x\), it is possible to find a small \(\delta\) to generate \(x^{\prime}=x+\delta\) such that \(x\) and \(x^{\prime}\) sound nearly identical. However, when \(x^{\prime}\) is provided as the input to the ASR system, i.e., \(f\left(x^{\prime}\right)\), the ASR outputs \(y\), which is a malicious command. In order to make \(x\) and \(x^{\prime}\) sound similar, Carlini and Wagner use the decibel (dB) as a distortion metric. The relative loudness of an audio sample is calculated as: \[dB(x)=\max_{i}20\cdot\log_{10}\left(x_{i}\right) \tag{1}\] The distortion level between the original waveform, \(x\), and the added perturbation, \(\delta\), can be calculated as: \[dB_{x}(\delta)=dB(\delta)-dB(x) \tag{2}\] The problem can now be formulated as the following optimization problem: \[\begin{array}{l}\text{minimize}\ dB_{x}(\delta)\\ \text{such that}\ f(x+\delta)=t\\ x+\delta\in[-M,M]\end{array} \tag{3}\] Here, \(M\) is the maximum representable value for the adversarial example, which can be enforced by clipping, and \(t\) is the malicious command transcript, which is the target label the attacker wants to achieve. Due to the non-linearity of the constraint \(f(x+\delta)=t\), the optimization problem requires an additional loss term: \[\text{minimize}\ dB_{x}(\delta)+c\cdot\ell(x+\delta,t) \tag{4}\] Here \(\ell(\cdot)\) represents the additional loss term. Smaller values of \(\ell(x+\delta,t)\) indicate that the predicted transcript is closer to the target transcript. The constant \(c\) controls the trade-off between the distortion level and the adversarial performance. In this paper, the authors use the connectionist temporal classification (CTC) loss (Carlini and Wagner, 2018). This is a commonly used loss function in speech recognition tasks. A lower CTC loss indicates that the output transcription is closer to the ground truth label.
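The distortion metric of Eqs. (1)-(2) is simple to compute directly; the sketch below does so and assembles the penalized objective of Eq. (4), with a placeholder standing in for the CTC term, since evaluating the real term requires a full ASR model.

```python
# Direct numpy implementation of the distortion metric in Eqs. (1)-(2),
# combined into the objective of Eq. (4). The CTC term is a placeholder:
# a real attack would query a full ASR model (e.g., Deepspeech) here.
import numpy as np

def dB(x):
    """Eq. (1): max over samples of 20*log10(|x_i|) (epsilon guards log(0))."""
    return np.max(20.0 * np.log10(np.abs(x) + 1e-12))

def dB_x(x, delta):
    """Eq. (2): loudness of the perturbation relative to the original audio."""
    return dB(delta) - dB(x)

def placeholder_ctc_loss(x_adv, target):
    """Stand-in for CTCLoss(f(x + delta), t); arbitrary smooth surrogate."""
    return float(np.mean(x_adv ** 2))

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=16000)        # one second of "audio"
delta = 0.001 * rng.standard_normal(16000)    # small candidate perturbation
c = 10.0                                      # trade-off weight from Eq. (4)

objective = dB_x(x, delta) + c * placeholder_ctc_loss(x + delta, "open the door")
print(f"dB_x(delta) = {dB_x(x, delta):.1f} dB, objective = {objective:.3f}")
```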
The final difficulty is that the system may fail to converge when the inserted perturbation is too small. Therefore, the authors set a constant \(\tau\) and force the system to converge when \(dB_{x}(\delta)\leq\tau\). If the system successfully converges, \(\tau\) is reduced iteratively until no solution can be found. Therefore, the final optimization problem can be described as: \[\begin{array}{l}\text{minimize }|\delta|_{2}^{2}+c\cdot\ell(x+\delta,t)\\ \text{such that }dB_{x}(\delta)\leq\tau\end{array} \tag{5}\] By using the above method, the authors reach a 100% attack success rate with adversarial examples that are 99.9% similar to the original audio clips. Since this is a white-box, non-universal, logical-access threat model, C&W attacks pose a limited threat to voice-controlled systems.

#### 3.4.2. Over-the-air attacks

In this section, we introduce some representative over-the-air audio adversarial attacks. These attacks are more dangerous than logical attacks, because they can attack the target device physically and from a long distance. The representative publications are [112; 148; 154].

In Yuan _et al._ [154], the authors propose a white-box attack that generates adversarial music containing a malicious command to attack a Kaldi ASR system [111]. The attack can be either logical (WAV-To-API, WTA), which is simulated digitally, or physical (WAV-AIR-API, WAA), which can be deployed through the air. A Kaldi ASR system contains multiple components, such as an acoustic model and a language model. The acoustic model can be trained with a DNN, and it represents the probability between input features and phonemes. The language model then represents the probability distribution over sequences of words. For WTA attacks, the authors use a probability density function index (pdf-id) sequence matching method to hide the command audio inside the song audio. This method involves creating a targeted command by replacing certain phonetic units in the original command with other units that have similar acoustic features but a different meaning. In order to make the attack work over the air and accomplish the WAA attack, the authors added a noise model, \(x^{\prime}(t)=x(t)+\mu(t)\), to the pdf-id sequence matching model to simulate the background noise and the electronic noise of the speakers, where \(x(t)\) is the result from a WTA attack and \(\mu(t)\) is the random noise model. The authors report a 100% success rate on WTA attacks and achieve a 96% success rate when using a JBL speaker with a 1.5 m distance between the speaker and the microphone.

In Qin _et al._ [112], the authors improved the C&W attack so that the attack can be played over the air. The authors propose an attack scenario against the Lingvo ASR system [123]. They make the perturbation further imperceptible by using frequency masking, whereby a softer sound becomes inaudible when it is obscured by a louder sound. The power spectral density, \(\rho_{\delta}\), of the perturbation is calculated in each iteration to make sure it falls below the masking threshold, \(\theta_{x}\), of the original utterance. The loss function is formulated as: \[\ell(x,\delta,t)=\ell_{net}(f(x+\delta),t)+\alpha\cdot\ell_{\theta}(x,\delta) \tag{6}\] The first part of the equation is from the C&W attack and drives the audio to produce the target label, where \(x\) is the original speech, \(\delta\) is the added perturbation, and \(t\) is the target transcript. The second part of the equation, controlled by the weight \(\alpha\), aims to make the perturbation imperceptible.
The optimization problem can then be formulated as: \[\min_{\delta}\ell(x+\delta,t)+\alpha\sum_{k=0}^{\left\lfloor\frac{N}{2}\right\rfloor}\max\left\{p_{\delta}(k)-\theta_{x}(k),0\right\} \tag{7}\] Here \(N\) is the STFT window size, \(p_{\delta}(k)\) is the power spectral density (PSD) of the perturbation, and \(\theta_{x}(k)\) is the frequency masking threshold of the original audio, where \(k\) represents the \(k\)th bin of the spectrum of frame \(x\). Besides the above contribution, the authors also make the attack physical. They first simulate the room impulse response, \(r\), and convolve the speech with it to produce the reverberant signal, \(C(x)=x*r\), \(t\sim\text{T}\). The loss function then becomes: \[\ell(x,\delta,t)=\text{E}_{t\sim\text{T}}\left[\ell_{\text{net}}\left(f(C(x+\delta)),t\right)\right]+\alpha\cdot\ell_{\theta}(x,\delta) \tag{8}\] where the first part of the equation is the robustness loss and the second part is for imperceptibility, as before. In terms of results, the authors first use three experiments with Amazon Mechanical Turk users to evaluate the effectiveness of the adversarial examples, finding that users had difficulty distinguishing between clean and adversarial examples. These adversarial examples are sent to the Lingvo ASR system [123] and reach 49.65% over-the-air accuracy and a 22.98% word error rate (WER) while keeping the perturbation imperceptible.

In Yakura _et al._ [148], the authors successfully mounted an over-the-air attack on the Deepspeech ASR system. In order to improve the robustness of the audio adversarial example and make the over-the-air attack possible, the authors introduce three techniques that simulate the transformations caused by playback and recording into the generation process. The three components are band-pass filtering, room impulse response, and white Gaussian noise. The authors start from the original loss function of the C&W attack to generate the adversarial audio as follows: \[\underset{\delta}{\operatorname{argmin}}\ \ell(MFCC(x+\delta),t)+\epsilon\|\delta\| \tag{9}\] Here \(x\) and \(\delta\) represent the original speech signal and the added perturbation, and \(MFCC(\cdot)\) indicates the MFCC feature extraction from the mixed signal \(x+\delta\). After the logical audio adversarial example is successfully generated, the authors add robustness to the objective. First, they use a band-pass filter to limit the frequency range of the perturbation. As introduced before, modern microphones are often made to automatically cut off the inaudible range of the signal. Therefore, the authors limit the frequency band to 1 kHz to 4 kHz. The loss function is updated as follows: \[\begin{array}{l}\underset{\delta}{\operatorname{argmin}}\ \ell(MFCC(\tilde{x}),t)+\epsilon\|\delta\|\\ \text{where }\tilde{x}=x+\underset{1\text{k}\sim 4\text{k Hz}}{BPF}(\delta)\end{array} \tag{10}\]
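The band-pass projection \(BPF_{1\text{k}\sim 4\text{k Hz}}(\delta)\) in Eq. (10) can be reproduced with a standard filter; the sketch below builds it with scipy, under arbitrary example choices of sample rate and filter order.

```python
# Reproducing the band-pass constraint of Eq. (10): restrict a candidate
# perturbation to the 1-4 kHz band before mixing it with the carrier audio.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 16000
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=fs)           # one second of "audio"
delta = 0.01 * rng.standard_normal(fs)        # unconstrained perturbation

b, a = butter(4, [1000, 4000], btype="bandpass", fs=fs)
delta_bpf = filtfilt(b, a, delta)             # BPF_{1k~4k Hz}(delta)
x_tilde = x + delta_bpf                       # the mixed signal of Eq. (10)

in_band = float(np.sum(delta_bpf ** 2) / np.sum(delta ** 2))
print(f"fraction of perturbation energy kept in band: {in_band:.3f}")
```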
An impulse response represents the reaction obtained when an audio system is presented with a brief input signal, called an impulse. In the second step, the authors aim to make the generated adversarial examples robust against reverberation by incorporating impulse responses from various environments into the generation process (Kang et al., 2018). Similar to (Kang et al., 2018), the authors compute the expectation value over impulse responses recorded in diverse environments. Therefore, the loss function is further updated as: \[\begin{array}{l}\underset{\mathbf{\delta}}{\operatorname{argmin}}\;\mathbb{E}_{h\sim\mathcal{H}}[\ell(MFCC(\tilde{\mathbf{x}}),t)+\epsilon\|\mathbf{\delta}\|]\\ \text{where}\;\tilde{\mathbf{x}}=\operatorname{Conv}_{h}(\mathbf{x}+\underset{1\text{k}\sim 4\text{k Hz}}{BPF}(\mathbf{\delta}))\end{array} \tag{11}\] where \(\mathcal{H}\) indicates the set of collected impulse responses. The convolution step using the impulse response \(h\) is denoted \(\operatorname{Conv}_{h}\). The last technique the authors use is to add random white Gaussian noise to the generation process. Adding white Gaussian noise to the training process has been shown to help the adversarial example become robust to background noise (Zhu et al., 2017). Therefore, the final loss function can be described as: \[\begin{array}{l}\underset{\mathbf{\delta}}{\operatorname{argmin}}\;\mathbb{E}_{h\sim\mathcal{H},\mathbf{w}\sim\mathcal{N}(0,\sigma^{2})}\left[\ell(\text{MFCC}(\tilde{\mathbf{x}}),t)+\epsilon\|\mathbf{\delta}\|\right]\\ \text{where}\;\tilde{\mathbf{x}}=\operatorname{Conv}_{h}(\mathbf{x}+\underset{1\text{k}\sim 4\text{k Hz}}{BPF}(\mathbf{\delta}))+\mathbf{w}\end{array} \tag{12}\] where \(\mathbf{w}\) is \(\mathcal{N}\left(0,\sigma^{2}\right)\) white Gaussian noise. The results show that the attacker reaches a 100% attack success rate against a JBL CLIP2 speaker and a Sony ECM-PCV80 microphone from 0.5 meters away.

#### 3.4.3. Black box attack

Until now, all the adversarial example generation techniques we have introduced are white-box attacks, which require the attacker to have complete knowledge of the model architecture and parameters so that they can compute the gradients of the model and apply the attacks. Recent studies show that black-box attacks are also possible in the speech domain (Kang et al., 2018; Kang et al., 2018; Kang et al., 2018). With black-box attacks, the attacker does not need information about the speech recognition model, so they can apply the attacks to proprietary systems, such as the Google and Amazon APIs.

In Taori _et al._ (Taori et al., 2018), the authors introduce a black-box adversarial example method using the CTC loss and Deep Speech. They did not use any gradient information from the model, so this can be treated as a black-box attack. The attack scenario contains two stages. In the first stage, the attacker uses a genetic algorithm approach to generate the adversarial audio example, as Fig. 11 shows. The genetic algorithm approach for adversarial attacks on speech-to-text systems involves iteratively perturbing benign audio samples by applying evolutionary methods such as crossover and mutation, using a scoring function based on the CTC loss to determine the best samples and refine the population over time, until the desired target is reached or the maximum number of iterations is completed. Given an original input \(x\) and a target phrase \(t\), the algorithm first duplicates the input up to the selected population size; the authors chose a population size of 100. Then, on each iteration, the top 10 elite samples with the lowest CTC loss are chosen using a sorting function. These elites then undergo crossover and momentum mutation to generate better adversarial examples. The authors also added a high-pass filter to add noise to the system. Once the output of the adversarial sample is close to the target, the attack shifts into the second stage. A schematic sketch of this stage-one loop is given below.
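In the following skeleton, the fitness function is a placeholder of ours, since the real attack scores candidates by querying the ASR model with the CTC loss; the population and elite sizes follow the paper, while the iteration count is reduced for illustration.

```python
# Schematic of the stage-one genetic loop described above. The fitness
# function is a placeholder: a real attack would query the ASR model and
# score candidates by negative CTC loss against the target transcript.
import numpy as np

rng = np.random.default_rng(0)
POP, ELITE, ITERS = 100, 10, 50        # population/elite sizes as in the paper

def fitness(candidate, target):
    # Placeholder for -CTCLoss(ASR(candidate), target_transcript).
    return -float(np.mean((candidate - target) ** 2))

def mutate(candidate, rate=0.005):
    return np.clip(candidate + rate * rng.standard_normal(candidate.shape), -1, 1)

def crossover(a, b):
    mask = rng.random(a.shape) < 0.5   # uniform crossover of two elites
    return np.where(mask, a, b)

x = rng.uniform(-1, 1, size=4000)      # benign audio (toy length)
target = rng.uniform(-1, 1, size=4000) # stand-in for the attack target
population = [x.copy() for _ in range(POP)]

for _ in range(ITERS):
    population.sort(key=lambda c: fitness(c, target), reverse=True)
    elites = population[:ELITE]
    children = [mutate(crossover(elites[rng.integers(ELITE)],
                                 elites[rng.integers(ELITE)]))
                for _ in range(POP - ELITE)]
    population = elites + children

population.sort(key=lambda c: fitness(c, target), reverse=True)
print("best fitness after stage one:", fitness(population[0], target))
```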
In the second stage, the author applies a gradient estimation method at 100 random indices of the audio to further fine-tune the adversarial example. The results are promising but not perfect: a 35% attack accuracy and an 89.25% similarity score are reported for attacking the DeepSpeech ASR system. Therefore, even though a black box attack is a much more realistic threat than a white box attack, further effort is needed on this topic to reach higher accuracy.

#### 3.4.4. Universal perturbation attack

In a universal perturbation attack, the attacker generates a single perturbation that can be added to different input audio and cause misclassification by the ASR system. Universal perturbations have been proven effective for image-domain adversarial examples (Zhu et al., 2017). Recent studies show that universal perturbations also exist in the audio domain. One thing worth noting is that all the universal perturbation attacks are untargeted, which means they cannot control the resulting transcript; in most cases, the resulting transcription is not a meaningful sentence. Therefore, this type of attack is a limited threat. In Neekhara _et al._ (Neekhara et al., 2018), the author applies an untargeted universal perturbation attack to DeepSpeech. The goal of the attack is to find a quasi-imperceptible universal perturbation \(\delta\) that causes mis-transcription of most data points sampled from a certain distribution. To accomplish this task, the author borrows an idea from the image domain (Zhu et al., 2017): iterate over the data points of the original signals \(x\) and gradually build the perturbation vector \(\delta\). At each iteration, the author finds the minimum additional perturbation \(\Delta\delta_{i}\) that causes the maximum character error rate (CER) and adds it to the universal perturbation \(\delta\). To keep the perturbation quasi-imperceptible, the author checks \(\|\delta\|_{\infty}<\epsilon\) after each iteration, where \(\epsilon\) is the maximum allowed \(\ell_{\infty}\) norm of the perturbation. The results report an 88.24% success rate and a 1.07 mean CER when the maximum allowed \(\|\delta\|_{\infty}\) equals 400, where the mean \(dB_{x}(\delta)=-30.18\). The success rate and mean CER drop to 72.42% and 0.82 when the maximum allowed \(\|\delta\|_{\infty}\) equals 100, where the mean \(dB_{x}(\delta)=-42.03\). The attack also transfers to a WaveNet-based ASR system with a 63.28% success rate and a 0.6 mean CER when the maximum allowed \(\|\delta\|_{\infty}\) equals 400.

Figure 11. Genetic algorithm approach from Taori _et al._ (Taori et al., 2018).

## 4. Defense approaches

In this section, we discuss privacy-defending mechanisms that protect user privacy from the previously mentioned attacks. There are two main privacy-defending research directions: detection only and complete defense. In detection only approaches, the defender develops a classifier that determines whether the input from the target user has been attacked or not. In complete defense approaches, the goal is to lower the effectiveness of the attack.

### Detect Only Defense

A detect only defense is still the most widely used approach for addressing audio privacy attacks. This type of defense is effective against all known attacks and easy to implement. A detect only defense aims to detect when a malicious voice command is fed to the voice-controlled system, which can alert the user to the incoming adversarial attack.
#### 4.1.1. Add-on classifier

The most common detect only defense is to add a classifier that identifies whether the input signal is adversarial or not. The defender trains the classifier on a large number of adversarial examples and real speech signals; the classifier then learns the differences between real and fake speech. For impersonation attacks, a challenge from the Interspeech conference named the Automatic Speaker Verification Spoofing and Countermeasures (ASVspoof) Challenge has provided exciting results. The challenge has been held three times, in 2015 [(146)], 2017 [(73)], and 2019 [(135)]. In the most recent 2019 challenge, both logical access (LA) and physical access (PA) attack scenarios are considered. In LA attacks, the attack signal is directly injected into the voice-controlled system. For this scenario, the participants are asked to build a classifier that identifies whether the audio was generated by a text-to-speech synthesis (TTS) approach or by voice conversion (VC) technology. For the training and validation datasets, 6 known attacks from 2 VC systems and 4 TTS systems are used. For the test dataset, 11 unknown systems, comprising 2 VC, 6 TTS, and 3 hybrid TTS-VC systems, were chosen to test how well the models generalize to unseen data [(135)]. For the PA scenario, the participants need to build a classification model that distinguishes between human spoken speech and replayed speech. The replayed audio used in the challenge is recorded under 27 different acoustic configurations and 9 different replay configurations. The 27 acoustic configurations combine 3 categories of room size, 3 categories of reverberation, and 3 categories of talker-to-microphone distance. The 9 replay configurations combine 3 attacker-to-talker recording distances and 3 categories of loudspeaker quality. Two evaluation metrics are used: the first is the tandem detection cost function (t-DCF) [(72)] and the second is the equal error rate (EER). Both the EER and the t-DCF measure the trade-off between false acceptances and false rejections. The traditional EER is defined as the point at which the false acceptance rate (FAR) and the false rejection rate (FRR) are equal. The t-DCF is defined as: \[\text{t-DCF}=C_{miss}\times P_{miss}\times\text{EER}_{act}+C_{fa}\times P_{fa}\times\text{EER}_{spoof} \tag{13}\] where \(C_{miss}\) and \(C_{fa}\) are the costs associated with a missed detection and a false acceptance, respectively, and \(P_{miss}\) and \(P_{fa}\) are the corresponding prior probabilities. \(\text{EER}_{act}\) is the actual equal error rate of the system on genuine speech, and \(\text{EER}_{spoof}\) is the EER of the system on spoofed speech. The t-DCF can thus be seen as a weighted combination of the actual EER and the EER on spoofed speech, where the weights are determined by the costs and the prior probabilities. The goal is to minimize the t-DCF, which corresponds to finding a balance between false acceptances and false rejections that is optimal for the given costs and prior probabilities. From the results [(135)], the top teams used both neural-network-based classifiers and ensembles of classifiers. One representative paper from this challenge is [(9)]. In this paper, Alzantot _et al._ use 3 different features: Mel-frequency cepstral coefficients (MFCCs), constant-Q cepstral coefficients (CQCCs), and the logarithmic magnitude of the short-time Fourier transform (STFT).
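For orientation, the first and third of these features can be extracted in a few lines (a sketch assuming librosa; the file path and frame parameters are illustrative, and CQCC, discussed next, needs a dedicated implementation):

```python
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)

# Mel-frequency cepstral coefficients (MFCCs).
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=24)

# Logarithmic magnitude of the short-time Fourier transform (STFT).
log_stft = np.log(np.abs(librosa.stft(y, n_fft=512, hop_length=256)) + 1e-8)
```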
CQCC uses a constant-Q transform with geometrically spaced frequency bins to obtain higher frequency resolution at lower frequencies and higher temporal resolution at higher frequencies; more details about CQCC can be found in Todisco _et al._ [(134)]. Three models, one per input feature, were then trained. All three share the structure of ResNet [(56)], a classic classifier from the image domain; a ResNet with six residual blocks was chosen. The detailed structure of each residual block can be found in Fig. 12. Two fully connected layers and a softmax layer are attached at the end of the network to produce the probability of whether the audio is fake or not.

Figure 12. Detailed structure of the residual block used in Alzantot _et al._ [(9)].

A fusion mechanism ensembles the MFCC-ResNet, Spec-ResNet, and CQCC-ResNet models, assigning each a weight based on its performance on the validation dataset. The results show that the fusion model reached a 0.1569 t-DCF and 6.02% EER on the logical access task, and a 0.0693 t-DCF and 2.78% EER on the physical access task.

For ultrasonic attacks, in Zhang _et al._ [(158)], the authors develop a support vector machine (SVM) based classifier. The strategy involves analyzing the frequency range from 500 to 1000 Hz, where the attack signal differs from both the original signal and the recorded one. To validate the approach, the authors generated 12 voice commands from two different text-to-speech engines, NeoSpeech and Selvy, and obtained both recorded and recovered samples. Using a simple SVM classifier, the approach was able to distinguish recovered audio from recorded audio with 100% true positive and true negative rates. The results demonstrate the feasibility of a software-based defense strategy for detecting Dolphin attacks [(158)].

In the study by Roy _et al._ [(1)], the researchers attempt to classify attack signals by utilizing three primary characteristics of the created ultrasonic attack voice. Firstly, unlike the human voice, the attack signal consistently exhibits energy in the sub-50 Hz band. Secondly, there is a strong correlation between the inaudible leaked signal and the ultrasonic attack signal. Thirdly, human voices typically oscillate both above and below an amplitude of 0, while the attack signal's amplitude consistently remains above 0. As a result, the amplitude skewness introduced when constructing the ultrasonic attack signal can be employed as an additional feature to discern whether a signal originates from an ultrasonic source. The findings indicate that this detection technique achieves 99% accuracy in identifying counterfeit voices, as reported by Roy _et al._ (Roy et al., 2017).

For adversarial examples, (Roy et al., 2017) develops a CNN-based classifier to detect C&W attacks, which we introduced earlier in Section 3.4. The study presents the design and creation of two separate datasets, A and B, based on white-box and black-box attack methods, respectively. Dataset A is crafted using the C&W white-box attack method: the audio signals are divided into three length categories (short, medium, and long), corresponding attack targets are composed, and 900 adversarial examples and 900 normal examples are generated. Dataset B is created with mutual targeting on the Google Speech Commands dataset with 10 different commands, generating the same number of adversarial and normal examples (1800 each). Both datasets use a 16 kHz sampling rate.
The results are reported with 95% confidence intervals for all testing scenarios. Matched training and testing conditions achieved over 98% accuracy, and multi-condition training achieved over 96% accuracy. These results indicate that the CNN model can effectively learn adversarial perturbations, even with noise present in some normal speech examples. The CNN model also performed better at detecting white-box examples than black-box ones.

In Daubener _et al._ (Daubener et al., 2017) and Jayashankar _et al._ (Jayashankar et al., 2017), the authors use different uncertainty quantification (UQ) techniques to detect adversarial examples. UQ is the process of characterizing, modeling, and analyzing the uncertainties in a system or model. Daubener _et al._ (Daubener et al., 2017) use a feed-forward neural network and three neural networks specifically designed for uncertainty quantification, namely a Bayesian neural network, Monte Carlo dropout, and a deep ensemble, and reach 99% accuracy in detecting adversarial examples. Jayashankar _et al._ (Jayashankar et al., 2017) employed dropout uncertainty and an SVM to detect a variety of adversarial examples. Using a defense dropout rate of 0.1 and training the SVM on the first four moments of the character-sequence-based uncertainty distribution, they achieved their best results: 96.5% accuracy for the C&W attack, 88.5% for the Noise Reduction Robust (NRR) attack, 92.0% for the Imperceptible Audio attack, and 100% for the Universal Perturbation attack.

Add-on classifier defenses are useful and easy to implement, and they do not change any parameters of the original model. However, they also have limitations. First, since the classifiers are themselves deep neural networks, they are also vulnerable to adversarial attacks. Second, these classifiers require a large amount of adversarial data to train. Lastly, their performance on unseen attacks still needs improvement.

#### 4.1.2. Human motion detection

Since all the attacks we introduced in Section 3 use speakers to play a signal or simply inject noise into the voice-controlled system, another interesting detection only approach is to detect whether the signal comes from a live human. In Chen _et al._ (Chen et al., 2017), the author uses the magnetic field emitted by loudspeakers to detect impersonation attacks on voice-controlled systems. The defense mechanism uses a magnetometer to detect whether the source of a voice command is a loudspeaker; if the command comes from an electronic speaker, the system rejects it. The results show that the system reaches 100% detection accuracy and 0 EER. However, this performance was achieved only when the distance between the sound source and the smartphone was at most 6 cm. In Lei _et al._ (Lei et al., 2017), the researchers develop a Virtual Security Button (VSButton) that uses WiFi signals to detect indoor human motion. When motion is detected, the voice-controlled system becomes receptive to voice commands. However, there may be instances where a person speaking a voice command does not exhibit detectable motion, leading to legitimate commands being rejected. The author evaluates the VSButton prototype in three different space settings: a square room, a rectangular room, and a real-world apartment.
In the square room experiment, two configurations were tested at four indoor (A, B, C, D) and six outdoor (A', B', C', D', M', N') locations. In Configuration 1, the receiver (RX), a laptop with an Echo Dot, is placed at the center of the room, with the WiFi router (TX) at the edge. In Configuration 2, RX and TX are placed between Locations N' and M' so that they divide the distance into thirds. The rectangular room setting is similar to Configuration 2 of the square room, but in a rectangular room with brick walls. The real-world apartment is a 75 \(m^{2}\) apartment with two bedrooms. Performance is measured by the system's ability to correctly identify three cases: no motion, indoor motion, and outdoor motion. Six volunteers participated in the experiment, performing three different motions (waving a hand, sitting down and standing up, and jumping) inside and outside a room, representing weak, medium, and strong human motions, respectively. In the experiment, the receiver sends 50 Internet Control Message Protocol (ICMP) Echo Request messages per second to the transmitter, enabling the continuous collection of channel state information (CSI) for motion detection. In the square room with Configuration 1, all indoor motions could be differentiated from no-motion cases and outdoor motions at all locations except Location M'. The Mahalanobis distance for the WAVE-HAND motion ranged from 0.191 (Location D) to 0.218 (Location A) for indoor locations, and from 0.079 (Location C') to 0.156 (Location M') for outdoor locations. In the same square room with Configuration 2, the Mahalanobis distance of each indoor motion was higher than the maximum distance (i.e., 0.241, from JUMP at Location M') of all the outdoor motions. In the rectangular room with brick walls, the minimum Mahalanobis distance among all indoor motions (i.e., 0.147, from WAVE-HAND at Location A) was higher than the maximum distance (i.e., 0.042, from JUMP at Location M') of all outdoor motions. Finally, in the real-world apartment setting, the VSButton was able to differentiate between indoor and outdoor locations with the threshold \(t\) set to 0.1. In a 100-minute experiment, the Alexa device was accurately activated by indoor motions and was not activated by outdoor motions. These experimental findings demonstrate that the VSButton can accurately distinguish indoor motions from instances of no motion or outdoor motion.

In Feng _et al._ [(42)], a new system called VAuth is introduced, which offers continuous authentication for voice-controlled systems. VAuth collects body-surface vibrations of the user through widely used wearable devices and matches them against the voice command captured by the system's microphone. The researchers implemented a VAuth prototype using a commodity accelerometer and an off-the-shelf Bluetooth transmitter, integrating it with the Google Now system in Android and making it easily extendable to other platforms such as Cortana, Siri, or phone banking services. They conducted experiments with 18 participants who issued 30 different voice commands using VAuth in three wearable scenarios: eyeglasses, earbuds, and necklaces. The results showed that VAuth achieved over 97% detection accuracy and nearly 0 false positives, indicating successful command authentication. It worked effectively across different accents, mobility patterns (still vs. jogging), and languages (Arabic, Chinese, English, Korean, Persian).
VAuth successfully blocked unauthenticated voice commands replayed by an attacker or impersonated by other users, and it incurred minimal latency (300 ms on average) and energy overhead (requiring recharging only once a week). However, VAuth's reliance on users wearing these devices can be inconvenient for everyday use. Exploring human motion detection as an avenue for detecting adversarial attacks holds significant promise for future studies. However, recent work has been limited by hardware constraints and has not yet achieved satisfactory real-world protection. As such, additional research utilizing alternative features for detecting human motion is needed to address this limitation.

### Complete Defense

Compared to detect only techniques, complete defenses are more powerful. This type of defense uses various techniques to modify the original model setup and make the system robust to certain types of adversarial attacks: when an attack command reaches the voice-controlled system, the hardened network can still achieve its original goal and provide the correct output. However, complete defenses also have their limitations; when unknown attacks arrive, the modified system may fail. Detection only defenses, by contrast, have two advantages. Firstly, they can be widely applied to different types of attacks [(113; 83; 141)]. Secondly, they are easy to implement, as they require no changes to the system and no retraining. But detection only defenses also have their weaknesses, since they can only detect that an attack occurs; they cannot resolve the problem. The complete defense approach addresses this by lowering the effectiveness of the attack. Complete defense approaches are normally used to defend against adversarial attacks and have proven useful in the image domain [(155; 43; 4; 81)]; a growing number of audio privacy-defending approaches likewise aim to strengthen the deep learning model. Complete defense approaches have two main weaknesses. Firstly, unlike detection only mechanisms, which usually reach 90% or higher accuracy, a complete defense can only lower the effectiveness of adversarial examples to a certain extent. Secondly, most complete defense algorithms only work for known attacks; when unknown attacks appear, the modified model may fail.

#### 4.2.1. Hardware-based defense

In Zhang _et al._ [(158)], the author provides two hardware-based defenses against ultrasonic attacks. The first and most straightforward is to enhance the microphone: most MEMS microphones still admit signals in the ultrasonic range (>20 kHz)15 16, so the author suggests enhancing these microphones with filters that eliminate any signal in the ultrasonic range. The second proposed defense mechanism is to add a module before the low-pass filter that detects the ultrasonic signal and cancels its baseband component. These two methods can effectively defend against weak ultrasonic attacks; however, they may not work against stronger ultrasonic attacks such as [(115)].

Footnote 15: [http://www.mouser.com/ds/2/720/DS37-1.01%120AKU143320Datasheet-552974.pdf](http://www.mouser.com/ds/2/720/DS37-1.01%120AKU143320Datasheet-552974.pdf)

Footnote 16: [https://www.mouser.com/datasheet/2/720/PB24-1.0%20-%20AKU242%220Product%20Brief-770082.pdf](https://www.mouser.com/datasheet/2/720/PB24-1.0%20-%20AKU242%220Product%20Brief-770082.pdf)
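As a rough illustration of the first idea, a guard low-pass filter could look like the sketch below (assuming SciPy; the cutoff is illustrative). Note that in practice this filtering must happen in analog hardware before the microphone's nonlinear components, since the demodulated baseband appears before any digital processing; the digital version only conveys the signal-processing intent:

```python
from scipy.signal import butter, sosfilt

def guard_lowpass(audio, sr=96000, cutoff=18000.0):
    # Suppress content near and above the audible limit so that an
    # ultrasonic carrier (>20 kHz) cannot reach downstream stages.
    sos = butter(10, cutoff, btype="lowpass", fs=sr, output="sos")
    return sosfilt(sos, audio)
```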
#### 4.2.2. Increasing security level

In Petracca _et al._ [(109)], the authors propose raising the security level to prevent adversarial attacks on audio channels in mobile devices. They design and implement AuDroid, an extension to the SELinux reference monitor integrated into the Android OS. AuDroid enforces lattice policies on the dynamic use of system audio resources and gathers input from system apps and services to evaluate options for resolving unsafe communication channels. The system is specifically integrated into the Android Media Server, controlling access to the microphone and speaker to protect system apps from third-party apps and external attackers. AuDroid is evaluated on six types of attack scenarios and on 17 widely used apps that utilize audio. The results show that AuDroid effectively prevents exploits without impairing the normal functionality of system apps and services, with defense times of less than 4 \(\upmu\)s for the speaker and less than 25 \(\upmu\)s for the microphone, resulting in insignificant overhead during app usage. This type of defense allows the system to resist operating system attacks that use the built-in speakers [(34)]. Its limitation, however, is that it is not robust to other types of attacks, such as adversarial examples.

#### 4.2.3. Modify input data

Modifying input data for adversarial training has proven effective in the image domain [(117; 50)]. This defense uses adversarial training to regularize the network, reduce over-fitting, and add robustness. In Sun _et al._ [(127)], the authors incorporate adversarial examples directly into the training dataset of the ASR system. During training, the Fast Gradient Sign Method (FGSM) is employed to generate adversarial examples as training data; the underlying idea is to create adversarial examples that maximize the loss function. The perturbation is calculated as: \[\delta_{FGSM}=\epsilon\,\text{sign}\left(\nabla_{\mathbf{x}}J(\mathbf{\theta},\mathbf{x},t)\right) \tag{14}\] where \(\epsilon\) is a constant that controls the amplitude of the perturbation, \(\mathbf{x}\) represents the input, \(t\) denotes the target, and \(\mathbf{\theta}\) signifies the model's parameters. \(J(\mathbf{\theta},\mathbf{x},t)\) is the average cross-entropy loss function. The \(\text{sign}\) operation in FGSM yields the direction of the gradient, either positive or negative, instead of its actual value, facilitating better control over the quantity of perturbation introduced. The perturbation is regenerated at each iteration, which helps to improve the robustness of the ASR system. With \(\epsilon\) set to 0.3, the results demonstrate an average 14.1% WER reduction. While this method effectively enhances robustness against known attacks, it has limitations: when an unknown adversarial example is fed to the ASR system, the system may still be deceived. In Mendes _et al._ (Mendes et al., 2017), the authors propose a complete defense method to protect speech from adversarial examples; an overview of this defense can be found in Fig. 13. The raw audio signal is first transformed into the time-frequency domain using a short-time Fourier transform (STFT). Then the frequency masking threshold \(\theta_{x}\) is calculated using the method from Qin _et al._ (Qin et al., 2018).
A corresponding defensive perturbation is calculated as \(\delta_{D}=\max(0,\mathcal{N}(\mu,\sigma))\), where \(\mu\coloneqq 3k\times\theta_{x}\) and \(\sigma\coloneqq k\times\theta_{x}\), with \(k\) denoting a proportionality constant. The last step of this defense is to add the perturbation to the original input and feed the result into the ASR model.

#### 4.2.4. Modify the network

Besides modifying the input and using data augmentation techniques to make the model more robust to adversarial examples, we can also directly modify the network. In Yang _et al._ (Yang et al., 2019), the author adds a consistency criterion based on the temporal dependency that distinguishes real speech from adversarial examples. For a given audio, the author first selects the prefix of length \(p\) and feeds it to an ASR system to generate the transcript \(S_{p}\). Then the complete audio signal is fed into the ASR system, and the prefix of length \(p\) of the transcribed result is taken as \(S_{\{\text{whole},p\}}\). Due to temporal dependency, \(S_{p}\) and \(S_{\{\text{whole},p\}}\) should be consistent; however, if the speech has been attacked with an added perturbation, the two will differ. The study evaluated the proposed Temporal Dependency (TD) detection method on the speech-to-text attacks Commander Song and the optimization-based attack (Opt) of Section 3.4. For the Commander Song attack, the TD method with \(p=1/2\) successfully detected all generated adversarial samples. For the Opt attack, the TD method achieved an AUC score of 0.936 on Common Voice and 0.93 on LIBRIS when using WER as the detection metric. With \(p=4/5\) and using CER, the AUC score reached 0.969, indicating that the TD-based method is promising for distinguishing adversarial instances. The results suggest that the TD-based method is an easy-to-implement and effective approach for characterizing adversarial audio attacks.

#### 4.2.5. Audio compression

Data compression has emerged as a popular method of adversarial defense in the image domain (Mendes et al., 2017), and similar ideas have been implemented in the audio domain. In Das _et al._ (Das et al., 2018), the study explores the use of compression techniques, such as the Adaptive Multi-Rate (AMR) audio codec and MP3 compression, to mitigate adversarial perturbations in the audio domain. The researchers tested these techniques on adversarially manipulated audio samples and evaluated their effectiveness in defending an ASR model. They created targeted adversarial instances from the first 100 test samples of the Mozilla Common Voice dataset and preprocessed them using the compression techniques. The proposed system significantly reduces the attack success rate, from 92.5% to 0%. The results indicate that on adversarial inputs the word error rate (WER) of the ASR system without any defense increased from 0.369 to 1.287, whereas with AMR compression the WER only increased from 0.488 to 0.666, and with MP3 compression from 0.4 to 0.78. Likewise, in Andronic _et al._ (Andronic et al., 2018), the authors use MP3 compression to remove adversarial noise for the ASR system, resulting in a 21.31% reduction in the relative character error rate between adversarial examples and MP3-compressed adversarial examples. These compression techniques, based on psychoacoustic principles, were found to be effective in removing adversarial components of the audio that are imperceptible to humans but confuse the model.

#### 4.2.6. Adversarial Defense

The tactics used for attacking others can also serve to protect them.
Adversarial examples are increasingly being employed by researchers to safeguard users' privacy from voice-controlled assistants. For instance, Liu _et al._ (Liu et al., 2018) have devised "MyBabble," which uses the user's own voice to generate personalized noise that thwarts speech hijacking by voice-controlled assistants. Similarly, Liu _et al._ (Liu et al., 2018) have implemented an end-to-end approach that produces utterance-specific perturbations that obscure a set of words considered sensitive. In Xu _et al._ (Xu et al., 2019), the authors utilize adversarial noise based on MFCCs to defend users against malicious automatic speech recognition (ASR) systems, thereby raising the systems' word error rate. Additionally, in Chen _et al._ (Chen et al., 2019), the authors created a wearable microphone jammer that emits ultrasonic sounds to protect people's conversations from being overheard by voice-controlled devices.

Figure 13. Detailed structure of the defense mechanism from Mendes _et al._ (Mendes et al., 2017).

## 5. Discussion

In the previous sections, we compiled a comprehensive list of privacy-attacking and privacy-defending techniques, introduced the theory behind each mechanism, and discussed the limitations and advantages of specific techniques. In this section, we summarize our key findings and make recommendations for future research.

**Combination of attacks may become a big threat.** Until now, we have defined each attack based on when it occurs in the voice-controlled system pipeline. However, attackers may combine these attacks, resulting in stronger ones. For example, an operating system attack can be combined with an adversarial example so that the malware causes the built-in speaker to play the inaudible adversarial example. Such combined attacks are much harder to detect and defend against.

**Reasons for adversarial vulnerability need more investigation.** The literature shows that all kinds of deep learning networks are vulnerable to certain attacks. As explained earlier, non-linearity is key to the existence of these attacks; however, more investigation is needed to explore their common features. Current complete defense methods still cannot fully defend against certain known attacks, and more work is needed to find new features of those attacks and build more robust ASR defense models. Also, many countermeasures are themselves based on DNNs, so these defenses are also vulnerable to adversarial attacks; a more general and stable defense may be needed.

## 6. Conclusion

Modern voice-controlled systems are vulnerable to privacy attacks. In this paper, we proposed a categorization of privacy-attacking and privacy-defending mechanisms and carefully introduced each attacking and defending technique together with its threat model. These privacy attacks can happen at each stage of the voice-controlled system pipeline, and they pose real-world threats to our daily lives. Privacy-defending techniques can make systems more robust but still cannot completely solve the problem. More studies of voice-controlled systems are needed to ensure that user privacy is preserved.
2309.08968
Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference
Large language models (LLMs) have revolutionized natural language processing (NLP) by excelling at understanding and generating human-like text. However, their widespread deployment can be prohibitively expensive. SortedNet is a recent training technique for enabling dynamic inference by leveraging the modularity in networks and sorting sub-models based on computation/accuracy in a nested manner. We extend SortedNet to generative NLP tasks, making large language models dynamic without any Pre-Training and by only replacing Standard Fine-Tuning (SFT) with Sorted Fine-Tuning (SoFT). Our approach boosts model efficiency, eliminating the need for multiple models for various scenarios during inference. We show that this approach can unlock the power of intermediate layers of transformers in generating the target output. Our sub-models remain integral components of the original model, minimizing storage requirements and transition costs between different computational/latency budgets. The efficacy of our proposed method was demonstrated by applying it to tune LLaMA 2 13B on the Stanford Alpaca dataset for instruction following and TriviaQA for closed-book question answering. Our results show the superior performance of sub-models in comparison to Standard Fine-Tuning and SFT+ICT (Early-Exit), all achieved with efficient tuning and without additional memory usage during inference.
Parsa Kavehzadeh, Mojtaba Valipour, Marzieh Tahaei, Ali Ghodsi, Boxing Chen, Mehdi Rezagholizadeh
2023-09-16T11:58:34Z
http://arxiv.org/abs/2309.08968v2
# Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference Using Sorted Fine-Tuning (SoFT)

###### Abstract

The rapid advancement of large language models (LLMs) has revolutionized natural language processing (NLP). While these models excel at understanding and generating human-like text, their widespread deployment can be prohibitively expensive. SortedNet is a recent training technique for enabling dynamic inference for deep neural networks. It leverages network modularity to create sub-models with varying computational loads, sorting them based on computation/accuracy characteristics in a nested manner. We extend SortedNet to generative NLP tasks, making large language models dynamic without any pretraining and by only replacing standard Supervised Fine-Tuning (SFT) with Sorted Fine-Tuning (SoFT) at the same cost. Our approach boosts model efficiency, eliminating the need for multiple models for various scenarios during inference. We show that using this approach, we are able to unlock the potential of intermediate layers of transformers in generating the target output. Our sub-models remain integral components of the original model, minimizing storage requirements and transition costs between different computational/latency budgets. By applying this approach on LLaMa 2 13B for tuning on the Stanford Alpaca dataset and comparing it to normal tuning and early exit via the PandaLM benchmark, we show that Sorted Fine-Tuning can deliver models twice as fast as the original model while maintaining or exceeding performance.

1University of Waterloo 2Huawei Noah's Ark Lab {mojtaba.valipour, ali.ghodsi}@uwaterloo.ca, {parsa.kavehzadeh, mehdi.rezagholizadeh, marzieh.tahaei, boxing.chen}@huawei.com

## 1 Introduction

Large language models are revolutionizing the way we interact with information in today's world (Hoffmann et al., 2022; Brown et al., 2020; Penedo et al., 2023; Scao et al., 2022). New models are continually emerging, demonstrating their capabilities not only in understanding but, more importantly, in generating human-like text. Notably, models such as ChatGPT, LLaMA 2 70B (Touvron et al., 2023), and Falcon 180B (Almazrouei et al., 2023) have had a profound impact on the applicability of large language models (LLMs). However, deploying these expansive language models can become prohibitively expensive. What distinguishes this new era of ChatGPT-like models is their ability to perform an extraordinarily wide array of tasks in natural language processing (NLP), reasoning, and more, all through behavior cloning (Wei et al., 2021; Wang et al., 2022). In fact, a single model can leverage the strong contextual learning ability offered by Supervised Fine-Tuning to address numerous tasks, spanning from language comprehension to complex reasoning. While this unified usage simplifies the deployment of these models as general assistants, it remains highly inefficient. Enabling dynamic inference, where the computational resources allocated to a given query vary at inference time, can significantly enhance the practicality of employing such models in real-time scenarios. This enables the use of smaller models when the budget is limited or latency is critical. It is important to emphasize that, given the substantial number of parameters in these large models, a viable dynamic inference strategy should not require loading different models during inference.
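To make the weight-sharing idea concrete, the following toy sketch shows a depth-dynamic forward pass in which every computational budget is served by a prefix of the same layer stack and a single shared head (illustrative PyTorch, not the authors' implementation; LayerNorm stands in for LLaMA's RMSNorm):

```python
import torch
import torch.nn as nn

class DynamicDepthLM(nn.Module):
    """Toy many-in-one model: any prefix of layers is a sub-model."""
    def __init__(self, n_layers=8, d_model=64, vocab=100):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
            for _ in range(n_layers)
        )
        self.norm = nn.LayerNorm(d_model)                  # shared pre-head norm
        self.head = nn.Linear(d_model, vocab, bias=False)  # shared LM head

    def forward(self, h, depth):
        # Run only the first `depth` layers; the same weights serve
        # every latency budget, so no second model is ever loaded.
        for layer in self.layers[:depth]:
            h = layer(h)
        return self.head(self.norm(h))

model = DynamicDepthLM()
x = torch.randn(1, 16, 64)
fast_logits = model(x, depth=3)  # tight latency budget
full_logits = model(x, depth=8)  # full-capacity model
```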
Previous research has explored methods for training dynamic models capable of adapting to evolving resource constraints (Cai et al., 2019; Hou et al., 2020; Xin et al., 2020; Fan et al., 2019). However, existing approaches often rely on complex training procedures or necessitate modifications to the original model architecture. SortedNet (Valipour et al., 2023) introduces a novel approach to training deep neural networks that leverages the inherent modularity of these networks to construct sub-models with varying computational loads. This method sorts sub-models hierarchically based on their computation/accuracy characteristics, facilitating efficient deployment during inference. Furthermore, it employs an efficient updating scheme that combines random sampling of sub-models with gradient accumulation to minimize the training cost. Consequently, with a single round of training, numerous models can be obtained within a single model. While the SortedNet approach has primarily been applied to vision and language understanding tasks, given the significant impact of generative language models in today's AI landscape, the efficacy of this method for generative tasks in NLP is of considerable interest. In fact, being able to make a large language model dynamic without the need for pretraining and only at the cost of a round of Supervised Fine-Tuning can open doors to efficient inference of these models without incurring additional expenses associated with common model compression methods like knowledge distillation and pruning, among others. Moreover, since all the resultant models are components of the original model, the storage requirements and the cost associated with transitioning between different computation demands become minimal. Otherwise, managing multiple models for various scenarios during inference becomes impractical. In this study, we challenge the conventional approach of relying solely on the last layer's contextual embeddings and use Sorted Fine-Tuning (SoFT) in place of Supervised Fine-Tuning to enhance the performance of these models across multiple layers. By doing so, we aim to provide new insights into the efficiency and effectiveness of middle layers in producing high-quality results for specific downstream tasks. Our proposed approach has the potential to optimize the usage of these models, ultimately enhancing their overall performance. In this paper, we seek to answer the following questions through systematic evaluation: i) Do the intermediate layers resulting from Supervised Fine-Tuning of a large language model generate accurate and meaningful outputs? ii) Does Supervised Fine-Tuning exhibit a sorted behavior, meaning that later layers produce more accurate and meaningful results than earlier layers? If so, to what extent? iii) How can we enhance this sorted behavior with minimal cost? To answer these questions, we employ LLaMa 2 13B and perform both standard Supervised Fine-Tuning (SFT) and Sorted Fine-Tuning on the Stanford Alpaca dataset (Taori et al., 2023), while maintaining equivalent costs for the two approaches. For Sorted Fine-Tuning, we target 8 sub-models and share the LLM head among them to ensure cost parity. We utilize the PandaLM benchmark (Wang et al., 2023) and human evaluation to assess the performance of the sub-models. Our findings demonstrate that with Sorted Fine-Tuning, models that are twice as fast as the original model can still outperform the normally tuned original model. 
The contributions of this paper can be summarized as follows:

* Extending the SortedNet method to tuning auto-regressive language models for generative tasks by sharing a single LLM head layer among sub-models.
* Generating 8 nested sub-models, ranging from 12 to 40 layers, from LLaMa2 13B by applying Sorted Fine-Tuning on the Stanford Alpaca dataset at a cost equivalent to Supervised Fine-Tuning.
* Evaluating the performance of the sub-models of LLaMA 2 and demonstrating, through extensive evaluation, the effectiveness of SortedNet tuning in enhancing the ability of intermediate layers for text generation.

## 2 Related Work

In this section, we briefly introduce the papers most relevant to our work.

**Many-in-One Models** Deep neural networks (DNNs) are usually overparameterized, which motivates researchers to look at how to use the parameters of these models more efficiently. A larger number of parameters leads to higher deployment costs for neural networks. Moreover, in practice, these overparameterized DNNs are supposed to serve customers with different requirements and computational budgets. To address these diverse demands, one can train models of different sizes, which is very costly (in terms of training and memory); an alternative is to train many-in-one networks (Cai et al., 2019). The goal of many-in-one solutions is to train a network together with some of its sub-networks at the same time on a particular task. For example, we can refer to the _Early Exit_ method (Xin et al., 2020), in which a prediction head is tuned on top of targeted intermediate layers of a network. Another case in point is _Layer Drop_ (Fan et al., 2019), which trains a network at any depth by randomly dropping layers during training. While both Early Exit and Layer Drop are simple solutions, they are not state-of-the-art in terms of performance. In Early Exit, we only train the output prediction layer on top of each intermediate layer, and this layer might not have enough capacity to retain good performance. Layer Drop, however, suffers from the enormous number of possible sub-models during training, which makes the training process exhaustive and sub-optimal. Moreover, in this solution, we need to adjust the layer drop rate, which determines the extent of layer dropping during training. This training-time drop rate determines the best size and setting of the model at inference time, which means that deviating from the training drop rate at inference time can lead to a significant drop in performance. Cai et al. (2019), in _Once for All (OFA)_, proposed an alternative solution to neural architecture search (NAS). OFA requires training the model and all possible sub-models in a progressive way, followed by a separate search phase. DynaBERT (Hou et al., 2020) is another work that targets training dynamic pre-trained many-in-one BERT models in two stages: first, distilling from the main network to the width-adaptive networks, and then distilling from the width-adaptive networks to the depth-adaptive networks. For both width-adaptive and depth-adaptive networks, there is a pre-defined set of widths and depths for the sub-models, such as 25%, 50%, 75%, and 100%. While both OFA and DynaBERT have shown successful results, their solutions are not scalable to large language models because of their complicated multi-stage training processes and their requirements for search and knowledge distillation. In the most recent effort, we have SortedNet Valipour et al.
(2023), which forms and trains sub-models of a network in a sorted manner and does not require any search during training or inference. SortedNet has shown superior performance compared with the other mentioned methods in terms of simplicity, performance, scalability, and generality. Considering these benefits, we target deploying the SortedNet training algorithm for building many-in-one LLMs.

**Many-in-One Large Language Models (LLMs)** Large language models have gained a lot of attention in the literature recently (Touvron et al., 2023; Brown et al., 2020; OpenAI, 2023; Chowdhery et al., 2022; Ouyang et al., 2022). In practice, these LLMs serve users with different tasks, expectations, and computational budget requirements (Sun et al., 2022). There are two types of adaptation approaches for making LLMs suit customer requirements: the first is so-called parameter-efficient tuning (PEFT), and the second is model compression. In PEFT, the core backbone model remains the same and we only update some adapter parameters (e.g., LoRA (Hu et al., 2021), KronA (Edalati et al., 2022), Adapter (Houlsby et al., 2019; Pfeiffer et al., 2020), DyLoRA (Valipour et al., 2022), Ladder Side-Tuning (Sung et al., 2022), and Compacter (Karimi Mahabadi et al., 2021)). In model compression, the larger model is compressed using model compression solutions such as knowledge distillation, pruning (Bansal et al., 2023), and quantization (Prato et al., 2019); a good related survey can be found in Zhu et al. (2023). Even though PEFT solutions are quite popular with LLMs, they do not provide dynamic-size LLMs. Model compression solutions are able to provide models of different sizes, but they need to train each compressed model separately, and they are not many-in-one models. To the best of our knowledge, no many-in-one LLM has been released yet. Considering the benefits of many-in-one networks and the growing application of LLMs, we are going to release the first many-in-one LLM by applying SortedNet training to the LLaMA 13B model.

## 3 Methodology

The methodology of this paper concerns making LLMs many-in-one, inspired by the SortedNet approach (Valipour et al., 2023). Consider a language model \(f(x;\theta)\) with parameters \(\theta\) and input \(x\), and review the training procedure:

Figure 1: Zero-Shot SoFT vs. Early-Exit SFT

**Forming Sub-Networks** First, we need to form the sub-networks of the LLM. For the sake of simplicity and without loss of generality, we focus on depth-wise sub-networks. Suppose that the sub-network \(f_{n}(x;\theta_{n})\) refers to the first \(n\) layers of \(f(x;\theta)\). In this paper, the language model is LLaMA2 13B. Since LLaMA2 comprises 40 layers, we define the sub-networks by \(n\in\mathbf{B}=\{12,16,20,24,28,32,36,40\}\).

**Calculating the Output of Sub-Networks** The output of each sub-model is predicted using the shared output prediction head from the last layer (original network). Bear in mind that in the LLaMA model, there is an RMSNorm layer (Zhang and Sennrich, 2019) before the output prediction head. This RMSNorm is added before the shared prediction head of every sub-model. We believe this normalization is a key factor that helps Sorted LLaMA generalize better across all sub-models.

**Objective Function** To train the network, we define the loss \(L_{n}(x;\theta_{n})\) for the \(n^{\text{th}}\) sub-model.
\[\begin{split}&\mathcal{L}=\sum_{n\in\mathbf{B}}L_{n}(x;\theta_{n})\\ &\theta^{+}=\theta^{-}-\eta\nabla_{\theta}\mathcal{L}\end{split} \tag{1}\]

In our case, \(|\mathbf{B}|=8\), which means the total loss is the summation of 8 different losses corresponding to the sub-models and the main model.

**Training Dataset** We utilized the Stanford Alpaca dataset (Taori et al., 2023), which includes demonstrations of 52K instruction-following examples.

**Evaluation** In this paper, in addition to the embedding produced by the last layer, we evaluate the quality of the embeddings produced by the intermediate outputs from block 1 to \(n\). The PandaLM benchmark (Wang et al., 2023) is used to compare the outputs of different sub-models. PandaLM deploys a large language model (a fine-tuned LLaMA 7B) to judge the quality of generated text from two sources. PandaLM provides a validation set consisting of 170 instructions1, denoted as \(T\), to evaluate target models on instruction-following tasks. To make sure that the order of the model responses has no impact on the judgment of the trained PandaLM evaluator, we report an average score over both Model-1-first and Model-2-first scenarios. The output of the PandaLM evaluation is the number of wins, denoted as \(W\), the number of losses, denoted as \(L\), and the number of ties on the validation set. The final reported score is calculated using the following formula: \[Score=(W-L)/T \tag{2}\]

Footnote 1: github.com/WeOpenML/PandaLM/blob/main/data/testset-inference-v1.json

The final score is a number between -1 and 1, in which 1 represents a strong win rate and -1 indicates poor overall performance of the model.

**Baseline** The primary objective of an LLM in this paper is to follow the provided instructions for a query. Therefore, following the setup of Alpaca (Taori et al., 2023), we fine-tuned LLama2 13B on the Stanford Alpaca dataset with two setups: (1) regular Supervised Fine-Tuning (SFT) as the baseline, focusing only on the training of the last layer of the network, as is common practice in the literature; and (2) Sorted Fine-Tuning (SoFT), calculating the loss for multiple outputs from layer 12 to layer 40 (the last layer) at intervals of four layers, and training multiple models simultaneously as explained in the previous section.

Figure 2: Zero-Shot SoFT vs. Zero-Shot SFT

Figure 3: Early Exit SoFT vs. Early Exit SFT

Figure 4: A comparison of sub-models based on output logits and hidden state cosine similarity.

## 4 Experiments

This section delves into the specifics of the experiments conducted and the analysis provided to better understand the effect of Sorted Fine-Tuning on the performance of a large language model like LLama2 (Touvron et al., 2023).

### 4.1 What is the effect of sorting information across layers of a generative model?

As mentioned before, we generated responses at all the layers \(n\in\mathbf{B}\) for both the SFT- and SoFT-trained models. Then we conducted a pair-wise comparison between all the sub-models using the PandaLM evaluator. As the results in Figure 1 suggest, sorted training has a significant impact on transferring the learned knowledge to intermediate layers: Sorted LLaMA (i.e., SoFT) outperforms regular fine-tuning (SFT) by a meaningful margin in nearly all layer comparisons in this automated evaluation.
It is important to note that this holds despite the fact that we evaluated SoFT in a zero-shot manner, whereas to make the results of the SFT layers more meaningful, inspired by Early-Exit (Xin et al., 2020), we needed to train the classifier layer for each SFT sub-model separately for one additional epoch. It might be noted that the Layer 12 performance of SFT is slightly better than that of Layer 12 of Sorted LLaMA. We argue this happens because the evaluator has not been trained to handle gibberish robustly and compare it fairly. As can be seen in Table 1, the text generated by the earlier layers of SFT is mostly gibberish. As we move to higher layers of SFT, the generated text becomes more and more meaningful, which makes the comparison with the Sorted LLaMA layer counterpart more reasonable. Another area in which the SFT model performs better than Sorted LLaMA is the last two columns, where different layers of Sorted LLaMA are compared to the last two sub-models of the fine-tuned model. The fine-tuned model performs better only with a larger number of parameters; for example, 32 layers of Sorted LLaMA do not beat the 40-layer SFT model. According to the results, the Sorted LLaMA sub-model with 32 layers performs almost as well as regular fine-tuning of the full-size model. This showcases the impressive ability of our proposed paradigm to generate powerful, small sub-models with performance similar to the original model. In Figures 2 and 3, we evaluate both SFT and SoFT under equal conditions, Zero-Shot and Early-Exit respectively. As depicted, the outcome remains almost unchanged.

### 4.2 Analysis

#### 4.2.1 A comparison between the learned probability distribution of SoFT versus SFT

Figure 4 (Left) compares the probability distributions of the Sorted LLaMA and SFT sub-models at different output positions. We used the Kullback-Leibler (KL) divergence as the metric to measure the similarity between two probability distributions. The comparison between the last layer and layers 12 to 36 of the SFT model is shown in Figure 3(a) (Left). Evidently, the output distribution diverges quickly from that of the last layer after generating the initial tokens, even in higher layers like 36 and 32. It is important to note that this evaluation was generated without adjusting the classifier head, in a zero-shot manner. Figure 3(b) (Left) demonstrates that in Sorted LLaMA, as we get closer to the last layer, the likelihood distribution of the produced output becomes increasingly similar to that of the full-size model, at least at the initial positions of the generated text. Figure 3(c) (Left) shows the comparison between different SFT layers and the last Sorted LLaMA layer. The figure shows that only SFT's full-size output distribution is close to that of the sorted full-size model, while the other layers' distributions diverge quickly in the initial steps compared to SoFT. Finally, Figure 3(d) (Left) compares the output distributions of all sorted layers to the last SFT layer. Compared to Figure 3(c) (Left), Figure 3(d) (Left) shows that Sorted LLaMA can keep the output distribution close to that of the SFT full-size model even in lower layers for the initial output tokens.

#### 4.2.2 A comparison between the learned representation of SoFT versus SFT

Figure 4 (Right) compares the learned hidden state representations of the SFT and Sorted LLaMA sub-models at various positions in the output. This makes the analysis independent of the language model head.
We used cosine similarity to measure the difference between two representations. The cosine similarity heatmaps are highly correlated with the KL-divergence heatmaps explained in the previous section. In Figure 3(a) (Right), the heatmap of hidden-state cosine similarity between different SFT sub-models and the SFT last layer is depicted. As in the corresponding left plot, the similarity quickly diminishes after a few tokens, and this fading is more pronounced in earlier layers. On the other hand, Figure 3(b) (Right) shows that the representations of the Sorted sub-models stay similar to those of the Sorted last layer even after generating multiple initial tokens. Figure 3(c) (Right) compares all SFT sub-models with the Sorted last layer in terms of hidden representation similarity. Again, as in the probability distribution analysis, the similarity between the SFT sub-models and the Sorted last layer fades immediately after the first few generated tokens, while Figure 3(d) demonstrates the capability of the Sorted LLaMA sub-models to keep their learned representations close to the SFT last-layer hidden states.

#### 4.2.3 Case Specific Analysis

Table 1 shows two sample instructions from the PandaLM benchmark and the responses generated by the SFT Early Exit and Sorted LLaMA sub-models. In the first example, Sorted LLaMA demonstrates superior performance in preserving and transferring the last-layer behavior to earlier sub-models, as indicated by the green (related to the query) and red (hallucinations, irrelevant content, etc.) highlighting. Sorted sub-models generate almost correct answers from the 20-layer sub-model onward, while the first meaningful result from the SFT sub-models appears at layer 28. In the second example, although both SFT and Sorted LLaMA generate nearly perfect responses in the last three sub-models, there is still a noticeable difference between the performance of Sorted and SFT in the early-layer sub-models. While the SFT sub-models struggle to complete the answer without providing any meaningful definition of the given word in the early layers (12 to 28), the Sorted sub-models already start to generate brief explanations of the query. We provide further details and experiments in Appendix A.

## 5 Conclusion

In this work, we presented Sorted LLaMA, a many-in-one LLaMA model for dynamic inference obtained by using Sorted Fine-Tuning instead of Supervised Fine-Tuning. Sorted LLaMA unlocks the potential representation ability of intermediate layers, offering dynamic adaptation without pre-training or the additional expenses associated with model compression. It presents a promising avenue for optimizing generative language models in NLP. Our approach makes the deployment of these models far more efficient: since all sub-models remain integral components of the original model, the burden of storage requirements and transition costs between different computational demands is minimized, making the management of multiple models during inference a practical reality. Our systematic evaluation challenged conventional wisdom by focusing on the effectiveness of middle layers in producing high-quality results for specific downstream tasks. Through Sorted Fine-Tuning, we unlocked the potential of these intermediate layers to enhance performance, ultimately optimizing the usage of LLMs.
## 6 Limitations

Despite demonstrating the effectiveness of the SortedNet approach for large language models, further research is necessary to better understand the scope of its applicability to LLMs. For example, applying this method during pre-training, sorting other model dimensions such as attention heads and hidden dimensions, and investigating the impact of choosing a specific architecture offer potential avenues for future research. Our study might also be biased by automated evaluation, requiring further investigation through human evaluation.
2309.06951
TransNet: A Transfer Learning-Based Network for Human Action Recognition
Human action recognition (HAR) is a high-level and significant research area in computer vision due to its ubiquitous applications. The main limitations of current HAR models are their complex structures and lengthy training time. In this paper, we propose a simple yet versatile and effective end-to-end deep learning architecture, coined as TransNet, for HAR. TransNet decomposes the complex 3D-CNNs into 2D- and 1D-CNNs, where the 2D- and 1D-CNN components extract spatial features and temporal patterns in videos, respectively. Benefiting from its concise architecture, TransNet is ideally compatible with any pretrained state-of-the-art 2D-CNN models in other fields, which can be transferred to serve the HAR task. In other words, it naturally leverages the power and success of transfer learning for HAR, bringing huge advantages in terms of efficiency and effectiveness. Extensive experimental results and the comparison with the state-of-the-art models demonstrate the superior performance of the proposed TransNet in HAR in terms of flexibility, model complexity, training speed and classification accuracy.
K. Alomar, X. Cai
2023-09-13T13:34:22Z
http://arxiv.org/abs/2309.06951v1
# TransNet: A Transfer Learning-Based Network for Human Action Recognition

###### Abstract

Human action recognition (HAR) is a high-level and significant research area in computer vision due to its ubiquitous applications. The main limitations of current HAR models are their complex structures and lengthy training time. In this paper, we propose a simple yet versatile and effective end-to-end deep learning architecture, coined as _TransNet_, for HAR. TransNet decomposes the complex 3D-CNNs into 2D- and 1D-CNNs, where the 2D- and 1D-CNN components extract spatial features and temporal patterns in videos, respectively. Benefiting from its concise architecture, TransNet is ideally compatible with any pretrained state-of-the-art 2D-CNN models in other fields, which can be transferred to serve the HAR task. In other words, it naturally leverages the power and success of transfer learning for HAR, bringing huge advantages in terms of efficiency and effectiveness. Extensive experimental results and the comparison with the state-of-the-art models demonstrate the superior performance of the proposed TransNet in HAR in terms of flexibility, model complexity, training speed and classification accuracy.

## I Introduction

The computer vision community has studied video analysis for decades, including action recognition [1] and activity understanding [2]. Human action recognition (HAR) analyses and detects actions from unknown video sequences. Due to the rising demand for automated behaviour interpretation, HAR has gained dramatic attention from academia and industry and is crucial for many applications [3]. Good action recognition requires extracting spatial features from the sequenced frames (images) of a video and then establishing the temporal correlation (i.e., temporal features) between these spatial features. Thus, action recognition models analyse two types of features, establish their relationship, and classify complex patterns. This makes these models vulnerable to a number of significant challenges, including i) a limited ability to exploit transfer learning from advanced models in other fields of computer vision, ii) the need for large volumes of data due to the model complexity, iii) the need for accurate temporal analysis of spatial features, and iv) the overlap of moving object data with cluttered background data [4]. The improvement process across generations of these models is inconsistent [5]. This results in a wide range of works that may have difficulty transferring learning ability between generations, especially when these models are constructed differently and/or developed in different fields to extract specific spatial features for HAR. Temporal modeling presents a big challenge in action recognition. To address this, researchers often employ 3D-CNN models, which excel at interpreting spatio-temporal characteristics but suffer from a much larger size compared to 2D-CNN models [6]. Moreover, optimising 3D networks becomes difficult when dealing with insufficient data [7], since training a 3D convolutional filter necessitates a substantial dataset encompassing diverse video content and action categories [8]. Unlike recurrent neural networks (RNNs) that emphasise temporal patterns [9], 3D networks analyse videos as 3D images, potentially compromising the sequential analysis of temporal data. Both 3D-CNNs and RNNs are challenged by the increased model size and lengthy training time [10]. The presence of cluttered backgrounds presents another challenge in HAR.
Indoor environments with static and constant backgrounds are typically assumed to yield high performance for HAR models, whereas performance can diminish significantly in outdoor contexts [11, 12]. Cluttered backgrounds introduce interruptions and background noise, encoding problematic information in the extraction of global features and leading to a notable decline in performance. To address this challenge, a practical approach is to design models that focus on the human object rather than the background. Scholarly literature consistently indicates that incorporating multiple input modalities, including optical flow and body part segmentation, shows promise in enhancing HAR performance. This conclusion is substantiated by a range of survey studies conducted in the field of action recognition, providing robust evidence for the effectiveness of leveraging diverse input modalities [13, 14, 15]. However, there are several issues with these types of models, including their various modelling steps, preprocessing stages, lengthy training time, and significant demands on resources such as memory and processing power. These models are also difficult to implement in real-world applications. In this paper, we propose an end-to-end deep learning architecture called _TransNet_ for HAR; see Figure 1. Rather than using complex 3D-CNNs, TransNet consists of 2D- and 1D-CNNs that extract spatial features and temporal patterns in videos, respectively. TransNet offers multiple benefits: i) a single network stream using only RGB frames; ii) transfer learning ability and flexibility because of its compatibility with any pretrained state-of-the-art 2D-CNN models for spatial feature extraction; iii) a customisable and simpler architecture compared to existing 3D-CNN and RNN models; and iv) fast learning speed and state-of-the-art performance in HAR. These benefits allow TransNet to leverage the power and success of transfer learning for HAR, bringing huge advantages in terms of efficiency and effectiveness. An additional contribution of this paper is the strategy of utilising autoencoders to form TransNet's 2D component, named _TransNet+_; see Figure 2. TransNet+ employs the encoder part of an autoencoder trained on computer vision tasks like human semantic segmentation (HSS) to conduct HAR. Extensive experimental results and the comparison with the state-of-the-art models demonstrate the superior performance of the proposed TransNet/TransNet+ in HAR.

## II Related Work

### _HAR with background subtraction_

Most research on HAR focuses on human detection and motion tracking [16]. Background subtraction has been suggested in a number of methods and proven to be viable for HAR. For example, a background updating model based on a dynamic optimisation threshold method was developed in [17] to detect more complete features of the moving object. The work in [18] introduced a basic framework for detecting and recognising moving objects in outdoor CCTV video data using background subtraction and CNNs. Jaouedi et al. [16] employed a Gaussian mixture model and Kalman filter [19] techniques to detect human motion by subtracting the background.

### _HAR with multimodality_

Since video comprehension requires motion information, researchers have integrated several input modalities in addition to RGB frames to capture the correlation between frames in an effort to enhance model performance.
**Optical flow.** Optical flow [20], which effectively describes object or scene motion flow, is one of the earliest attempts to capture temporal patterns in videos. In comparison to RGB images, optical flow may successfully remove the static background from scenes, resulting in a simpler learning problem than using RGB images as input [21, 22]. Simonyan et al. [23] began the trend of using multiple input modalities, including optical flow, with CNNs. However, when compared to the latest deep learning techniques, optical flow has a number of disadvantages, including being computationally complex and highly noise-sensitive [24, 25], which make its use in real-time applications less feasible.

**Semantic segmentation.** Semantic segmentation is a technique that may be used to separate either the entire body or particular body parts from the background [26]. It is a pixel-wise labelling of a 2D image, offering spatial features describing the shape of the object of interest [27]. Zolfaghari et al. [28] presented a chained multi-stream model that pre-computes and integrates appearance, optical flow, and human body part segmentation to achieve better action recognition and localisation. Benitez et al. [29] offered an alternative to the costly optical flow estimates used in multimodal hand gesture recognition methods. It was built using RGB frames and hand segmentation masks, with better results achieved. Although semantic segmentation approaches have shown promising outcomes in action recognition, the majority of them are computationally demanding. In fact, real-world action recognition methods involving semantic segmentation of video content are still in their infancy [30]. In sum, most of the aforementioned research focused on creating synthetic images that reflect different input modalities and then analysing them using action recognition models. Pre-computing multiple input modalities such as optical flow, body part segmentation, and semantic segmentation can be computationally and storage-intensive, making them unsuitable for large-scale training and real-time deployment. Since research on semantic segmentation for HAR is still at an early stage, one of the objectives of this study is to enhance its potential in HAR.

Fig. 1: TransNet architecture for HAR. The given video frames are input into the time-distributed layer, which employs a 2D-CNN model (e.g., MobileNet, MobileNetV2, VGG16, or VGG19) several times based on the number of video frames, allowing the architecture to analyse multiple frames without expanding in size. Then the spatial features corresponding to the individual input frames are generated, which are subsequently analysed by the 1D-CNN layers, extracting the spatio-temporal features. The SoftMax layer finally classifies the action according to the spatio-temporal pattern.

Fig. 2: An illustration of TransNet+ for HAR. TransNet+ inherits the architecture of TransNet. It uses the autoencoder's encoder to form the TransNet's 2D component.

### _3D-CNNs decomposition_

Video can be conceptually simplified by viewing it as a 3D tensor with two spatial dimensions and one time dimension. As a result, 3D-CNNs are adopted to model the spatial and temporal data in video as a processing unit [31, 32, 33]. Ji et al. [32] proposed the pioneering work on the application of 3D-CNNs in action recognition. Although the model's performance is encouraging, the network's depth is insufficient to demonstrate its potential.
Tran et al. [1] extended the work in [32] to a 3D network with more depth, called C3D. C3D adopts a modular architecture, which can be viewed as a 3D version of the VGG16 network. It is worth noting that training a sufficiently deep 3D-CNN from scratch results in much higher computational cost and memory requirements compared to 2D-CNNs. Furthermore, 3D networks are complex and difficult to optimise [7]; therefore, a big dataset with diverse video data and activity categories is required to train a 3D-CNN effectively. In addition, it is not straightforward for 3D-CNNs to transfer learning from state-of-the-art pretrained 2D-CNN models since the kernel shapes are completely different. Carreira et al. [34] proposed I3D, a 3D-CNN architecture that circumvents the dilemma that 3D-CNNs must be trained from scratch. A strategy was employed to transform the weights of pretrained 2D models, e.g. on ImageNet, to their 3D counterparts. To understand this intuitively, they repeated the weights of the trained 2D kernels along the time dimension of the 3D kernels. Although I3D was successful in overcoming the challenge of spatial transfer learning, its 3D kernels require enormous quantities of action recognition data to capture temporal features. Moreover, the way that I3D stretches 2D-CNN models into 3D-CNNs remains computationally expensive. P3D [35] and R(2+1)D [36] investigate the concept of decomposing the 3D-CNN's kernels into 2D and 1D kernels. They differ in their arrangement of the two factorised operations and their formulation of each residual block. This kind of approach to 3D network decomposition acts at the kernel level. The notion of kernel-level factorisation restricts the ability to switch models (e.g., ResNet50 and VGG16) based on implementation requirements and hinders transfer learning from the current state-of-the-art models.

## III Proposed TransNet

In this section, we first present our motivations and then introduce the proposed TransNet and its variants.

### _Preliminary_

Video data analysis in deep learning commonly involves two types of approaches: 2D-CNN-RNN [37, 38, 39, 40] and 3D-CNN [41, 42, 43]. The CNN-RNN approach comprises a spatial component based on a 2D-CNN and a temporal component based on an RNN, offering customisation in the 2D-CNN part. However, it often requires longer training time due to the complexity of RNNs compared to CNNs [44]. On the other hand, 3D-CNNs are faster and simpler to implement but struggle with convergence and generalisation when dealing with limited datasets [45]. Alternatively, the use of 1D-CNNs in temporal data analysis holds promise for developing more accurate and efficient models [46, 47]. The other main motivation is transfer learning, applying well-designed and well-trained models learnt from one task (i.e., the source task, generally with large data available) to another (i.e., the target task, generally with limited data available) for performance enhancement [48]. The underlying essential assumption is that the source and target tasks are sufficiently similar [48, 49]. In the data scarcity scenario, models may be prone to overfitting, and data augmentation may not be enough to resolve the issue [50]. Therefore, transfer learning could play a key role in this regard. Since HAR requires 3D data analysis, obtaining an optimised model requires training on a large amount of data
compared to 2D data [51, 8]. This calls for the use of transfer learning, e.g., pre-training state-of-the-art models first to classify 2D images using large datasets such as ImageNet. However, it is important to study and verify the assumption that the HAR task shares sufficient similarities with the image classification task. Previous research in [52] has shown disparities between CNNs trained on ImageNet and human observers in terms of shape and texture cues, with CNNs exhibiting a strong preference for texture over shape. Additionally, several studies suggest that object shape representations hold greater importance in action recognition tasks [53, 54, 55, 56].

Fig. 3: Data samples. First row: samples of UCF101 actions (left) and HMDB51 actions (right); second row: samples of the Supervisely person dataset (left) and a frame sequence of the action class "walking" from the KTH dataset (right).

### _Methodology_

**TransNet.** We propose to construct a paradigm utilising the synergy of 2D- and 1D-CNNs; see Figure 1 for the end-to-end _TransNet_ architecture. TransNet provides flexibility to the 2D-CNN component in terms of model customisability (i.e., using different state-of-the-art 2D-CNN models) and transferability (i.e., involving transfer learning); moreover, it benefits from the 1D-CNN component supporting the development of faster and less complex action recognition models. TransNet includes a time-distributed layer wrapping the 2D-CNN model. In particular, the 2D component is customisable, and any sought-after 2D-CNN model (e.g., MobileNet, MobileNetV2, VGG16 or VGG19) can be utilised. The time-distributed layer is followed by three 1D convolutional layers for spatio-temporal analysis. In detail, the first one's kernels process the feature map vectors over \((n-1)\) steps, where each kernel has a size of 2, capturing the correlations between a frame and its neighbour, and \(n\) is the number of frames in a video clip; the second one's kernels have a size of \((n-1)\), analysing all feature vectors in one step to capture the whole temporal pattern of the frame sequence; and the third one uses the SoftMax function for the final classification, followed by the flatten layer. More details are given below. We first define the symbols used for semantic segmentation. Let \(\mathbf{X}\) represent the input image, and \(\mathbf{z}=p_{\theta}(\mathbf{X})\in\mathbb{R}^{L}\) be the output vector (i.e., latent representation) of the encoder function \(p_{\theta}\) (e.g. MobileNet or VGG16) with parameters \(\theta\). The decoder function is defined analogously. The formed autoencoder can then be trained with the ground truth images. Let \(\mathcal{X}\) be a collection of \(n\) frames \(\mathcal{X}=\{\mathbf{X}^{i}\}_{i=1}^{n}\), which is fed into the 2D component (spatial component) of the TransNet architecture in Figure 1. The trained encoder \(p_{\theta}\) is then used \(n\) times to process \(\mathcal{X}\) frame by frame and create a set of \(n\) spatial feature vectors \(\mathcal{Z}=\{\mathbf{z}^{i}\}_{i=1}^{n}\), where \(\mathbf{z}^{i}=p_{\theta}(\mathbf{X}^{i})\). Let \(\{\mathbf{w}^{j,1},\mathbf{w}^{j,2}\}_{j=1}^{K}\) be a set of weights, where \(\mathbf{w}^{j,1},\mathbf{w}^{j,2}\in\mathbb{R}^{L}\).
The first of the three 1D layers (i.e., the temporal component) processes every two adjacent spatial vectors of \(\mathcal{Z}\), i.e., \(\{\mathbf{z}^{i},\mathbf{z}^{i+1}\}\), to generate the corresponding spatio-temporal feature vectors \(\mathbf{h}^{i}=(h_{1}^{i},\cdots,h_{K}^{i})\in\mathbb{R}^{K},i=1,\ldots,n-1\), where \[h_{j}^{i}=f(\sum_{l=1}^{L}\sum_{k=i}^{i+1}z_{l}^{k}w_{l}^{j,k-i+1}+b_{i}^{j}),\ \ j=1,\ldots,K,\] \(b_{i}^{j}\) are the biases and \(f\) is the activation function (i.e., ReLU, \(f(x)=\max(0,x)\), is used here). Let \(\{\hat{\mathbf{w}}^{j,1},\hat{\mathbf{w}}^{j,2},\cdots,\hat{\mathbf{w}}^{j,n-1}\}_{j=1}^{C}\) be another set of weights, with \(\hat{\mathbf{w}}^{j,k}\in\mathbb{R}^{K},k=1,\ldots,n-1\). The second 1D layer processes the set of spatio-temporal vectors \(\{\mathbf{h}^{i}\}_{i=1}^{n-1}\) to generate a single spatio-temporal vector \(\mathbf{v}=(v_{1},\cdots,v_{C})\in\mathbb{R}^{C}\), where \[v_{j}=f(\sum_{l=1}^{K}\sum_{k=1}^{n-1}h_{l}^{k}\hat{w}_{l}^{j,k}+\hat{b}^{j}),\ \ j=1,\ldots,C,\] and \(\hat{b}^{j}\) are the biases. Finally, the SoftMax layer is used on \(\mathbf{v}\) to classify action classes.

**TransNet+.** Besides using a sought-after pretrained 2D-CNN for TransNet's 2D component, below we present another way of leveraging transfer learning for it. To do so, we construct an autoencoder where TransNet's 2D component serves as its encoder. The autoencoder is then trained on a specific computer vision task such as HSS to extract specific features such as human shape; e.g., see the left of the second row in Figure 3. After training, the encoder's parameters are saturated with weights that are capable of describing the features of the required task, such as HAR; see Figure 2. In this way, features like object shape that TransNet's 2D component needs to learn can be predetermined by training the autoencoder. We name this way of executing TransNet as _TransNet+_. Note that autoencoders have been used in action recognition challenges, e.g. [28]. However, there are a number of disadvantages in their use of autoencoders, including the multiplicity of modelling steps, the need for a large amount of memory, and the lengthy training time due to the high computational cost of training the autoencoder network and the action recognition network. In contrast, TransNet+ is a big step towards an end-to-end HAR model with potential for real-time implementation, since it simplifies the process by integrating just the trained encoder rather than the entire autoencoder in TransNet, on the premise that the trained encoder carries weights capable of describing the important features (see Figure 2). On the whole, the traditional method of using autoencoders in HAR differs from TransNet+ in that the traditional method uses the entire autoencoder and its output as the next stage's input, whereas TransNet+ employs just the trained encoder of the autoencoder for spatial feature extraction.

**Model complexity.** The proposed TransNet model is customisable, and thus its size varies depending on the 2D-CNN model being used in the spatial component. In particular, it is quite cost-effective since it uses a time-distributed layer, allowing the 2D-CNN to be used repeatedly without expanding in size. Table I gives the number of parameters for different choices of the 2D-CNN models.

## IV Data

In our study, we use two primary groups of benchmark datasets.
The first consists of ImageNet and the Supervisely person dataset, used for transfer learning, while the second consists of the KTH, HMDB51 and UCF101 datasets, used for method evaluation (with a split ratio of 80% and 20% for training and test, respectively); see Figure 3 for some samples from these datasets.

### _Transfer learning datasets_

**ImageNet.** ImageNet [57] is a famous database consisting of 14,197,122 images with 1000 categories. Since 2010, it has been used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC).

**Supervisely person dataset.** This dataset [58] is publicly available for human semantic segmentation, containing 5,711 images and 6,884 high-quality annotated human instances. It is available for use in academic research with the purpose of training machines to segment human bodies.

### _Human action recognition datasets_

**KTH.** In 2004, the Royal Institute of Technology introduced KTH, a non-trivial and publicly available dataset for action recognition [59]. It is one of the most standard datasets, including six actions (i.e., walking, jogging, running, boxing, hand-waving, and hand-clapping). Twenty-five different people performed each activity, allowing for variation in performance; moreover, the setting was systematically changed for each actor's action, i.e., outdoors, outdoors with scale variation, outdoors with varied clothing, and indoors. KTH includes 2,391 sequences. All sequences were captured using a stationary camera at 25 fps over homogeneous backgrounds.

**UCF101.** In 2012, UCF101 [60] was introduced as a follow-up to the earlier UCF50 dataset. It is a realistic (not staged by actors) HAR dataset, containing 13,320 YouTube videos representing 101 human actions. It provides a high level of diversity in terms of object appearance, significant variations in camera viewpoint, object scale, illumination conditions, a cluttered background, etc. These video clips are, in total, over 27 hours in duration. All videos have a fixed frame rate of 25 fps at a resolution of \(320\times 240\).

**HMDB51.** HMDB51 [61] was released in 2011 as a realistic HAR dataset. It was primarily gathered from movies, with a small portion coming from public sources such as YouTube and Google videos. It comprises 6,849 videos organised into 51 action categories, each with at least 101 videos.

## V Experimental Results

### _Settings_

Our model is built using Python 3.6 with the deep learning library Keras, the image processing library OpenCV, matplotlib, and the scikit-learn library. A computer with an Intel Core i7 processor, an NVidia RTX 2070, and 64GB of RAM is used for training and testing. Four CNN models with small sizes (i.e., MobileNet, MobileNetV2, VGG16, and VGG19) are selected as the backbones of TransNet/TransNet+, with parameter numbers of 3,228,864, 2,258,984, 14,714,688, and 20,024,388 (without the classification layers), respectively. TransNet with each different backbone is implemented in three different ways: i) untrained; ii) trained on ImageNet; and iii) trained on HSS using the Supervisely person dataset as encoders. Note that the last way is the one described for TransNet+. For ease of reference, we drop the '+' sign in the following. All autoencoders are trained for 200 epochs with a batch size of 24. The models are first trained and evaluated on the KTH dataset.
Then the one with the best performance is selected to be evaluated on all the datasets and compared with the current state-of-the-art HAR models. Each video clip consists of a sequence of 12 frames, and the input modality is RGB with a size of \(224\times 224\times 3\).

### _Results and discussion_

In a nutshell, we conduct the experiments below with three main objectives: i) determining whether the proposed TransNet architecture offers a reliable mechanism for leveraging transfer learning; ii) evaluating whether the HSS-trained TransNet provides better spatio-temporal characteristics for HAR than the ImageNet-trained TransNet; and iii) validating whether the TransNet architecture can achieve state-of-the-art performance in comparison to current state-of-the-art methods in HAR. Initially, we evaluate TransNet on the KTH dataset, which is a suitable choice due to its primary emphasis on human action detection while excluding additional objects in the background, in contrast to the UCF101 and HMDB51 datasets. The purpose of this evaluation is to validate the viability of employing HSS as a means of pretraining to improve the performance of the model on similar tasks. The results presented in Table II demonstrate the superior performance of the TransNet model trained using HSS in comparison to its untrained and ImageNet-trained counterparts. Specifically, the untrained MobileNet, MobileNetV2, VGG16, and VGG19-based TransNet models achieved an average accuracy of 88.21%, and the ImageNet-trained models achieved an average accuracy of 95.09%. In contrast, the HSS-trained TransNet models achieved an average accuracy of 97.20%, indicating a significant improvement of \(\sim 8.99\%\) and \(\sim 2.11\%\) over the untrained and ImageNet-trained models, respectively. These findings underscore the effectiveness of the pretraining strategy employing autoencoders in enhancing the performance of the TransNet model. Additionally, the findings show the significance of incorporating transfer learning as a means of enhancing performance, thereby giving a substantial advantage to the 2D-1D-CNN architecture and enabling us to leverage transfer learning within the 2D-CNN component. Tables III, IV and V present the quantitative comparisons between TransNet and the current state-of-the-art methods on the HAR datasets, i.e., KTH, UCF101 and HMDB51. In these comparisons, a MobileNet-based TransNet pretrained on ImageNet is used. The findings demonstrate the exceptional performance achieved by the proposed TransNet, surpassing the existing state-of-the-art results by a considerable margin. Additionally, these findings solidify the 2D-1D-CNN architecture as a highly effective approach for HAR.
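To make the 2D-/1D-CNN decomposition of Section III concrete, below is a minimal Keras sketch of the TransNet pipeline under the settings above (12-frame clips of size \(224\times 224\times 3\)). The backbone choice, filter counts, and class count are illustrative assumptions rather than the exact hyperparameters used in the experiments; only the overall structure (a time-distributed 2D-CNN followed by three 1D convolutions and a flatten layer) follows the text.

```python
# A minimal sketch of the TransNet architecture (Section III). Filter counts
# and the number of classes are illustrative, not the paper's exact settings.
import tensorflow as tf
from tensorflow.keras import layers, models

n_frames, num_classes = 12, 6  # 12-frame clips; e.g. the 6 KTH actions

# Spatial component: a pretrained 2D-CNN applied to every frame through a
# time-distributed wrapper, so the model does not grow with the clip length.
backbone = tf.keras.applications.MobileNet(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))

inputs = layers.Input(shape=(n_frames, 224, 224, 3))
z = layers.TimeDistributed(backbone)(inputs)  # (batch, n, L) spatial features

# Temporal component: three 1D convolutions, as described in Section III.
h = layers.Conv1D(64, kernel_size=2, activation="relu")(z)   # frame pairs -> (n-1, 64)
v = layers.Conv1D(128, kernel_size=n_frames - 1, activation="relu")(h)  # whole clip
y = layers.Conv1D(num_classes, kernel_size=1, activation="softmax")(v)
outputs = layers.Flatten()(y)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

For TransNet+, the `backbone` above would instead be the encoder of an autoencoder pretrained on human semantic segmentation, loaded with its trained weights before being wrapped in the time-distributed layer.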
2309.03829
Stable-fixed-point description of square-pattern formation in driven two-dimensional Bose-Einstein condensates
We investigate pattern formation in two-dimensional Bose-Einstein condensates (BECs) caused by periodic driving of the interatomic interaction. We show that this modulation generically leads to a stable square grid density pattern, due to nonlinear effects beyond the initial Faraday instability. We take the amplitudes of two waves parametrizing the two-dimensional density pattern as order parameters in pattern formation. For these amplitudes, we derive a set of coupled time evolution equations from the Gross--Pitaevskii (GP) equation with a time-periodic interaction. We identify the fixed points of the time evolution and show by stability analysis that the inhomogeneous density exhibits a square grid pattern, which can be understood as a manifestation of a stable fixed point. Our stability analysis establishes the pattern in BECs as a nonequilibrium steady state.
Keisuke Fujii, Sarah L. Görlitz, Nikolas Liebster, Marius Sparn, Elinor Kath, Helmut Strobel, Markus K. Oberthaler, Tilman Enss
2023-09-07T16:42:06Z
http://arxiv.org/abs/2309.03829v3
# Square Pattern Formation as Stable Fixed Point in Driven Two-Dimensional Bose-Einstein Condensates ###### Abstract We investigate pattern formation in two-dimensional Bose-Einstein condensates (BECs) caused by temporal periodic modulation of the interatomic interaction. Temporal modulation of the interaction causes the so-called Faraday instability in the condensate, which we show generically leads to a stable square grid density pattern. We take the amplitudes in each of the two directions spanning the two-dimensional density pattern as order parameters in pattern formation and derive a set of simultaneous time evolution equations for those order parameters from the Gross-Pitaevskii (GP) equation with a time-periodic interaction. We identify the fixed points of the time evolution and show by stability analysis that the inhomogeneous density exhibits a square grid pattern as a stable fixed point. _Introduction._--Spontaneous pattern formation is a phenomenon in which a uniform state loses its stability and becomes inhomogeneous as external parameters are varied. As a spontaneous translational symmetry breaking in nonequilibrium dynamics, pattern formation appears in nature at diverse scales, not only in physics [1; 2] but also in chemical reactions [3] and biology [4]. Understanding what kinds of patterns are formed and how they are formed is a fundamental interdisciplinary question common to these studies. In BECs, temporal modulations of system parameters, such as the magnitudes of trapping potentials and interactions, cause parametric instability. This instability, called the Faraday instability as an analogy to a similar phenomenon in classical fluids [5; 6], results in spontaneous pattern formation [7]. Since ultracold atomic systems provide BECs with experimentally controllable parameters, one-dimensional patterns were observed in one-dimensional BECs [8; 9] and as surface waves in two-dimensional BECs [10]. Theoretically, parametric instabilities appearing in driven quantum gases, including simple BECs, have been intensively studied [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45]. For two-dimensional Faraday patterns, it was observed that the symmetry of selected patterns can be engineered by combining multiple temporal modulations of the interactions [46]. Furthermore, in our companion work [47], we have observed two-dimensional Faraday patterns, clearly exhibiting a square grid, with a single-frequency temporal modulation of the interaction. The realization of two-dimensional Faraday patterns opens the stage for investigating nonlinear effects, such as the correlations that emerge between instability-induced excitations in different directions in the plane. In this Letter, we derive time-evolution equations for the amplitudes of density waves in two directions spanning two-dimensional density patterns in two-dimensional BECs due to the Faraday instability. We show that inhomogeneous condensates exhibit a square grid as a stable pattern. In our model, we consider density modulations in two directions \(\mathbf{k}\) and \(\mathbf{p}\) in the plane with amplitudes \(R_{k}\) and \(R_{p}\), respectively. When the angle between the two directions is near \(\pi/2\), both amplitudes grow to finite values and we find that the system exhibits a stable grid pattern. 
Conversely, for small angles, only one of the amplitudes grows to a finite value while the other is suppressed, and the BEC exhibits a stripe pattern. This result is clearly seen in Fig. 1, where the global stability of the patterns differs significantly depending on the angle between the two directions. In the experiment, many modes at different angles will initially grow due to the instability, and our analysis shows that two of these modes with an angle close to \(\pi/2\) between them will reinforce each other and grow into a grid pattern, while other modes at small angles are suppressed.

Figure 1: Global stability of the patterns formed by two standing waves in a planar BEC in directions \(\mathbf{k}\) and \(\mathbf{p}\). Schematic figures at the fixed points represent corresponding stationary solutions, i.e., the grid-pattern, stripe-pattern, and uniform solutions. The angle \(\theta\) between the two excited modes is \(\pi/6\) (left) and \(\pi/2\) (right). In the latter case, the square grid pattern emerges as a stable fixed point. Parameters are \(\omega/\mu=2\) and \(A=0.6\) with small dissipation \(\Gamma=0.1\alpha\), where \(\omega,\mu\) and \(\alpha\) are the driving frequency, the chemical potential, and the drive amplitude for the Bogoliubov modes, respectively.

_BEC with a time-periodically modulated interaction._--We consider a two-dimensional BEC with an interaction strength \(g(t)=\bar{g}[1-A\sin(\omega t)]\), which is modulated periodically in time with drive amplitude \(|A|<1\) around its mean value \(\bar{g}\). The dynamics of a BEC with wave function \(\Psi(t,\mathbf{x})\) is described by the Gross-Pitaevskii (GP) equation [48]: \[i\frac{\partial}{\partial t}\Psi(t,\mathbf{x})=\biggl{[}-\frac{\nabla^{2}}{2m}+g(t)|\Psi(t,\mathbf{x})|^{2}\biggr{]}\Psi(t,\mathbf{x}). \tag{1}\] We assume that the BEC is confined to a sufficiently large flat potential and drop the trapping potential term; in the experiment [47] the potential has absorptive boundaries, which mimics an infinitely extended system. Equation (1) has a uniform solution, \(\Psi_{\text{uni}}(t)=\Psi_{0}\exp[-i\mu t-i(\mu/\omega)A\cos(\omega t)]\) with the chemical potential \(\mu=\bar{g}|\Psi_{0}|^{2}\), but this solution becomes unstable due to the Faraday instability induced by the oscillating interaction [7]. This instability can be understood to be caused by an anomalous amplification of excited modes with wavevector \(\mathbf{k}\) satisfying the resonance condition \(n\omega/2=E_{\mathbf{k}}\) for \(n\in\mathbb{N}\). Here, \(E_{\mathbf{k}}=\sqrt{\epsilon_{\mathbf{k}}(\epsilon_{\mathbf{k}}+2\mu)}\) and \(\epsilon_{\mathbf{k}}=\mathbf{k}^{2}/(2m)\) represent the Bogoliubov quasiparticle and single-particle dispersions, respectively. The resonance condition \(n\omega/2=E_{\mathbf{k}}\) comes from the fact that the energy quantum \(n\omega\), injected into the system by the oscillation, is split between two quasiparticle excitations with wavevectors \(\pm\mathbf{k}\). Within a linear analysis, one can indeed derive Mathieu's differential equation from Eq. (1), which shows the amplification of modes with wavenumbers around the resonance condition [49]. _The amplitude equation._--In two-dimensional systems, the density distribution resulting from the instability exhibits a grid pattern spanned by two non-parallel wavevectors.
The realized grid pattern is determined by the competition between the linear instability due to the parametric resonance and the nonlinear suppression from the interaction between the excited modes. We determine the magnitude of the amplitude and the angle of the realized grid, assuming that the drive amplitude \(A\) is small. The small amplitude \(A\) weakens the instability and slows the amplitude growth of the density pattern. As a result, the time evolution of the pattern amplitude is systematically obtained as slow-timescale dynamics. Using the multiple-scale method [1, 2], we derive the time-evolution equation for the pattern amplitude from Eq. (1). To investigate two-dimensional patterns spanned by two wavevectors, we expand the wave function as [50] \[\Psi(t,\mathbf{x})=\Psi_{\text{uni}}(t)\biggl{[}1+\phi_{k}(t)\cos(\mathbf{k}\cdot\mathbf{x })+\phi_{p}(t)\cos(\mathbf{p}\cdot\mathbf{x})\biggr{]}. \tag{2}\] The small drive amplitude \(A\) maintains the excitation \(\phi_{k/p}\) in the form of the Bogoliubov basis: \[\phi_{k/p}(t) =\biggl{(}1-\frac{\epsilon_{\mathbf{k}/\mathbf{p}}+2\mu}{E_{\mathbf{k}/\mathbf{p} }}\biggr{)}R_{k/p}(t)e^{i\omega t/2}\] \[\quad+\biggl{(}1+\frac{\epsilon_{\mathbf{k}/\mathbf{p}}+2\mu}{E_{\mathbf{k}/ \mathbf{p}}}\biggr{)}R_{k/p}^{*}(t)e^{-i\omega t/2}, \tag{3}\] where the complex amplitudes \(R_{k/p}(t)\) obey the complex Ginzburg-Landau equation (for details, see the supplement [51]): \[i\frac{\mathrm{d}}{\mathrm{d}t}R_{k}(t)=\Delta R_{k}(t)-i\alpha R _{k}^{*}(t)+\lambda\Bigl{(}|R_{k}(t)|^{2}R_{k}(t)\] \[\quad+c_{1}(\theta)|R_{p}(t)|^{2}R_{k}(t)+c_{2}(\theta)R_{p}(t)^ {2}R_{k}^{*}(t)\Bigr{)}, \tag{4}\] with detuning \(\Delta=\omega/2-E\), drive amplitude for the Bogoliubov mode \(\alpha=\mu A\epsilon/(2E)\), and nonlinearity \(\lambda=\mu(5\epsilon+3\mu)/E\). The same equation holds for \(R_{p}(t)\) after exchange of the \(\mathbf{k}\) and \(\mathbf{p}\) labels. The \(c_{2}(\theta)\) term would violate momentum conservation for traveling wave patterns in the Ginzburg-Landau equation [2], but in our case of standing waves (2), which are agnostic to the sign of \(\mathbf{p}\), \(c_{2}(\theta)\) arises as a new coupling. In order to focus on the angle of the realized pattern, we assume that the absolute values of the wavevectors \(\mathbf{k}\) and \(\mathbf{p}\) are equal, as determined by the resonance condition \(\Delta=0\) for \(n=1\), and set \(\epsilon_{\mathbf{k}}=\epsilon_{\mathbf{p}}=\epsilon\) and \(E_{\mathbf{k}}=E_{\mathbf{p}}=E\). 
The coupling coefficients \(c_{1}(\theta)\) and \(c_{2}(\theta)\) between modes in different directions are then given as functions of the angle \(\theta\in[0,\pi/2]\) between \(\mathbf{k}\) and \(\mathbf{p}\), \[c_{1}(\theta)=\frac{\mu}{5\epsilon+3\mu}\biggl{[}4\frac{\epsilon ^{2}-\mu^{2}}{\mu\epsilon}+\biggl{(}\frac{2\epsilon+\mu}{\epsilon}\frac{2 \epsilon+\mu}{\epsilon_{+}/2+\mu}\] \[-\frac{(2\epsilon-\mu)(\epsilon+2\mu)+(2\epsilon^{2}+\mu^{2}) \epsilon_{+}/(2\epsilon)}{E^{2}-E_{+}^{2}/4}+(\epsilon_{+}\to\epsilon_{-}) \biggr{)}\biggr{]}, \tag{5a}\] \[c_{2}(\theta)=\frac{\mu}{5\epsilon+3\mu}\biggl{[}-2\frac{\epsilon ^{2}+3\mu\epsilon+\mu^{2}}{\mu\epsilon}\] \[+\frac{2\epsilon+\mu}{\epsilon}\biggl{(}\frac{2\epsilon+\mu}{ \epsilon_{+}/2+\mu}+(\epsilon_{+}\to\epsilon_{-})\biggr{)}\biggr{]}, \tag{5b}\] where we introduced \(E_{\pm}=\sqrt{\epsilon_{\pm}(\epsilon_{\pm}+2\mu)}\) with \(\epsilon_{+}=\epsilon_{\mathbf{k}+\mathbf{p}}=4\epsilon\cos^{2}\frac{\theta}{2}\) and \(\epsilon_{-}=\epsilon_{\mathbf{k}-\mathbf{p}}=4\epsilon\sin^{2}\frac{\theta}{2}\)[52]. The coefficient \(c_{1}(\theta)\) diverges at the singular angle satisfying \(2E=E_{+}\), but it is an artifact of the ansatz (2) considering only two modes. We present the complete theory without divergence later and find that no other (e.g., triangular) patterns appear at the singular angle. But first, we focus on angles away from the singular angle where this model provides analytical and quantitatively reliable results. _The fixed points and their stability analysis._--The time-dependent solutions of the evolution (amplitude) equation trace out trajectories in the four-dimensional space of the two complex amplitudes. In the following, we analyze their fixed points and stability. We focus only on the excited modes satisfying the resonance condition at zero detuning \(\Delta=0\), and introduce dissipation \(\Gamma>0\) to capture the suppression from interaction with other modes besides the \(\mathbf{k}\)- and \(\mathbf{p}\)-modes: \[i\frac{\mathrm{d}}{\mathrm{d}t}R_{k}(t)=-i\Gamma R_{k}(t)-i\alpha R _{k}^{*}(t)+\lambda\Big{(}|R_{k}(t)|^{2}R_{k}(t)\] \[\quad+c_{1}(\theta)|R_{p}(t)|^{2}R_{k}(t)+c_{2}(\theta)R_{p}(t)^ {2}R_{k}^{*}(t)\Big{)}. \tag{6}\] Setting \(\mathrm{d}R_{k}(t)/\mathrm{d}t=0\) in Eq. (6) and similarly for \(R_{p}(t)\), we find four possible fixed-point values of \(R_{k}\) and \(R_{p}\): \[\mathrm{(U)} R_{k}=R_{p}=0, \tag{7a}\] \[\mathrm{(S_{k})} R_{k}=\bar{R}e^{i\bar{\eta}},\quad R_{p}=0,\] (7b) \[\mathrm{(S_{p})} R_{k}=0,\quad R_{p}=\bar{R}e^{i\bar{\eta}},\] (7c) \[\mathrm{(G)} R_{k}=R_{p}=\frac{\bar{R}e^{i\bar{\eta}}}{\sqrt{1+c_{1}(\theta)+c_ {2}(\theta)}}, \tag{7d}\] with \(\bar{R}^{2}=\sqrt{\alpha^{2}-\Gamma^{2}}/\lambda\) and \(\exp(i\bar{\eta})=(\sqrt{\alpha-\Gamma}+i\sqrt{\alpha+\Gamma})/\sqrt{2\alpha}\). The fixed points correspond to the following density patterns: (U) a uniform pattern, (S\({}_{k}\)) and (S\({}_{p}\)) stripe patterns for each direction, and (G) a grid pattern (see Fig. 1). We first investigate the stability of the uniform fixed point (U). For small \(R_{k}(t)\) and \(R_{p}(t)\) around (U), only the linear terms of Eq. (6) remain, and the \(\mathbf{k}\)- and \(\mathbf{p}\)-directions become independent. By dividing the linearized equation into its real and imaginary parts, we directly find the eigenvalues of the Jacobian (scaling dimensions) \(-\Gamma-\alpha\) and \(-\Gamma+\alpha\). Since both \(\alpha\) and \(\Gamma\) are positive, the fixed point (U) is unstable for \(\alpha>\Gamma\). 
This corresponds to the Faraday instability, in which the uniform solution becomes unstable when the drive is stronger than the dissipation. We next study the stability of the grid fixed point (G). The four eigenvalues of the amplitude equation linearized around (G) are found to be \[\Lambda_{1}^{\pm} =-\Gamma\pm i\sqrt{4\alpha^{2}-5\Gamma^{2}}, \tag{8a}\] \[\Lambda_{2}^{\pm} =-\Gamma\pm\frac{\sqrt{4(\alpha^{2}-\Gamma^{2})D(\theta)+\Gamma^{2}(1+c_{1}+c_{2})^{2}}}{1+c_{1}+c_{2}} \tag{8b}\] with \[D(\theta)\equiv-1+c_{1}(\theta)^{2}+2c_{2}(\theta)-c_{2}(\theta)^{2}. \tag{9}\] While the real part of the first eigenvalue \(\Lambda_{1}\) is negative for \(\alpha>\Gamma\), the second eigenvalue \(\Lambda_{2}\) has a negative real part only when \[D(\theta)<0. \tag{10}\] The inequality (10) thus provides the condition for the grid pattern to be stable, regardless of the magnitude of the dissipation \(\Gamma\). Note that the stability analysis around the fixed points (S\({}_{k}\)) and (S\({}_{p}\)) leads to the same inequality (10) as the condition for the stripe patterns to be _unstable_. As seen in Fig. 2, the grid pattern is stable around an angle of \(\theta=\pi/2\), which is consistent with the experimental results [47].

Figure 2: Stability criterion for the grid pattern: \(D(\theta)\) in Eq. (9) as a function of \(\theta=\angle(\mathbf{k},\mathbf{p})\in[0,\pi/2]\) with fixed \(A=0.6\) for different values of \(\omega/\mu\). The grid pattern for angles \(\theta\approx\pi/2\) is stable where \(D(\theta)<0\).

In the absence of dissipation, the eigenvalues always appear in positive and negative pairs, such as Eq. (8) with \(\Gamma=0\), because the amplitude equation without dissipation enjoys the time-reversal symmetry inherited from the GP equation. The behavior of the solution in the four-dimensional space near the fixed points can be understood separately for each two-dimensional subspace corresponding to each pair of positive and negative eigenvalues. Without dissipation, the pair of eigenvalues with a real part makes the fixed point in the corresponding two-dimensional subspace a saddle point, while the pure imaginary pair makes the fixed point a center. A small dissipation \(\Gamma<\alpha\) keeps the saddle fixed point as a saddle while turning the center into a stable focus (in-spiral). Let us investigate the global behavior of the solutions of the amplitude equation beyond the local behavior around the fixed points. When we introduce the real and imaginary parts of the phase-rotated amplitudes as \(\rho_{k/p}=\mathrm{Re}[R_{k/p}e^{-i\bar{\eta}}]\) and \(\nu_{k/p}=\mathrm{Im}[R_{k/p}e^{-i\bar{\eta}}]\), all four fixed points lie in the two-dimensional subspace spanned by \(\rho_{k}\) and \(\rho_{p}\) with \(\nu_{k}=\nu_{p}=0\). The four-dimensional flow trajectories still depart from an unstable fixed point (saddle) and approach an attractive one (in-spiral). This global behavior can be visualized by utilizing the square eigenvalue \(\Lambda^{2}\), which is positive (repulsive) for real \(\Lambda\) at the saddle, while it is negative (attractive) for imaginary \(\Lambda\) at the in-spiral. We can efficiently obtain the square eigenvalues via the second-order differential equation derived
from the amplitude equation (6), which is given by \[\frac{\mathrm{d}^{2}}{\mathrm{d}t^{2}}\rho_{k}(t)\bigg{|}_{\nu_{k}=\nu_{p}=0}=\lambda^{2}\bigg{[}\bar{R}^{2}+\rho_{k}(t)^{2}+(c_{1}-c_{2})\rho_{p}(t)^{2}\bigg{]}\] \[\times\bigg{[}\bar{R}^{2}-\rho_{k}(t)^{2}-(c_{1}+c_{2})\rho_{p}(t)^{2}\bigg{]}\rho_{k}(t) \tag{11}\] \[+2\lambda^{2}c_{2}\bigg{[}\bar{R}^{2}-\rho_{p}(t)^{2}-(c_{1}+c_{2})\rho_{k}(t)^{2}\bigg{]}\rho_{p}(t)^{2}\rho_{k}(t)\] and likewise for \(\rho_{p}(t)\) after exchanging the \(\rho_{k}(t)\) and \(\rho_{p}(t)\) variables. The force field described by the right-hand side of Eq. (11) captures the global behavior of the solution of the original amplitude equation, although it does not correspond to the trajectories of the solutions themselves. This global behavior is shown in Fig. 1, and it changes drastically depending on the angle between \(\mathbf{k}\) and \(\mathbf{p}\)[53].

_The amplitude equation with no divergence._--Finally, we discuss the divergence of the coefficient \(c_{1}(\theta)\) at the angles satisfying \(2E=E_{+}\). At this angle, two Bogoliubov modes with wavevectors \(\mathbf{k}\) and \(\mathbf{p}\) can resonantly scatter into a Bogoliubov mode with wavevector \(\mathbf{k}+\mathbf{p}\) without violating energy conservation, which enhances the contribution of this collision process. Therefore, the amplitude of the wavevector \(\mathbf{k}+\mathbf{p}\), which grows proportionally to both the amplitudes \(R_{k}\) and \(R_{p}\), cannot be neglected, and its omission in the two-mode ansatz (3) causes the divergence of the coefficient \(c_{1}(\theta)\). By additionally including the \(\mathbf{k}+\mathbf{p}\) mode described by the complex amplitude \(R_{+}(t)\) in the ansatz (3), we can derive the coupled amplitude equations for the three modes \(R_{k/p}(t)\) and \(R_{+}(t)\) as (for details, see [51]) \[i\frac{\mathrm{d}}{\mathrm{d}t}R_{k}(t)=-i\Gamma R_{k}(t)-i\alpha R_{k}^{*}(t)+\beta(\theta)R_{+}(t)R_{p}^{*}(t)\] \[+\lambda\Big{(}|R_{k}(t)|^{2}R_{k}(t)+\tilde{c}_{1}(\theta)|R_{p}(t)|^{2}R_{k}(t)\] \[+c_{2}(\theta)R_{p}(t)^{2}R_{k}^{*}(t)\Big{)}, \tag{12a}\] \[i\frac{\mathrm{d}}{\mathrm{d}t}R_{+}(t)=-i\Gamma_{+}R_{+}(t)+\Delta_{+}(\theta)R_{+}(t)\] \[+\beta_{+}(\theta)R_{k}(t)R_{p}(t)+\lambda_{+}(\theta)|R_{+}(t)|^{2}R_{+}(t), \tag{12b}\] where \(\Gamma\) and \(\Gamma_{+}\) are dissipation coefficients. The parameters are given by the detuning \(\Delta_{+}(\theta)=2E-E_{+}\) and the nonlinearities \(\lambda_{+}(\theta)=\mu[(\epsilon-\mu)/E+(\epsilon_{+}+2\mu)/E_{+}]\), \(\beta_{+}(\theta)=\mu[(\epsilon+2\mu)/E+\epsilon_{+}(\epsilon-\mu)/(\epsilon E_{+})]\), and \[\tilde{c}_{1}(\theta)=c_{1}(\theta)+\frac{\mu}{5\epsilon+3\mu}\frac{(2\epsilon-\mu)(\epsilon+2\mu)+(2\epsilon^{2}+\mu^{2})\epsilon_{+}/(2\epsilon)}{E^{2}-E_{+}^{2}/4}. \tag{13}\] We find that the coefficient \(\tilde{c}_{1}(\theta)\) is now regular and the divergence of the coefficient \(c_{1}(\theta)\) at the singular angle satisfying \(2E=E_{+}\) is removed. We note that the \(\mathbf{k}\) and \(\mathbf{p}\) modes now interact with the \(\mathbf{k}+\mathbf{p}\) mode via quadratic terms in the amplitude equation (three-mode scattering). We first analyze the stability of the stripe-pattern fixed point, which can be found analytically, and later confirm this result by numerical analysis of the grid-pattern fixed point. Linearizing the amplitude equation (12) around the stripe-pattern fixed point given by \(R_{+}=0\) in addition to (S\({}_{k}\)) in Eq.
(7b), we obtain six eigenvalues; two of them are given by \(-\Gamma\pm i\sqrt{4\alpha^{2}-5\Gamma^{2}}\), which must have negative real parts for \(\alpha>\Gamma\), and the real parts of the other four are plotted in Fig. 3. In the case of Fig. 3, i.e., when the dissipation coefficients are \(\Gamma=0.5\alpha\) and \(\Gamma_{+}=\alpha\), the stability (instability) of the stripe pattern corresponds to the instability (stability) of the grid pattern, as confirmed numerically [54]. Thus, we are able to identify the angular region where the largest eigenvalue in Fig. 3 has a positive real part (unstable stripes) as the region where the grid pattern is stable. Also in this model, the stable region in Fig. 3 is insensitive to changes in the magnitude of the dissipation, and this insensitivity is consistent with the dissipation-independent inequality (10) in the previous two-mode model. Figure 3 agrees closely with Fig. 2 in that the grid pattern is stable around \(\theta=\pi/2\); moreover, it describes the stability for all angles reliably, including the angle satisfying \(2E=E_{+}\).

Figure 3: Real parts of the four eigenvalues of the amplitude equation (12) linearized around the stripe-pattern fixed point for \(\omega/\mu=2\) and \(A=0.6\) (each line is doubly degenerate for small \(\theta\)). The two \(\theta\)-independent eigenvalues are excluded. The dissipation coefficients are set as \(\Gamma=0.5\alpha\) and \(\Gamma_{+}=\alpha\), and the dashed and dotted lines represent \(-\Gamma\) and \(-\Gamma_{+}\), respectively. The stripe pattern at angle \(\theta\) is unstable whenever \(\mathrm{Re}\,\Lambda\) is positive. Notably, there is no singularity when \(2E=E_{+}\), which occurs at \(\theta\approx 0.34\pi\) for our parameters in this figure.

_Discussion and outlook._--In this Letter, we have derived the amplitude equation (4) (and (12)) for pattern formation in two-dimensional BECs caused by the Faraday instability. The amplitude equation can be considered as a complex Ginzburg-Landau equation for pattern formation with the amplitudes in the two directions as order parameters, so that it provides a simple description of the system dynamics. Our method to derive the amplitude equation is equivalent to the renormalization group theory for asymptotic analysis [55; 56; 57]. Accordingly, the amplitude equation describes the order parameter dynamics as an effective model for the only two relevant modes that remain at long times, while incorporating in a renormalization group sense the multitude of irrelevant modes of the full GP solution. The dissipation that we introduced in Eq. (6) appears effectively as a result of the renormalization onto the model for only the two relevant modes. While the microscopic derivation of dissipation from interactions and thermal fluctuations is still an open question, one may also attempt to extract its value from experiments. Using the obtained amplitude equation, we have analyzed the stability between the uniform, stripe-pattern, and grid-pattern solutions. For \(\alpha>\Gamma\), where the drive amplitude is stronger than the dissipation, the uniform solution becomes unstable, resulting in an inhomogeneous density pattern. Figures 2 and 3 show that the grid pattern becomes stable around the angle \(\pi/2\) between the two excitation directions. The global stability of the amplitude is shown in Fig. 1. Our results explain the experimental data presented in our companion paper [47].
Furthermore, the amplitude equation has been experimentally validated under various initial conditions and has been confirmed to give a good description. Finally, we briefly discuss the scattering between the three modes satisfying \(\mathbf{k}_{1}\pm\mathbf{k}_{2}\pm\mathbf{k}_{3}=\mathbf{0}\), which usually leads to a triangular pattern [2]. In the Faraday pattern formation, the leading three-mode scattering occurs at the frequency \(\omega/2\pm\omega/2\pm\omega/2\) because the excited modes have the energy \(\omega/2\) from the first resonance condition \(n=1\). Because of \(\omega/2\pm\omega/2\pm\omega/2\neq 0\), the three-mode scattering is a fast-rotating contribution in the rotation wave basis and can be neglected. If the third mode is from the second resonance condition \(n=2\) with frequency \(\omega\) instead of \(\omega/2\), the three-mode scattering becomes relevant because it has a slow-rotating contribution with \(\omega/2+\omega/2-\omega=0\). In fact, the amplitude equation (12) includes three-mode scattering. However, we found that the stability of the patterns does not change significantly between Eq. (4) and Eq. (12). Indeed, the second resonant mode is less amplified by the Faraday instability than the first one in the perturbative regime with a small drive amplitude \(|A|<1\). The authors thank F. Ziebert for useful discussions. This work is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under project-ID 273811115 (SFB1225 ISOQUANT) and under Germany's Excellence Strategy EXC2181/1-390900948 (the Heidelberg STRUCTURES Excellence Cluster), and QUANTERA DYNAMITE PCI2022-132919. This project was funded within the QuantERA II Programme that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 101017733. N.L. acknowledges support by the Studienstiftung des Deutschen Volkes.
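As a numerical companion to the stability analysis, the sketch below integrates the dissipative two-mode amplitude equation (6) with a fixed-step RK4 scheme and compares the late-time amplitudes with the grid fixed point (7d). The values of \(\alpha\), \(\Gamma\), \(\lambda\) and, in particular, the couplings \(c_{1}\), \(c_{2}\) (which should be evaluated from Eqs. (5a)-(5b) at a chosen angle) are illustrative placeholders satisfying the stability criterion \(D(\theta)<0\), not values fitted to the experiment.

```python
# Minimal sketch: integrate the dissipative amplitude equations (6) and compare
# the steady state with the grid fixed point (7d). Parameters are illustrative;
# c1, c2 stand in for c1(theta), c2(theta) from Eqs. (5a)-(5b).
import numpy as np

alpha, Gamma, lam = 1.0, 0.1, 1.0  # drive, dissipation, nonlinearity
c1, c2 = 0.3, 0.2                  # chosen so that D = -1 + c1^2 + 2*c2 - c2^2 < 0

def rhs(R):
    """Eq. (6) divided by i: dR/dt for R = [R_k, R_p] (complex)."""
    Rk, Rp = R
    dRk = -Gamma*Rk - alpha*np.conj(Rk) - 1j*lam*(
        abs(Rk)**2*Rk + c1*abs(Rp)**2*Rk + c2*Rp**2*np.conj(Rk))
    dRp = -Gamma*Rp - alpha*np.conj(Rp) - 1j*lam*(
        abs(Rp)**2*Rp + c1*abs(Rk)**2*Rp + c2*Rk**2*np.conj(Rp))
    return np.array([dRk, dRp])

# Fixed-step RK4 starting from a small random seed amplitude.
rng = np.random.default_rng(1)
R = 1e-3*(rng.standard_normal(2) + 1j*rng.standard_normal(2))
dt = 1e-3
for _ in range(200_000):
    k1 = rhs(R); k2 = rhs(R + 0.5*dt*k1)
    k3 = rhs(R + 0.5*dt*k2); k4 = rhs(R + dt*k3)
    R = R + dt*(k1 + 2*k2 + 2*k3 + k4)/6

# Grid fixed point (7d): |R_k| = |R_p| = Rbar / sqrt(1 + c1 + c2).
Rbar = np.sqrt(np.sqrt(alpha**2 - Gamma**2)/lam)
print("numerical |R_k|, |R_p| :", abs(R[0]), abs(R[1]))
print("predicted grid amplitude:", Rbar/np.sqrt(1 + c1 + c2))
```

For couplings with \(D(\theta)<0\) both amplitudes should lock to the grid value, whereas choosing \(c_{1}\), \(c_{2}\) with \(D(\theta)>0\) should instead drive the system toward a stripe fixed point, mirroring the dichotomy of Fig. 1.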
2302.10299
Massive star feedback in the Magellanic Clouds and the tidal Bridge
Massive stars have far-reaching feedback effects that alter the surrounding environment on local, global, and cosmic scales. Spectral analyses of massive stars with adequate stellar-atmosphere models are important to study massive star feedback in detail. We discuss the most recent UV and optical studies of massive metal-poor stars, including those with metallicities ranging from half to one twentieth of solar, connected with large-scale ISM structures in the Magellanic Clouds and the tidal Magellanic Bridge. We present ionizing fluxes from massive stars with low metallicity along with mechanical energy, and we further compare these to the observed energetics in the ISM. The results give hints on the leakage of hot gas and ionizing photons in the Magellanic Clouds. The paper outlines feedback from individual massive stars to population-level collective feedback, the significance of various feedback mechanisms (radiation, wind, supernova), and the influence of the physical conditions of the ISM.
Varsha Ramachandran
2022-11-17T14:50:53Z
http://arxiv.org/abs/2302.10299v1
# Massive star feedback in the Magellanic Clouds and the tidal Bridge

Varsha Ramachandran\({}^{1}\)

_Astronomy in Focus, Focus Meeting 4, 2022, pp. 119-126, Jose Espinosa, ed._

###### Abstract

Massive stars have far-reaching feedback effects that alter the surrounding environment on local, global, and cosmic scales. Spectral analyses of massive stars with adequate stellar-atmosphere models are important to study massive star feedback in detail. We discuss the most recent UV and optical studies of massive metal-poor stars, including those with metallicities ranging from half to one twentieth of solar, connected with large-scale ISM structures in the Magellanic Clouds and the tidal Magellanic Bridge. We present ionizing fluxes from massive stars with low metallicity along with mechanical energy, and we further compare these to the observed energetics in the ISM. The results give hints on the leakage of hot gas and ionizing photons in the Magellanic Clouds. The paper outlines feedback from individual massive stars to population-level collective feedback, the significance of various feedback mechanisms (radiation, wind, supernova), and the influence of the physical conditions of the ISM.

Keywords: massive stars, stellar feedback, low metallicity

## 1 Introduction

Stellar feedback is one of the largest uncertainties in star and galaxy formation. Massive stars are of great interest because of their far-reaching feedback effects that alter the surrounding environment on local, global, and cosmic scales. The combined feedback in massive young stellar clusters leads to the formation of superbubbles that can drive galactic winds and outflows, which have been frequently observed in local as well as distant galaxies. The Magellanic Clouds and the Bridge offer an outstanding opportunity to investigate low-metallicity massive stars and study feedback under conditions typical for the vast majority of dwarf galaxies.

## 2 Massive star feedback in the Magellanic Clouds

We carried out spectroscopic observations of \(\sim\) 500 OB stars in the Magellanic Clouds using VLT-FLAMES. When available, the optical spectroscopy was complemented by UV spectra from the HST, IUE, and FUSE archives. The two representative young stellar populations that have been studied are associated with the superbubble N 206 in the Large Magellanic Cloud (LMC) and with the supergiant shell SMC-SGS 1 in the Wing of the Small Magellanic Cloud (SMC), respectively. We performed spectroscopic analyses of the massive stars using the non-LTE Potsdam Wolf-Rayet (PoWR) model atmosphere code. We estimated the stellar, wind, and feedback parameters of the individual massive stars. The total energy feedback from OB stars, WR stars and supernovae in the N 206 and SGS complexes are compared in Table 1. Since the winds of OB stars in the LMC are much stronger than in the SMC, wind feedback in N 206 is dominated by massive Of stars (Ramachandran et al. 2018a,b). The stellar wind feedback from the two WR stars in N 206 is comparable to that from all young OB stars together. The situation is completely different in the SMC supergiant shell, where the ionizing and mechanical luminosity is dominated by one WO star (Ramachandran et al. 2019). The observed H\(\alpha\) emission reflects almost half of the ionizing flux provided by the massive stars - the other half of the LyC photons obviously escapes. The X-ray superbubble in the LMC is equally powered by stellar winds and supernovae.
In contrast, the overall energy feedback of the supergiant shell in the SMC is dominated by supernovae, implying that hundreds of OB stars do not have much impact on the feedback before their final explosion. Thus, at low metallicities, feedback is chiefly governed by supernova explosions and only a few very massive stars. The comparison of the total stellar input with X-ray, radio, and H\(\alpha\) observations shows that only a fraction of the input energy accumulated over time is currently still present in these regions. The rest might have escaped or leaked out of the complex. Our result has significant importance for feedback in low-metallicity galaxies, where we can now neglect the contribution from OB stars. However, at LMC-like metallicities, young OB stars make a significant contribution in terms of ionizing and mechanical feedback. In that case, neglecting their contribution and considering only supernovae is not justified. In conclusion, the metallicity decides whether stellar winds or supernovae become the key agents of feedback.

## 3 Metallicity and ionization of the Bridge

The Magellanic Bridge, stretching between the SMC and the LMC, is the nearest tidally stripped intergalactic environment. The Bridge has a significantly lower average metallicity than the SMC. For the first time, we discovered three massive O stars in the Bridge using VLT-FLAMES spectra (Ramachandran et al. 2021). We analyzed the spectra of each star using the PoWR models, providing stellar parameters, ionizing photon fluxes, and surface abundances. The ages of the newly discovered O stars suggest that star formation in the Bridge is ongoing. The multi-epoch spectra indicate that all three O stars are binaries. Despite their spatial proximity to one another, these O stars are chemically distinct. MBO1 is a fast-rotating giant with nearly LMC-like abundances. The other two are main-sequence stars that rotate extremely slowly and are strongly metal depleted (Fig. 1). Among these, MBO2 is the most nitrogen-poor O star known to date. Taking into account the previous analyses of B stars in the Bridge, we interpret the various metal abundances as the signature of a chemically inhomogeneous ISM, suggesting that the Bridge gas might have been accreted during multiple episodes of tidal interaction between the Clouds. Attributing the lowest derived metal content to the primordial gas, the time of the initial formation of the Bridge may date back several billion years.

Using the Gaia and Galex color-magnitude diagrams, we roughly estimate the total number of O stars in the Bridge and their total ionizing radiation. Comparing this with the energetics of the diffuse ISM, we find that the contribution of the hot stars to the ionizing radiation field in the Bridge is less than 10% and conclude that the main sources of ionizing photons are leaks from the LMC and SMC. This provides a lower limit for the fraction of ionizing radiation that escapes from these two dwarf galaxies.
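To make the H\(\alpha\)-based leakage argument above concrete, the following minimal sketch converts an observed H\(\alpha\) luminosity into the ionizing photon rate it accounts for, using the standard Case-B recombination factor. The two input values are placeholders chosen only for illustration; they are not measurements from this work.

```python
# Minimal sketch (Python): LyC escape fraction implied by observed H-alpha.
# Both input values are illustrative placeholders, not data from this paper.
Q_STARS = 1.0e51    # total ionizing photon rate of the massive stars [1/s] (placeholder)
L_HALPHA = 7.0e38   # observed H-alpha luminosity of the complex [erg/s] (placeholder)

# Case-B recombination at ~10^4 K yields ~1.37e-12 erg of H-alpha per ionizing
# photon absorbed, i.e. Q(absorbed) ~ 7.3e11 * L(H-alpha) (Osterbrock & Ferland).
q_absorbed = 7.3e11 * L_HALPHA
f_escape = 1.0 - q_absorbed / Q_STARS
print(f"LyC escape fraction implied by H-alpha: {f_escape:.2f}")  # ~0.49 here
```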
2309.15093
Point-symmetry in SNR G1.9+0.3: A supernova that destroyed its planetary nebula progenitor
I analyze a new X-ray image of the youngest supernova remnant (SNR) in the Galaxy, which is the type Ia SNR G1.9+0.3, and reveal a very clear point-symmetrical structure. Since explosion models of type Ia supernovae (SNe Ia) do not form such morphologies, the point-symmetrical morphology must come from the circumstellar material (CSM) into which the ejecta expands. The large-scale point-symmetry that I identify and the known substantial deceleration of the ejecta of SNR G1.9+0.3 suggest a relatively massive CSM of >1 Msun. I argue that the most likely explanation is the explosion of this SN Ia into a planetary nebula (PN). The scenario that predicts a large fraction of SN Ia inside PNe (SNIPs) is the core degenerate scenario. Other SN Ia scenarios might lead to only a very small fraction of SNIPs or not at all.
Noam Soker
2023-09-26T17:40:21Z
http://arxiv.org/abs/2309.15093v3
# Point-symmetry in SNR G1.9+0.3: A supernova that destroyed its planetary nebula progenitor

###### Abstract

I analyze a new X-ray image of the youngest supernova remnant (SNR) in the Galaxy, which is the type Ia SNR G1.9+0.3, and reveal a very clear point-symmetrical structure. Since explosion models of type Ia supernovae (SNe Ia) do not form such morphologies, the point-symmetrical morphology must come from the circumstellar material (CSM) into which the ejecta expands. The large-scale point-symmetry that I identify and the known substantial deceleration of the ejecta of SNR G1.9+0.3 suggest a relatively massive CSM of \(\gtrsim 1M_{\odot}\). I argue that the most likely explanation is the explosion of this SN Ia into a planetary nebula (PN). The scenario that predicts a large fraction of SN Ia inside PNe (SNIPs) is the core degenerate scenario. Other SN Ia scenarios might lead to only a very small fraction of SNIPs or not at all.

Keywords: (stars:) supernovae: general - ISM: supernova remnants - (stars:) binaries: close - planetary nebulae - stars: jets

## 1 Introduction

A point-symmetric morphology is composed of pairs of twin structural components on opposite sides of the center of the nebula. Such structures are clearly observed in many tens of planetary nebulae (PNe), as catalogues of PNe (and proto-PNe) reveal (e.g., Balick 1987; Chu, Jacoby, & Arendt 1987; Schwarz, Corradi, & Melnick 1992; Corradi & Schwarz 1995; Manchado et al. 1996; Sahai & Trauger 1998; Sahai, Morris, & Villar 2011; Parker, Bojicic, & Frew 2016; Parker 2022). Many PNe are shaped by jets (e.g., Morris 1987; Soker 1990; Sahai & Trauger 1998), including point-symmetric morphologies (e.g., Sahai 2000; Sahai et al. 2007; Sahai, Morris, & Villar 2011; Boffin et al. 2012). Many PNe and proto-PNe (e.g., Sahai et al. 2000; Sahai, Morris, & Villar 2011), like the post-asymptotic giant branch (AGB) star IRAS 16594-4656 (Hrivnak, Kwok, & Su 2001), show that the point-symmetry is not always perfect. Namely, they might have some deviations from perfect point symmetry. In particular, the two opposite clumps/lobes/arcs/filaments of a pair might have different structures, differ in brightness, not be exactly at \(180^{\circ}\) to each other with respect to the center (bent morphology), and have different distances from the center. In addition, unpaired clumps might exist in the nebula.

PNe leave white dwarf (WD) remnants, in many cases a WD in a binary system. If the WD remnant explodes as a type Ia supernova (SN Ia) before the PN is dispersed into the interstellar medium (ISM), the PN might have an imprint on the morphology of the SN remnant (SNR). An SN inside a PN is termed a SNIP (e.g., Tsebrenko & Soker 2015a). Not all theoretical SN Ia and peculiar SN Ia scenarios allow for the formation of point-symmetric SNRs (for some recent reviews of the scenarios, but without reference to point-symmetry, see, e.g., Hoeflich 2017; Livio & Mazzali 2018; Soker 2018, 2019; Wang 2018; Jha et al. 2019; Ruiz-Lapuente 2019; Ruiter 2020; Liu, Ropke, & Han 2023b).

The formation of a PN, which typically takes tens to thousands of years to build the dense shell, is much longer than the dynamical time of the AGB progenitor, about one year. Also, the launching phase of the jets by a companion to the AGB progenitor is much longer than the dynamical time of the accretion disk that launches the jets. This allows for disk precession that launches opposite pairs of jets in varying directions.
In SN Ia scenarios that involve a disk with a bipolar explosion morphology (e.g., Perets et al. 2019; Zenati et al. 2023), the disk explosion time is not much longer, and even shorter, than the dynamical time of the disk. No disk precession is possible during the explosion. If a SNR Ia has a point symmetry, it seems that it results from point-symmetric circumstellar material (CSM).

Peculiar SNe Ia might also have peculiar morphologies, such as the unordered morphology of the peculiar SNR 3C 397 (e.g., Ohshiro et al., 2021) that might result from deflagration (Mehta et al., 2023). However, these are not expected to form point-symmetric morphologies. ISM magnetic fields might shape only one pair of twin structural features (e.g., Wu & Zhang, 2019) and might play other roles in SNRs (e.g., Xiao et al., 2022). Velazquez et al. (2023) simulate non-spherical pre-explosion mass loss into a magnetized ISM. They find that when the pre-explosion wind is axisymmetric (rather than spherical) and its symmetry axis is inclined to the ISM magnetic field, then the ears in the SNR might be bent. However, point-symmetric clumps/filaments cannot be formed by this mechanism. Surrounding density inhomogeneities might also shape SNRs (e.g., Lu et al., 2021). However, these ISM effects cannot form point-symmetric structures. Zhang et al. (2023) simulated the shaping of SNR G1.9+0.3 with magnetic fields and ISM density gradients. They could form a pair of ears, but not a point-symmetry (which was not known then). Griffeth Stone et al. (2021) simulated SNR G1.9+0.3 as a highly non-spherical explosion into a uniform medium. This cannot form a point-symmetric structure. In a recent study, Villagran et al. (2023) conduct three-dimensional magneto-hydrodynamic simulations to reproduce the morphology and emission of SNR G1.9+0.3 with a non-spherical pre-explosion wind blown into a magnetized ISM. They also obtained an axisymmetrical morphology, but not a point-symmetry. Instabilities that develop in the ejecta-ISM interaction are not expected to form point-symmetric morphologies either. Furthermore, Mandal et al. (2023) demonstrate with hydrodynamical models that the instabilities that develop as SNRs interact with an ambient medium have a characteristic peak in their power spectra that is relatively large, \(>10\). This cannot account for a point-symmetric structure with only a few prominent pairs of opposite morphological features.

In this study, I identify a point-symmetric morphology in the newly released X-ray image of SNR G1.9+0.3 (Enokiya et al., 2023), a young SNR Ia that exploded around 1890-1900 (e.g., Carlton et al., 2011; Chakraborti, Childs, & Soderberg, 2016; Borkowski et al., 2017; Pavlovic, 2017). I analyze the image in section 3 and conclude that the most likely explanation is that this SNR was shaped by an SN Ia inside a PN, i.e., a SNIP. Tsebrenko & Soker (2015) already suggested that SNR G1.9+0.3 is a SNIP and simulated its shaping. However, they did not refer to point-symmetry. The present analysis puts the SNIP suggestion on very solid ground. To facilitate the analysis and discussion in section 4, I start by considering the ability of different SN Ia scenarios to account for point-symmetric morphologies (section 2).

## 2 Point symmetry in SN Ia scenarios

In Table 1 I list SN Ia scenarios (first row) together with some of their properties (second row).
The properties are the number of stars in the system at the time of explosion, \(N_{\rm exp}\); the number of surviving stars after the explosion, \(N_{\rm sur}\); the mass of the exploding white dwarf (WD), where \(M_{\rm Ch}\) stands for near Chandrasekhar mass; and the morphology of the ejecta (Ej), being spherical (S) or non-spherical (N). These properties refer to normal SNe Ia, where the WD that explodes does not leave a remnant. The first two rows of the table are from a much larger table in Soker (2019), which compares the scenarios with each other and with observations. Scenarios where there is only one star at the explosion, \(N_{\rm exp}=1\), are grouped into _lonely-WD scenarios_, and might account for most, or even all, normal SNe Ia (Braudo & Soker, 2023). Here I add to Table 1 a third row that indicates whether the scenario might lead to a point-symmetric SNR, and below I describe the scenarios only in their relation to a point-symmetric SNR.

The _core-degenerate_ (CD) scenario predicts that a large fraction of SNe Ia occurs inside PNe or PN remnants. These are termed SNIPs, for SNe Ia Inside PNe. A PN remnant is an old PN shell that at the time of the explosion is mostly neutral and hence does not shine as a PN. The reason that the CD scenario predicts many SNIPs is that the core and the WD merge during or at the end of the common envelope evolution (CEE; e.g., Kashi & Soker, 2011; Ilkov & Soker, 2013; Aznar-Siguan et al., 2015), and might explode within several hundreds of thousands of years, which is the merger to explosion delay (MED) time. In Soker (2022) I estimated that the fraction of SNIPs among all normal SNe Ia in the Milky Way and the Magellanic Clouds is \(f_{\rm SNIP}({\rm local})\simeq 70-80\%\), and that their total fraction, including dwarf and elliptical galaxies, is \(f_{\rm SNIP}({\rm total})\simeq 50\%\). I take two very recent studies of the CSM of SNe Ia, of Tycho's SNR (Kobashi et al., 2023) and of SN 2018evt (Wang et al., 2023), to support a SNIP scenario for these two SNe Ia. A point symmetry in a SNR Ia is a natural possibility of the CD scenario when the progenitor PN of a SNIP has a point-symmetry. For a recent study of SNIPs in relation to SNR properties see Court et al. (2023).

In the _double degenerate_ (DD) scenario (e.g., Webbink, 1984; Iben & Tutukov, 1984), without a MED time or with a MED time (e.g., Loren-Aguilar et al., 2009; van Kerkwijk et al., 2010; Pakmor et al., 2013; Levanon, Soker, & Garcia-Berro, 2015; Levanon & Soker, 2019; Neopane et al., 2022), there is a delay, \(t_{\rm GW}\), from the end of the CEE to the merger itself due to gravitational wave emission by the double WD system. There are several channels of this scenario (e.g., Pakmor et al., 2011; Liu et al., 2016; Ablimit, Maeda, & Li 2016; Yungelson & Kuranov 2017; Zenati et al. 2019; Perets et al. 2019), with some recent interest in the violent merger channel (e.g., Axen & Nugent 2023; Kwok et al. 2023; Maeda et al. 2023; Siebert et al. 2023a,b; Srivastav et al. 2023). In the DD scenario, the delay time from the end of the CEE to explosion is \(t_{\rm CEED}=t_{\rm GW}\). In the DD-MED scenario, the time from the end of the CEE to the explosion also includes the MED time, and therefore \(t_{\rm CEED}=t_{\rm GW}+t_{\rm MED}\) (see discussion in Soker 2022). A point-symmetric nebula can form if the explosion takes place before the PN material is dispersed into the ISM, i.e., \(t_{\rm CEED}\lesssim 10^{6}\ {\rm yr}\).
However, due to the generally long gravitational-wave merger time \(t_{\rm GW}\), this possibility is very rare.

In the different channels of the _double-detonation_ (DDet) scenario (e.g., Woosley & Weaver 1994; Livne & Arnett 1995; Papish et al. 2015; Shen et al. 2018a,b; Ablimit 2021; Zingale et al. 2023), the explosion of a CO WD is triggered by the thermonuclear detonation of a helium layer on the WD. This ignition takes place on a dynamical timescale and cannot lead to a point-symmetric morphology. Only if the explosion takes place within hundreds of thousands of years after the CEE of the progenitor binary system, i.e., \(t_{\rm CEED}\lesssim 10^{6}\ {\rm yr}\), might this scenario lead to a point-symmetric remnant, namely a SNIP. My estimate (Soker 2019), based in part on the non-detection of surviving companions in SNRs (e.g., Li et al. 2019; Shields et al. 2022, 2023), is that the DDet scenario accounts for peculiar SNe Ia (e.g., Liu et al. 2023; Padilla Gonzalez et al. 2023; Karthik Yadavalli et al. 2023), but only rarely for normal SNe Ia. Rarer still are normal SNe Ia through this channel that explode before the PN is dispersed.

The _single degenerate_ (SD) scenario (e.g., Whelan & Iben 1973; Han & Podsiadlowski 2004; Orio 2006; Wang et al. 2009; Meng & Podsiadlowski 2018; Cui et al. 2022) might in principle lead to a point-symmetric SNR if the CSM formed by the wind from a giant mass-donor has a point-symmetric morphology. This is basically an SN Ia inside a symbiotic nebula. Symbiotic progenitors of SNe Ia are very rare (e.g., Laversveiler & Goncalves 2023). There are two main differences between symbiotic progenitors and SNIPs. (1) In the case of the SD scenario, the expectation is for the presence of a red giant branch star or an AGB star in the SNR. (2) The CSM mass is much smaller than in a SNIP. The large deceleration of the ejecta of SNR G1.9+0.3 makes this scenario less likely (section 4).

The very rare (e.g., Toonen et al. 2018; Hallakoun & Maoz 2019; Hamers & Thompson 2019; Grishin & Perets 2022) _WD-WD collision_ (WWC) scenario, where two unbound WDs collide with each other (e.g., Raskin et al. 2009; Rosswog et al. 2009; Kushnir et al. 2013; Aznar-Siguan et al. 2014; Glanz, Perets, & Pakmor 2023), does not predict a point-symmetric SNR of the kind I study here. The collision of two equal-mass WDs can lead to a large-scale bipolar structure in the case of a head-on collision (e.g., Hawley, Athanassiadou, & Timmes 2012), or to a large-scale point-symmetric ejecta with a very large departure from a large-scale elliptical shape (e.g., Glanz, Perets, & Pakmor 2023). The demand for equal-mass WDs in a scenario that is extremely rare to start with, together with the large departures from an elliptical shape, makes this scenario unlikely to explain the point-symmetric morphology of SNR G1.9+0.3 that I study here.

The overall conclusion from this discussion is that the most likely explanation for a point-symmetric SNR Ia morphology is a SNIP. The scenario that statistically has the largest fraction of SNIPs is the CD scenario. I return to this point in section 4.

## 3 Point-symmetry in SNR G1.9+0.3

In their recent study, Enokiya et al.
(2023) combined 26 individual X-ray observations of SNR G1.9+0.3 from 2007 to 2015 in the energy range of 0.5 to \(7\ \mathrm{keV}\).

Table 1: SN Ia scenarios and their ability to form a point-symmetric SNR.

| **Scenario** [1] | Core Degenerate (CD) | Double Degenerate (DD) | Double Degenerate (DD-MED) | Double Detonation (DDet) | Single Degenerate (SD; SD-MED) | WD-WD collision (WWC) |
| --- | --- | --- | --- | --- | --- | --- |
| \([N_{\rm exp}, N_{\rm sur}, M, {\rm Ej}]\) [2] | \([1, 0, M_{\rm Ch}, {\rm S}]\) | \([2, 0, {\rm sub}\)-\(M_{\rm Ch}, {\rm N}]\) | \([1, 0, M_{\rm Ch}, {\rm S}]\) | \([2, 1, {\rm sub}\)-\(M_{\rm Ch}, {\rm N}]\) | \([2, 1, M_{\rm Ch}, {\rm S}]\) | \([2, 0, {\rm sub}\)-\(M_{\rm Ch}, {\rm N}]\) |
| **Point symmetry in the SNR** | Expected in some SNIPs with a point-symmetric PN | Very rare: SNIP | Very rare: SNIP | Extremely rare | Possible: a symbiotic nebula | Extremely rare; large-scale departures from an elliptical shape |

Notes: [1] Scenarios for SN Ia in alphabetical order. MED: merger to explosion delay time; it implies that the scenario has a delay time from merger or mass transfer to explosion. MED is an integral part of the CD scenario. [2] \(N_{\rm exp}\) is the number of stars in the system at the time of explosion; \(N_{\rm sur}\) is the number of surviving stars in normal SNe Ia: \(N_{\rm sur}=0\) if no companion survives the explosion, while \(N_{\rm sur}=1\) if a companion survives the explosion (in some peculiar SNe Ia the exploding WD is not destroyed and it also leaves a remnant); \(M_{\rm Ch}\) indicates a (near) Chandrasekhar-mass explosion, while sub-\(M_{\rm Ch}\) indicates a sub-Chandrasekhar-mass explosion; Ej stands for the morphology of the ejecta, where S and N indicate whether the scenario might lead to a spherical explosion or cannot, respectively.

They obtained a detailed X-ray image that reveals fine structures (previous X-ray studies include, e.g., Reynolds et al. 2008, 2009; Borkowski et al. 2010, 2013, 2014, 2017; Carlton et al. 2011; Zoglauer et al. 2015). In addition, they present contours of molecular emission, which they use to identify molecular clouds. In this study, I refer only to the X-ray morphology; I do not consider abundances or molecular clouds. Borkowski et al. (2017) present an X-ray image very similar to that of Enokiya et al. (2023). The new one allows a better analysis of the point-symmetry.

Borkowski et al. (2017) present the proper expansion velocities on the plane of the sky and find two strong properties. The first is that the arcs on the north and south, which are closer to the center, expand much more slowly than the ears. Following Tsebrenko & Soker (2015), I take the arcs to be part of the equatorial structure and the ears to be along the polar directions of the PN into which SNR G1.9+0.3 exploded. The second property that Borkowski et al. (2017) find is that many regions expand not exactly along radial directions. I attribute these properties of slowly expanding arcs and non-radial expansion directions to the interaction of the ejecta with a non-homogeneous PN shell (the CSM). For the CSM to influence the ejecta in this way, it should be massive, \(\gtrsim 1M_{\odot}\), almost ruling out the SD scenario (see section 2), where the CSM is due to an AGB wind. Borkowski et al.
(2014) find that the relative proper expansion rate (percentage per year) of the outer parts of the polar regions (which include the ears) is lower than that of the inner regions. This indicates substantial deceleration of the outer parts of the ejecta along and near the polar directions, again requiring a relatively massive CSM. Some parts of the nebula have expansion velocities that are about half of those of other parts, or even less. In a momentum-conserving interaction, decelerating the ejecta to half its initial velocity requires a CSM mass about equal to the mass of the ejecta. In an energy-conserving case, there is a need for a larger CSM mass. Overall, the CSM mass should be about equal to the decelerated ejecta mass or more. Since a large fraction of the \(\simeq 1.4M_{\odot}\) ejecta is decelerated, I estimate the CSM mass to be \(\gtrsim 1M_{\odot}\).

In Figure 1 I take an image from Enokiya et al. (2023), to which I added the marks of the two ears and double-headed arrows. I identify six pairs of clumps, marked with double-headed arrows DHA-a to DHA-f, and one tentative pair, DHA-\(\tau\), that together form a point-symmetric structure around the center. I analyze later the two opposite arcs that the two white double-headed arrows point at and reveal a bent point symmetry.

DHA-e and DHA-f define two opposite arcs at about the same distance from the center. This is the most symmetric point-symmetric component, because the two twin arcs (coloured green) are at about the same distance from the center and of about the same size. DHA-a points at a clump in the upper part of the image, and at a clump at the bottom that is at about the same distance from the center. DHA-b points to two clumps along the arrow direction in the upper part of the image, and at a faint green filament at the bottom. Along the direction of DHA-b, further away from the faint filament at the bottom, i.e., at the outer part of the SNR, there is the bright arc whose bright edge DHA-a approximately defines. DHA-c and DHA-d each point at two twin clumps, but those in the upper part of the image are at a larger distance from the center than the two clumps at the bottom. DHA-d points at a clump in the upper part at the same distance as the bright clump (yellow-red) on the bottom outer arc, as the red-dashed continuation lines show. In addition to these six pairs, there is a tentative pair marked by DHA-\(\tau\). It is tentative because the two opposite clumps are smaller and fainter than the others.

Overall, the point-symmetric structure that the red double-headed arrows define is very strong, although not perfect. Considering that the ejecta of this SNR is strongly decelerated, namely interacting with a CSM, it is expected that the point symmetry is not perfect. This is the situation also with tens of PNe (see catalogues listed in section 1). The asymmetrical interaction of the ejecta of SNR G1.9+0.3 with the CSM and the ISM is evident from the radio images of SNR G1.9+0.3, which present non-uniform brightness and large deviations from spherical symmetry (e.g., Green et al. 2008; Gomez & Rodriguez 2009; Borkowski et al. 2010; De Horta et al. 2014; Borkowski et al. 2017; Luken et al. 2020; Enokiya et al. 2023). As said, this interaction is also related to the non-radial velocities of many parts of this SNR that Borkowski et al. (2017) pointed at.
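To spell out the momentum-conserving estimate used above: ejecta of mass \(m_{\rm ej}\) moving at \(v_{0}\) that sweeps up a CSM of mass \(M_{\rm CSM}\) ends up at velocity \(v\) with

\[m_{\rm ej}v_{0}=\left(m_{\rm ej}+M_{\rm CSM}\right)v\quad\Longrightarrow\quad M_{\rm CSM}=m_{\rm ej}\left(\frac{v_{0}}{v}-1\right),\]

so \(v=v_{0}/2\) gives \(M_{\rm CSM}=m_{\rm ej}\); with \(m_{\rm ej}\simeq 1.4M_{\odot}\) largely decelerated, this is the origin of the \(\gtrsim 1M_{\odot}\) CSM estimate (an energy-conserving interaction would require even more CSM).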
I turn to consider the clumps that the white arrows point at in Figure 1. Motivated by the bent morphology of \(\approx 10\%\) of PNe (Soker & Hadar 2002), I consider the same for the two ears of SNR G1.9+0.3 and the bright arc at the base of each ear. In the bent morphology, the symmetry axis is bent through the center, i.e., the angle between the directions to the two opposite clumps/lobes/arcs/filaments is \(<180^{\circ}\). In other words, the two opposite structures are displaced in the same direction perpendicular to the symmetry axis.

In Figure 2 I present the \(9^{\circ}\)-bent morphological feature of the ears of SNR G1.9+0.3. I construct it as follows. I circle the green-coloured arc at the base of the upper (western) ear with a dashed-white line. I also circle with dashed-black lines the three red-yellow peaks inside this arc. I then copy this entire structure to the bottom (eastern) ear, rotate it around itself by \(180^{\circ}\), and displace it to match the arc at the base of the eastern ear. I enlarge the bottom (eastern) arc in the inset on the lower right of Figure 2. I find that the best match of the two twin arcs is when the angle through the center is \(171^{\circ}\) instead of \(180^{\circ}\), as marked on Figure 2. I also added to the figure two yellow arrows at \(171^{\circ}\) to each other, each arrow through the tip of an ear. The four bent double-headed arrows in Figure 2 define the \(9^{\circ}\)-bent point-symmetrical morphological component of SNR G1.9+0.3.

Based on the classification of bent-morphology planetary nebulae, I consider the \(9^{\circ}\) bend to be significant. For example, the planetary nebula NGC 6826 is classified as having a bent morphology (Soker & Hadar, 2002), although its bending angle is only \(7^{\circ}\). The features on which I base the bent morphology are bright, namely, two opposite tangential arcs (marked by dashed-white lines), with bright clumps inside each of the two arcs. Overall, I consider the bent morphology to be observationally significant.

I note that Chiotellis, Boumis, & Spetsieri (2021) consider the ears to form in the equatorial plane. This cannot account for a point-symmetry near the ears as I find here. The point-symmetry that I identify in SNR G1.9+0.3 shows in very strong terms that the ears are along the polar directions (e.g., Tsebrenko & Soker 2013, 2015b) and not in the equatorial plane. Most likely, jets shaped the point-symmetrical structure of SNR G1.9+0.3 through their shaping of a PN shell. This brings me to discuss this SNR as a SNIP.

Figure 1: An X-ray image with CO contours from Enokiya et al. (2023). The ellipse and coordinate lines are in the original image. My additions are the double-headed arrows with dashed-line continuations and the marks of the two ears. The center of each double-headed arrow is at the center of the image (where the two black lines cross each other). The six red double-headed arrows DHA-a to DHA-f point at what I interpret as twin clumps of a point-symmetric structure, with DHA-\(\tau\) indicating a tentative pair due to the small and relatively faint clumps. The two white double-headed arrows signify that, although each double-headed arrow points at two clumps on opposite sides of the center, I do not consider them as point-symmetry twins. My interpretation of the point-symmetric structure of these clumps is in Fig. 2.

## 4 Discussion and Summary

In this short study I analyzed a new X-ray image of SNR G1.9+0.3 (Enokiya et al. 2023) and revealed a clear point-symmetric morphology.
I now discuss the possible implications for the SN Ia scenario that best explains the youngest SN Ia in the Galaxy.

Figures 1 and 2 present the point-symmetric structural features (the point-symmetric morphology) that I identify in SNR G1.9+0.3. In addition to the ears, there are several pairs of clumps and arcs that I identify. In several pairs, and in one tentative pair, the two twin clumps/arcs are in opposite directions, sometimes at somewhat different distances from the center (Figure 1). The ears and the arc at the base of each ear form a bent point-symmetrical structure, as marked by DHA-0 to DHA-3 in Figure 2.

The point-symmetric structure that I identify in SNR G1.9+0.3 is composed of opposite pairs of clumps/arcs/ears that have different directions (the directions of the double-headed arrows). Opposite pairs of jets with varying axis directions, e.g., due to precession, form such structures in a rich variety of astrophysical systems, from PNe to jet-shaped bubbles in clusters of galaxies. Since explosion models of SNe Ia do not have jets with varying directions (section 2), the most likely explanation is that the ejecta of SNR G1.9+0.3 expands into a point-symmetric CSM. The substantial deceleration of the ejecta of SNR G1.9+0.3 requires a massive CSM, which is more likely to be a PN that was expelled during a CEE in the CD scenario than an AGB wind in the SD scenario (section 2). Although the DD and the DDet scenarios might also occur shortly after the CEE, the probability for that is much lower than in the CD scenario (section 2). Also, based on the upper bound of its \({}^{44}\)Ti abundance, Kosakowski et al. (2023) argue that SNR G1.9+0.3 is most consistent with a near-\(M_{\rm Ch}\) progenitor. The CD scenario is compatible with that finding.

The interaction of the ejecta with the PN started some tens of years ago at a radius of \(\lesssim 1\) pc. PNe can have such sizes, e.g., the PN IPHASX J055226.2+323724 in the open cluster M37 (Fragkou et al., 2022) with an age of \(\simeq 10^{5}\) yr (Fragkou et al., 2022; Werner et al., 2023). Therefore, the explosion could have taken place while the PN was still shining, rather than into an old post-PN shell. I conclude that the most likely explanation for the point symmetry of SNR G1.9+0.3 is a SNIP, where the explosion took place into a PN (rather than a remnant of a PN). The explosion destroyed the WD, and hence also destroyed the PN.

Figure 2: Presentation of the bent point-symmetrical structure of SNR G1.9+0.3. The original X-ray image from Enokiya et al. (2023) is the same as in Figure 1. I marked the arc at the base of the upper (western) ear with a dashed-white line and its three peaks (yellow-red) with three dashed-black lines. DHA-1 to DHA-3 point at these clumps. I copied and rotated this structure around itself by \(180^{\circ}\) and matched it to the arc at the base of the bottom (eastern) ear. The inset on the lower right enlarges this region. There is a \(9^{\circ}\) bent point symmetry of the two ears (DHA-0) and of the two base arcs (DHA-1 to DHA-3).

## Acknowledgments

I thank an anonymous referee for helpful comments. This research was supported by a grant from the Israel Science Foundation (769/20).
2309.04820
ABC Easy as 123: A Blind Counter for Exemplar-Free Multi-Class Class-agnostic Counting
Class-agnostic counting methods enumerate objects of an arbitrary class, providing tremendous utility in many fields. Prior works have limited usefulness as they require either a set of examples of the type to be counted or that the query image contains only a single type of object. A significant factor in these shortcomings is the lack of a dataset to properly address counting in settings with more than one kind of object present. To address these issues, we propose the first Multi-class, Class-Agnostic Counting dataset (MCAC) and A Blind Counter (ABC123), a method that can count multiple types of objects simultaneously without using examples of the type during training or inference. ABC123 introduces a new paradigm where instead of requiring exemplars to guide the enumeration, examples are found after the counting stage to help a user understand the generated outputs. We show that ABC123 outperforms contemporary methods on MCAC without needing human in-the-loop annotations. We also show that this performance transfers to FSC-147, the standard class-agnostic counting dataset. MCAC is available at MCAC.active.vision and ABC123 is available at ABC123.active.vision.
Michael A. Hobley, Victor A. Prisacariu
2023-09-09T15:18:46Z
http://arxiv.org/abs/2309.04820v2
# ABC Easy as 123: A Blind Counter for Exemplar-Free Multi-Class Class-agnostic Counting

###### Abstract

Class-agnostic counting methods enumerate objects of an arbitrary class, providing tremendous utility in many fields. Prior works have limited usefulness as they require either a set of examples of the type to be counted or that the image contains only a single type of object. A significant factor in these shortcomings is the lack of a dataset to properly address counting in settings with more than one kind of object present. To address these issues, we propose the first Multi-class, Class-Agnostic Counting dataset (MCAC) and A Blind Counter (ABC123), a method that can count multiple types of objects simultaneously without using examples of the type during training or inference. ABC123 introduces a new paradigm where instead of requiring exemplars to guide the enumeration, examples are found after the counting stage to help a user understand the generated outputs. We show that ABC123 outperforms contemporary methods on MCAC without the requirement of human in-the-loop annotations. We also show that this performance transfers to FSC-147, the standard class-agnostic counting dataset. Project page: ABC123.active.vision

## 1 Introduction

Given an image and told to 'count', a person would generally understand the intended task and complete it with accuracy, even if the kind of objects present has not been previously observed. However, in cases with a large number of objects, this is likely very slow. This natural human ability to count arbitrarily has not been modelled by today's datasets or methods. Current automated methods can count with a fair level of accuracy, so long as they have either an exemplar image as a prior on the type to count or an image with only one class of object. Since real-world situations will likely include objects of multiple classes, methods need to count accurately in these environments.

We first introduce MCAC, a new synthetic multi-class class-agnostic counting dataset, and show that methods previously assumed to work in multi-class settings perform poorly on it. We then propose ABC123, a multi-class class-agnostic counter which does not need exemplars during training or inference. ABC123 significantly outperforms prior works on MCAC while also generalising to other datasets, mirroring the arbitrary nature of human counting abilities.

Our contributions are:

* We introduce MCAC, the first multi-class class-agnostic counting dataset.
* We propose ABC123, the first exemplar-free multi-class class-agnostic counter.
* We show that prior methods do not perform as expected in multi-class settings and that ABC123 tackles multi-class counting effectively.

Figure 1: **ABC123 counts objects of multiple unseen types**. Our method not only does not need exemplars to define the type to count but also finds examples of each type it has counted.

## 2 Related Work

Class-specific counting methods aim to enumerate the instances of a single or small set of known classes [1, 5, 8, 23]. These methods struggle to adapt to novel classes and need new data and retraining for each type of object. To address these issues, Lu et al. [12] proposed class-agnostic counting, a framework in which the classes present at inference time need not have been seen during training. Still, most class-agnostic methods [16, 19, 25], including this one, require exemplar images of the object class at test time.
These methods generally work by creating a sufficiently general feature space and applying some form of matching to the whole feature map [18, 25] or to proposed regions of interest [16, 19]. Recent works, RepRPN [15], CounTR [11], ZSC [24] and RCC [6], do away with exemplar images at inference time, removing the need for intervention during deployment. RepRPN is a two-step method which proposes regions likely to contain an object of interest and then uses them for an exemplar-based density map regression method. It proposes more than one bounding box and enumerates them separately. ZSC [24] uses a multi-stage process in which a text input is used to generate a generic image of the type to be counted, which is then used to find exemplar patches. These exemplar patches then function as the input to an exemplar-based method [18]. CounTR uses a large vision-transformer encoder-decoder to regress a density map of instance locations. It is trained in a mixed few/zero-shot way, applying understanding gained from exemplar-based examples to exemplar-free cases.

It has been assumed that the above methods can function in multi-class settings. However, this has not been proven rigorously because the main dataset for class-agnostic counting (FSC-147/133 [6, 16]) contains only one labelled class per image. In fact, we show in Sec. 5 that these methods perform poorly in contexts with multiple types present. FSC-147 being single-class has also explicitly motivated work such as RCC, which regresses a single scalar value from an image. It is trained without exemplar images and uses only scalar supervision instead of density maps. Even with the constraint of only counting one kind of object, and with no further direction on the type to count, RCC achieves competitive results with exemplar-based methods on FSC-147, showing the limitations of this dataset. While large models with image inputs [14] like SAM [9] would seem to be able to count objects of arbitrary types effectively, in fact these methods have poor numerical understanding [13], and SAM performs unsatisfactorily on all counting tasks, especially on images with small objects or a high density of objects.

## 3 MCAC Dataset

### Motivation

There are currently no datasets suitable for multi-class class-agnostic counting problems. This significantly impacts the research into methods addressing these tasks. The use of FSC-147 as the main class-agnostic counting dataset has caused many problems, the lack of multi-class cases being one of the most significant. This has led to many methods being wrongly assumed to work in multi-class settings. It has also limited the development of exemplar-free multi-class methods. To this end, we introduce MCAC, the first multi-class class-agnostic counting dataset.

While the deployment query scenario, counting given an unlabelled image of objects, is natural, the training and quantitative evaluation of methods to address it is not. To facilitate training and evaluation of methods in multi-class settings, we need images with multiple objects of multiple types. To evaluate a method's generalisability to unseen object types, the classes present in the images need to be mutually exclusive between training, validation, and testing. It is infeasible to gather natural images with (a) a wide variety of classes, (b) a wide variety of the number of times an object appears in an image, and (c) no repetition of the types of object between the train, test, and validation splits.
Using synthetic images allows the above constraints to be satisfied while also providing a high level of precision and accuracy in the labels for each image. As shown in Sec. 5.3, the understanding gained from training on synthetic data is general enough to apply to the standard photographic counting dataset, FSC [16].

Figure 2: **An example image from the training set of MCAC.** All objects have associated instance labels, class labels, bounding boxes, centre points, and occlusion percentages.

### Ambiguity

Both exemplar-based and exemplar-free methods run into problems of ambiguity. If there are objects of varied levels of generality, which boundary should be used? For example, on a chess board with a single white pawn as the exemplar, should the count be of all the pieces, all the white pieces, all the white pawns, all the pawns, and so on? Given the infeasibility of defining every possible way of grouping the objects present in an image, we define a single way of grouping the objects: an identical mesh and texture, independent of size or orientation. We do, however, acknowledge the existence of other _valid-but-unknown_ counts, the unlabelled ways of grouping the objects.

### Dataset

MCAC contains images with between 1 and 4 classes of object and between 1 and 400 instances per class. The distributions of classes per image and instances per class are shown in Fig. 3. MCAC has three data splits: training, with 4756 images (8298 counts) drawn from 287 classes; validation, with 2413 images (3640 counts) drawn from 37 classes; and testing, with 2114 images (4286 counts) drawn from 19 classes. Each instance in an image has an associated class label, model label, center coordinates, bounding box coordinates, segmentation map, unoccluded segmentation map, and occlusion percentage. The occlusion percentage is calculated as \(1-\frac{A_{0}}{A_{1}}\), where \(A_{0}\) is the number of the object's pixels seen in the final image and \(A_{1}\) is the number of pixels that would be seen if the object were unoccluded and completely within the bounds of the image. Objects are 'dropped' into the scene, ensuring random locations and orientations. As objects in real settings often vary in size, we vary the size of objects by \(\pm\)50% from a random nominal size. We also vary the number, location, and intensity of the lights present. Models and textures are drawn from ShapeNetSem [17]. The exact data splits by class name are in Appendix A.

Figure 3: Distributions of the number of classes in an image and of the number of instances of each class per image across the dataset.

### Using MCAC

Here we lay out our suggested usage of MCAC, which we used to generate our results. We will also release code for a PyTorch dataset to enable easy adoption. Objects that are more than 70% occluded, by either other objects or the edge of the frame, should be excluded. The ground-truth density map has a Gaussian centred on the center pixel of each object, with \(\sigma=8\). As is standard [10, 11, 16, 25], when the Gaussians from different instances overlap, we sum them rather than taking the maximum value. We then scale the whole density map so the sum over it is the correct value. This adjustment improved our results by \(\sim 5\%\). When training exemplar-based methods, we take bounding boxes randomly from instances with less than 30% occlusion. We evaluate these methods using the bounding boxes of the three least occluded instances. In cases with more than three equally occluded instances, we use those with the lowest instance IDs.
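A minimal sketch of this ground-truth construction, assuming instance centres and occlusion fractions as inputs (the function and variable names are ours, not from the released MCAC code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ground_truth_density(centers, occlusions, shape=(224, 224), sigma=8.0):
    """Build a per-class ground-truth density map following Sec. 3.4:
    drop instances >70% occluded, place a Gaussian (sigma=8) at each centre,
    sum overlaps, then rescale so the map integrates to the kept count."""
    d = np.zeros(shape, dtype=np.float64)
    kept = 0
    for (row, col), occ in zip(centers, occlusions):
        if occ > 0.7:                  # exclude heavily occluded instances
            continue
        d[int(row), int(col)] += 1.0   # delta spike at the instance centre
        kept += 1
    d = gaussian_filter(d, sigma=sigma)  # overlapping Gaussians are summed
    if kept > 0:
        d *= kept / d.sum()            # rescale: sum over the map = count
    return d
```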
## 4 Method

Our method, ABC123, takes an image with multiple instances of objects of multiple types and regresses the count of each type. This is achieved _blind_, i.e., on objects of arbitrary classes, with no requirement to have seen the object class during training or to have an exemplar image to define the type during inference. We achieve this by first regressing density maps for each type and then enumerating the instances using integration. To facilitate training and evaluating ABC123 in an exemplar-free way, we propose a matching stage. To further increase the interpretability of the outputs of ABC123, we design an example discovery stage which finds specific instances of the counted object.

### Density Map Regression

For each image, there are \(m\) classes present, each with an associated ground truth count \(y\) and density map \(d\). We regress \(\hat{m}\) count and density map predictions, \(\hat{y}\) and \(\hat{d}\) respectively:

\[\hat{y}=\sum_{h,w}\hat{d}_{(h,w)} \tag{1}\]

where \(\hat{d}_{(h,w)}\) denotes the density value for pixel \((h,w)\). We achieve this by using \(\hat{m}\) convolutional up-sampling heads on top of a vision transformer backbone [4]. We use a vision transformer backbone due to its global receptive field and self-attention mechanism, which Hobley and Prisacariu [6] showed is crucial to generating a complex understanding in counting settings. Each head regresses a single pixel-wise density map prediction and count prediction from a patch-wise, low-resolution, high-dimensional feature space. Similar to other contemporary methods [10, 11, 16, 25], we use the pixel-wise error \(||d-\hat{d}||_{1}\) as our loss, where \(d\) and \(\hat{d}\) are the ground truth and predicted density maps. This produced more accurate results than using the L\({}_{2}\) distance between \(d\) and \(\hat{d}\); see Sec. 6.1 for the full comparison.
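As a sketch, one such head could look as follows in PyTorch; the channel widths are our assumption, with only the input/output resolutions (\(k\times 28\times 28\) to \(224\times 224\), see Sec. 4.4) and the count-by-integration rule of Eq. (1) taken from the paper:

```python
import torch
import torch.nn as nn

class CountingHead(nn.Module):
    """One of the m_hat heads: Conv-ReLU-Upsample blocks mapping k x 28 x 28
    transformer features to a 224 x 224 density map (hypothetical widths)."""
    def __init__(self, k: int = 384):
        super().__init__()
        blocks, c_in = [], k
        for c_out in (128, 64, 32):  # 28 -> 56 -> 112 -> 224
            blocks += [nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                       nn.Upsample(scale_factor=2, mode='bilinear',
                                   align_corners=False)]
            c_in = c_out
        blocks.append(nn.Conv2d(c_in, 1, 1))  # single-channel density map
        self.net = nn.Sequential(*blocks)

    def forward(self, feats: torch.Tensor):   # feats: (B, k, 28, 28)
        d_hat = self.net(feats)               # (B, 1, 224, 224)
        y_hat = d_hat.sum(dim=(1, 2, 3))      # Eq. (1): count = sum of density
        return d_hat, y_hat
```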
We use the Hungarian algorithm, specifically the Jonker-Volgenant algorithm outlined in Crouse [3], to solve for \(\mathcal{X}\) robustly. The supervision loss for each image is the sum of the L\({}_{1}\) difference of the ground truth density maps and their matched predictions as: \[\mathcal{L}=\sum_{i,j}^{m,\hat{m}}||d_{i}-\hat{d}_{j}||_{1}\cdot\mathcal{X}_{ i,j} \tag{4}\] It should be noted that every label has an associated prediction, but the inverse is not the case as generally \(\hat{m}>m\) Figure 4: **The ABC123 pipeline.** Our method learns to count objects of multiple novel classes without needing exemplar images. During training and quantitative evaluation, the matcher aligns the unguided predictions to the ground truth labels. The example prediction stage locates instances associated with each generated count. This means we do not impose a loss on the unmatched density maps. This allows the network to generate more nuanced count definitions as it does not punish valid-but-unknown counts which are likely present in any counting setting. As is usual [7, 21, 22, 26], we use the same matching procedure to evaluate our performance at inference-time. It should be noted that as this matching uses the ground-truth density maps, it could be used to significantly benefit a method's quantitative results without improving its deployment capabilities. Specifically, a method could in principle predict all possible density maps and use the matching stage to pick the correct one. We limit ourselves to generating 5 density maps to minimise this behaviour. We explore the effect of this further in Sec. 6.3. We also normalise the density maps between 0 and 1 to ensure we are only matching to the locations of objects rather than the counts themselves. ### Example Discovery While exemplar-free counting saves a user time, as no manual intervention is required, it does require the user to interpret the results. A set of scalar count values can be unclear as it is not always obvious which count corresponds to which type of object in the input image. Density maps can also often be difficult to interpret, especially in high density situations. To aid the user in understanding to which class a generated count corresponds, we propose flipping the usual exemplar-based paradigm. Instead of using exemplar images to define the type to count, we find examples of the type counted. We achieve this by using the peaks from the predicted density map as seed inputs for a pre-trained segmentation method [9]. While this segmentation method is often not accurate enough to segment a singular whole object given only a singular point, they are generally good enough that an expanded bounding box around them contains enough information for a user to understand the class of object that has been counted; see Fig. 7 for examples. ### Implementation We use ViT-Small [4] due to its lightweight nature and for comparison to methods that use the resnet-50 backbone, such as FamNet [16, 18]. ViT-S has a similar number of parameters (21M vs 23M), throughput (1237im/sec vs 1007im/sec), and supervised ImageNet performance (79.3% vs 79.8%) as ResNet-50 [20]. ABC123 is trainable in less than eight hours using two 1080Tis. It takes less than two hours to train just the head with a frozen backbone (ABC123%). Since vision transformers typically demand substantial training data, we initialised our transformer backbone with weights sourced from Caron et al. [2]. 
### Example Discovery

While exemplar-free counting saves a user time, as no manual intervention is required, it does require the user to interpret the results. A set of scalar count values can be unclear, as it is not always obvious which count corresponds to which type of object in the input image. Density maps can also often be difficult to interpret, especially in high-density situations. To aid the user in understanding to which class a generated count corresponds, we propose flipping the usual exemplar-based paradigm. Instead of using exemplar images to define the type to count, we find examples of the type counted. We achieve this by using the peaks of the predicted density map as seed inputs for a pre-trained segmentation method [9]. While this segmentation method is often not accurate enough to segment a single whole object given only a single point, it is generally good enough that an expanded bounding box around its output contains enough information for a user to understand the class of object that has been counted; see Fig. 7 for examples.

### Implementation

We use ViT-Small [4] due to its lightweight nature and for comparison to methods that use the ResNet-50 backbone, such as FamNet [16, 18]. ViT-S has a similar number of parameters (21M vs 23M), throughput (1237 im/sec vs 1007 im/sec), and supervised ImageNet performance (79.3% vs 79.8%) as ResNet-50 [20]. ABC123 is trainable in less than eight hours using two 1080Tis. It takes less than two hours to train just the head with a frozen backbone (ABC123§). Since vision transformers typically demand substantial training data, we initialised our transformer backbone with weights sourced from Caron et al. [2]. This self-supervised pre-training endows the network with an understanding of meaningful image features prior to exposure to our dataset and without supervision. This approach reduces the risk of overfitting when the model is then trained on our dataset.

Our counting heads, each comprising 3 Conv-ReLU-Upsample blocks, increase the patch-wise resolution of the trained counting features from \(k\times(28\times 28)\) to a pixel-wise density map prediction of \(\hat{m}\times(224\times 224)\), where \(k\) is the dimensionality of the transformer features and \(\hat{m}\) is the number of predicted counts. For ABC123, \(k=384\). We set \(\hat{m}=5\) to ensure that the method has the capacity to generate a count per defined class in the dataset and at least one valid-but-unknown count. Our choice of ViT-S limits the resolution of our input image to (224\(\times\)224), as opposed to the \(\geq\)(384\(\times\)384) resolution used by contemporary methods with ResNet-50 or larger ViT backbones. For the example discovery stage, we use a frozen pretrained ViT-B SAM model [9].

## 5 Results

### Benchmarking Methods

We evaluate our method against two trivial baselines: predicting the training-set mean or median count for all inference images. As there are no previous multi-class exemplar-free class-agnostic counting methods, we compare ABC123 to exemplar-based methods using separate exemplars from each of the classes present. We compare our method to FamNet [16], BMNet [18] and CounTR [11] on MCAC. Additionally, we compare to them, and also to RCC [6] and CounTR in its zero-shot configuration, on MCAC-M1, the subset of MCAC with only a single type of object present per image. To ensure a fair comparison, we follow the suggested procedure laid out in Sec. 3.4, evaluating the exemplar-based methods using the bounding boxes of the three least occluded instances of a given class. As in Xu et al. [24], we use Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Normalized Absolute Error (NAE), and Squared Relative Error (SRE) to evaluate the performance of each method:

\[MAE=\frac{1}{nm}\sum_{j=1}^{n}\sum_{i=1}^{m_{j}}|y_{i}-\hat{y}_{i}|\qquad RMSE=\sqrt{\frac{1}{nm}\sum_{j=1}^{n}\sum_{i=1}^{m_{j}}(y_{i}-\hat{y}_{i})^{2}}\]

\[NAE=\frac{1}{nm}\sum_{j=1}^{n}\sum_{i=1}^{m_{j}}\frac{|y_{i}-\hat{y}_{i}|}{y_{i}}\qquad SRE=\sqrt{\frac{1}{nm}\sum_{j=1}^{n}\sum_{i=1}^{m_{j}}\frac{(y_{i}-\hat{y}_{i})^{2}}{y_{i}}}\]

where \(n\) is the number of test images, \(m_{j}\) is the number of classes in image \(j\), and \(y_{i}\) and \(\hat{y}_{i}\) are the ground truth and the predicted number of objects of class \(i\) in image \(j\), respectively.
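For reference, a direct implementation of these four metrics (treating the \(1/nm\) normalisation as a mean over all image-class pairs, which is our reading of the formulas):

```python
import numpy as np

def counting_metrics(y_true, y_pred):
    """MAE, RMSE, NAE and SRE over flat arrays with one entry per
    (image, class) pair, i.e. sum_j m_j entries in total."""
    y = np.asarray(y_true, dtype=float)
    y_hat = np.asarray(y_pred, dtype=float)
    err = y - y_hat
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    nae = np.mean(np.abs(err) / y)           # assumes all ground truths > 0
    sre = np.sqrt(np.mean(err ** 2 / y))
    return {"MAE": mae, "RMSE": rmse, "NAE": nae, "SRE": sre}
```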
A downside to current exemplar-based class-agnostic counting methods is that while they have some multi-class capabilities, they all take a single exemplar at a time and produce only one count. This is slow and inefficient as compared to our method which generates all counts simultaneously. As would be expected, the performance of all methods improves when evaluating on MCAC-M1, the images from MCAC with only a single class present; see Tab. 2. This is due to a lack of ambiguity as to the type to be counted. This was more significant when the methods were trained on MCAC-M1 instead of MCAC. In this training configuration, the methods generally learnt a broader definition of similarity as there was no chance they would accidentally combine classes or count instances from another class. RCC performs well on MCAC-M1, showing the strength of the simple count-wise loss in cases where there is little ambiguity as to what is to be counted. In contrast to other methods, ABC123 trained on MCAC-M1 has similar performance to when it is trained on the full MCAC dataset, demonstrating that it avoids the issues with other methods concerning intra-class variance and combining classes. Training ABC123 with only a single head (\(\hat{m}=1\)) and no matching stage has very similar performance to using its default (\(\hat{m}=5\)) configuration with a matching stage. This increases our belief that the matching head does not provide an unfair advantage to our method's quantitative results. ### Applicability to FSC-147/133 ABC123 trained on only MCAC, a synthetic dataset, produces accurate results when applied to FSC-133/147, a photographic dataset. However, it often finds valid-but-unknown counts. As seen in Fig. 8, the generated counts are correct for the type of object counted, but the type counted is often not aligned with the labels in the original dataset. Classes are often divided into sub-classes, and unlabelled classes are discovered. There are two clear discrepancies between MCAC and FSC: **Cross domain image differences.** As has been noted in many papers, applying a method solely trained on synthetic data to real data is likely to have negative results. Since images in the MCAC dataset are synthetic and simple, they do not reflect all of the variation found in FSC. For example, all images in MCAC are taken from above with no camera noise or harsh shadows. That being said, as is seen in Fig. 8, the generalised counting ability gained from train Figure 5: **Comparison to other methods on MCAC.** ABC123 produces more accurate results than the exemplar-based methods without using exemplar images. The ground truth (GT) and predicted counts are shown in the top right corner of their respective density maps. ing on MCAC clearly translates into the more varied and complex images in FSC. **Inter and Intra class differences.** As discussed in Sec. 3, the labels in MCAC associate a count with objects of the same mesh and texture. FSC, however, is labelled by hand using high-level semantic understanding. This often leads to grouping of objects of significantly different geometries, colours, or textures under the same label. As seen in Fig. 8, when applying ABC123 trained on MCAC to images from FSC, we find that it often counts unlabelled classes, generates multiple identical counts because it enumerates different parts of the same objects, or subdivides classes by colour or geometry such as splitting 'chairs' into'red chairs' and 'grey chairs'. 
Novel class-discovery and duplicate part counts do not affect quantitative results on FSC as the matching stage removes them. However, sub-class counting causes erroneous numerical results. As our matching stage does not combine sub-class counts, only one of the sub-class counts is associated with the human-labelled count. This generates a high numerical error and leads to poor quantitative results in the standard benchmark tests. ## 6 Ablations In this section, we explain and validate various design decisions, including our loss and matching-stage scaling. We also show the effect that generating large numbers of predictions can have on quantitative benchmarks. ### Validating Our Loss Hobley and Prisacariu [6] showed that using image-wise count loss functions like the absolute count error could be beneficial over pixel-wise loss functions as they allow a network to learn its own idea of positional salience. We test our loss, the pixel-wise error \(||d-\hat{d}||_{1}\), against both the pixel-wise error squared \(||d-\hat{d}||_{2}^{2}\), as in [11, 16, 24], and the image-wise count percentage error \(\frac{|y-\hat{y}|}{y}\)[6], where \(d\) and \(\hat{d}\) are the ground truth and predicted density maps, and \(y\) and \(\hat{y}\) are the ground truth and predicted counts respectively. Using an image-wise count loss creates poor results, with somewhat arbitrary density maps and incorrect counts. This is likely due to the matching stage. Whereas it usually operates on the density map to which the loss is directly applied, it now operates over an unconstrained latent feature. This, along with the added complexity of multi-class images, makes the problem significantly harder to understand from purely integer supervision. We also found that using a pixel-wise L\({}_{2}\) loss slightly degraded performance as compared to using the L\({}_{1}\) distance between the ground-truth and predicted density maps. Still, the predicted density maps were qualitatively meaningful. This discrepancy is likely due to the increased significance the higher-density edge-cases have on the training. See Tab. 3 for the quantitative comparison. Figure 6: **Results of our method on images with 4 classes.** ABC123 is able to generate accurate counts and meaningful density maps from images with four novel classes. MCAC has between one and four classes of object per image. Figure 7: **Example Finding.** Using our density map predictions and a SAM [9] network, we can generate meaningful bounding boxes to aid a user in understanding what has been counted. ### Validating Our Matching Scaling We found that normalising the density maps during training and testing was beneficial because it forced the network to generate the correct localisation as well as the correct magnitude. This normalisation also decreases the issue of the matcher cheating to find the correct count. Instead of selecting the correct count from a large set of predicted counts, the network must generate a density map with a correct localisation. Using the L\({}_{2}\) normalisation over the L\({}_{\infty}\) gives a slight performance increase both for training the network and during evaluation. See Tab. 4 for a quantitative comparison of the normalisations during the matching stage. ### Validating Our Number of Predictions As predicted, generating much larger numbers of predictions increases the quantitative performance of the network; see Tab. 5 for the complete results. This is because the network can generate more diverse counts and use the matching stage to select the best one.
We believe, however, that this does not align with a more useful network in a deployment situation. This numerical gain derives purely \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{Val Set} & \multicolumn{4}{c}{Test Set} \\ \cline{3-10} Method & Shots & MAE & RMSE & NAE & SRE & MAE & RMSE & NAE & SRE \\ \hline Mean & N/A & 39.87 & 53.56 & 3.07 & 11.40 & 42.67 & 59.68 & 2.79 & 10.93 \\ Median & N/A & 36.25 & 58.15 & 1.51 & 6.70 & 39.81 & 65.36 & 1.38 & 6.73 \\ \hline \multicolumn{10}{l}{_Exemplar-based_} \\ FamNet+ [16] & 3 & 24.76 & 41.12 & 1.12 & 6.86 & 26.40 & 45.52 & 1.04 & 6.87 \\ BMNet+ [18] & 3 & 15.83 & 27.07 & 0.71 & 4.97 & 17.29 & 29.83 & 0.75 & 6.08 \\ CounTR [11] & 3 & 15.07 & 26.26 & 0.63 & 4.79 & 16.12 & 29.28 & 0.67 & 5.71 \\ \hline \multicolumn{10}{l}{_Exemplar-free_} \\ ABC123 \(\lx@sectionsign\) & 0 & 14.64 & 23.67 & 0.46 & 2.97 & 15.76 & 25.72 & 0.45 & 3.11 \\ ABC123 & 0 & **8.96** & **15.93** & **0.29** & **2.02** & **9.52** & **17.64** & **0.28** & **2.23** \\ \hline \hline \end{tabular} \end{table} Table 1: **Comparison to SOTA methods on MCAC.** We significantly outperform methods which use exemplar images and test-time adaptation without requiring them. ABC123\(\lx@sectionsign\) denotes our method trained with a frozen pre-trained backbone. \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline & & & \multicolumn{4}{c}{Val Set} & \multicolumn{4}{c}{Test Set} \\ \cline{3-10} Method & Multi-Class Training & Shots & \(\hat{m}\) & MAE & RMSE & NAE & SRE & MAE & RMSE & NAE & SRE \\ \hline Mean & N/A & N/A & N/A & 53.36 & 67.14 & 3.53 & 13.46 & 58.54 & 75.58 & 3.37 & 13.27 \\ Median & N/A & N/A & N/A & 45.98 & 76.64 & 1.08 & 6.68 & 51.35 & 86.61 & 1.03 & 7.00 \\ \hline \multicolumn{10}{l}{_Exemplar-based_} \\ FamNet+ [16] & ✓ & 3 & 1 & 24.97 & 48.63 & 0.36 & 3.79 & 28.31 & 54.88 & 0.35 & 3.97 \\ FamNet+ [16] & ✗ & 3 & 1 & 12.54 & 24.69 & 0.37 & 4.71 & 13.97 & 26.19 & 0.25 & 2.12 \\ BMNet+ [18] & ✓ & 3 & 1 & 11.70 & 23.08 & 0.26 & 2.39 & 11.57 & 22.25 & 0.24 & 1.96 \\ BMNet+ [18] & ✗ & 3 & 1 & 6.82 & 12.84 & 0.25 & 2.95 & 8.05 & 14.57 & 0.19 & 1.43 \\ CountTR [11] & ✓ & 3 & 1 & 11.44 & 21.37 & 0.33 & 2.36 & 10.91 & 21.70 & 0.29 & 2.01 \\ CounTR [11] & ✓ & 0 & 1 & 13.57 & 25.53 & 0.30 & 2.48 & 13.09 & 25.72 & 0.29 & 2.41 \\ CounTR [11] & ✗ & 3 & 1 & 9.00 & 16.91 & 0.41 & 3.56 & 9.96 & 18.92 & 0.38 & 2.93 \\ CounTR [11] & ✗ & 0 & 1 & 9.16 & 17.13 & 0.42 & 3.56 & 10.10 & 19.10 & 0.40 & 3.02 \\ \hline \multicolumn{10}{l}{_Exemplar-free_} \\ CountTR+[11] & ✗ & 0 & 1 & 11.46 & 21.24 & 0.35 & 2.78 & 12.54 & 23.84 & 0.31 & 2.38 \\ RCC [6] & ✗ & 0 & 1 & 7.78 & 15.40 & 0.24 & 2.71 & 8.81 & 16.92 & 0.19 & 1.73 \\ ABC123 \(\lx@sectionsign\) & ✗ & 0 & 5 & 10.78 & 18.83 & 0.28 & 1.97 & 13.23 & 24.57 & 0.29 & 2.39 \\ ABC123 \(\lx@sectionsign\) & ✗ & 0 & 1 & 11.38 & 19.73 & 0.40 & 3.51 & 14.31 & 25.40 & 0.37 & 2.79 \\ ABC123 \(\lx@sectionsign\) & ✓ & 0 & 5 & 10.98 & 18.85 & 0.30 & 1.93 & 13.13 & 23.93 & 0.29 & 2.18 \\ ABC123 & ✗ & 0 & 5 & **5.82** & **11.74** & **0.15** & **1.22** & 7.54 & 15.30 & 0.21 & 1.87 \\ ABC123 & ✗ & 0 & 1 & 5.85 & 12.91 & 0.24 & 3.37 & 7.53 & 15.69 & 0.22 & 2.19 \\ ABC123 & ✓ & 0 & 5 & 6.08 & 12.62 & 0.16 & **1.22** & **6.82** & **14.70** & **0.16** & **1.51** \\ \hline \hline \end{tabular} \end{table} Table 2: **Comparison to SOTA methods on MCAC-M1.** MCAC-M1 is the subset of MCAC with only one class present per image. Methods are either trained on the full multi-class dataset (✓) or MCAC-M1 (✗). 
‘\(\hat{m}\)’ denotes the number of predictions the method generates per query and ‘Shots’ denotes the number of exemplar images per query at inference time. CounTR† is an exemplar-free adaptation of CounTR as in Hobley and Prisacariu [6]. ABC123 outperforms other methods when trained in both single- and multi-class settings. from the matching stage, which is not present during deployment. In fact, during deployment, this would correspond to a much harder-to-interpret output, as a user would have to figure out which of the many outputs was most relevant. We found that when high numbers of predictions were generated, fewer than half were used, i.e. the outputs of some heads were never picked. This is likely due to these heads not being matched frequently during training, so the loss is rarely propagated back through them. There is also significant redundancy between the heads. The predictions of certain heads over the whole dataset were clearly similar and could be grouped. Of the 39 utilised heads, there were three groups of, respectively, 13, 6, and 4 heads that were very similar, lowering the effective number of utilised heads to 19. ## 7 Conclusion In this work, we present ABC123, a multi-class exemplar-free class-agnostic counter, and show that it is superior to prior exemplar-based methods in a multi-class setting. ABC123 requires no human input at inference time, works in complex settings with more than one kind of object present, and outputs easy-to-understand information in the form of examples of the counted objects. Due to this, it has good potential for deployment in various fields. We also propose MCAC, a multi-class class-agnostic counting dataset, and use it to train our method as well as to demonstrate that exemplar-based counting methods may not be as robust as previously assumed in multi-class settings.
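As context for the normalisation ablation in Tab. 4 below, here is a hypothetical sketch of a matching stage over normalised density maps; the optimal one-to-one (Hungarian) assignment and the L\({}_{1}\) distance between maps are our assumptions, not details confirmed in this extract.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_counts(pred_maps, gt_maps, norm="l2"):
    # Assign each ground-truth class density map to the closest predicted
    # map after normalisation, then report the counts of the matched
    # predictions. `norm` selects the L2 or L-infinity scaling of Tab. 4.
    def normalise(m):
        s = np.linalg.norm(m) if norm == "l2" else np.abs(m).max()
        return m / (s + 1e-8)

    cost = np.zeros((len(gt_maps), len(pred_maps)))
    for i, g in enumerate(gt_maps):
        for j, p in enumerate(pred_maps):
            cost[i, j] = np.abs(normalise(g) - normalise(p)).sum()
    rows, cols = linear_sum_assignment(cost)          # assumed Hungarian matching
    return {i: pred_maps[j].sum() for i, j in zip(rows, cols)}  # class -> count
```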
\begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Normalisation} & \multicolumn{4}{c}{Val Set} & \multicolumn{4}{c}{Test Set} \\ \cline{2-11} Method & Train & Eval & MAE & RMSE & NAE & SRE & MAE & RMSE & NAE & SRE \\ \hline ABC123 \(\lx@sectionsign\) & None & None & 20.31 & 32.40 & 0.78 & 5.40 & 22.67 & 36.63 & 0.92 & 7.25 \\ ABC123 \(\lx@sectionsign\) & None & L\({}_{2}\) & 21.38 & 35.08 & 0.86 & 6.43 & 25.46 & 43.37 & 1.05 & 8.67 \\ ABC123 \(\lx@sectionsign\) & None & L\({}_{\infty}\) & 20.13 & 32.15 & 0.81 & 5.86 & 22.76 & 36.78 & 0.95 & 7.64 \\ ABC123 \(\lx@sectionsign\) & L\({}_{2}\) & None & 41.20 & 66.04 & 0.87 & 6.22 & 45.04 & 73.41 & 0.87 & 6.51 \\ ABC123 \(\lx@sectionsign\) & L\({}_{2}\) & L\({}_{2}\) & **14.64** & **23.67** & **0.46** & **2.97** & **15.76** & **25.72** & **0.45** & **3.11** \\ ABC123 \(\lx@sectionsign\) & L\({}_{2}\) & L\({}_{\infty}\) & 17.21 & 27.94 & 0.51 & 3.28 & 17.97 & 29.36 & 0.48 & 3.30 \\ ABC123 \(\lx@sectionsign\) & L\({}_{\infty}\) & None & 22.40 & 35.80 & 0.73 & 4.89 & 23.93 & 38.59 & 0.80 & 5.86 \\ ABC123 \(\lx@sectionsign\) & L\({}_{\infty}\) & L\({}_{2}\) & 17.92 & 28.98 & 0.67 & 5.12 & 20.40 & 33.39 & 0.77 & 6.24 \\ ABC123 \(\lx@sectionsign\) & L\({}_{\infty}\) & L\({}_{\infty}\) & 19.33 & 30.75 & 0.67 & 4.77 & 20.84 & 33.61 & 0.74 & 5.91 \\ ABC123 & None & None & 15.46 & 28.49 & 0.60 & 5.12 & 16.15 & 29.82 & 0.69 & 6.50 \\ ABC123 & None & L\({}_{2}\) & 14.83 & 27.25 & 0.68 & 6.15 & 17.04 & 31.88 & 0.84 & 8.06 \\ ABC123 & None & L\({}_{\infty}\) & 14.75 & 27.41 & 0.65 & 5.92 & 16.54 & 30.80 & 0.77 & 7.45 \\ ABC123 & L\({}_{2}\) & None & 35.29 & 61.12 & 0.67 & 5.55 & 38.15 & 67.89 & 0.64 & 5.78 \\ ABC123 & L\({}_{2}\) & L\({}_{2}\) & **8.96** & **15.93** & **0.29** & **2.02** & **9.52** & **17.64** & **0.28** & 2.23 \\ ABC123 & L\({}_{2}\) & L\({}_{\infty}\) & 9.95 & 18.16 & 0.32 & 2.22 & 9.89 & 18.55 & 0.29 & **2.20** \\ ABC123 & L\({}_{\infty}\) & None & 15.72 & 28.83 & 0.59 & 4.69 & 16.11 & 30.22 & 0.67 & 6.16 \\ ABC123 & L\({}_{\infty}\) & L\({}_{2}\) & 12.17 & 22.81 & 0.57 & 5.09 & 13.97 & 27.66 & 0.71 & 7.05 \\ ABC123 & L\({}_{\infty}\) & L\({}_{\infty}\) & 12.46 & 23.00 & 0.56 & 4.90 & 13.96 & 27.12 & 0.66 & 6.37 \\ \hline \hline \end{tabular} \end{table} Table 4: **Effect of using normalised density maps for the matching stage.** Normalisation improves the accuracy of ABC123. \(L_{2}\) normalisation is more effective than \(L_{\infty}\) during training and evaluation.
\begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{Val Set} & \multicolumn{4}{c}{Test Set} \\ \cline{3-10} Method & Loss & MAE & RMSE & NAE & SRE & MAE & RMSE & NAE & SRE \\ \hline ABC123 \(\lx@sectionsign\) & Count-MAPE & 19.88 & 38.20 & 0.48 & 3.62 & 25.32 & 49.18 & 0.52 & 4.28 \\ ABC123 \(\lx@sectionsign\) & Pixel \(L_{2}\) & 16.46 & 26.54 & 0.78 & 4.79 & 17.92 & 28.88 & 0.75 & 4.77 \\ ABC123 \(\lx@sectionsign\) & Pixel \(L_{1}\) & **14.64** & **23.67** & **0.46** & **2.97** & **15.76** & **25.72** & **0.45** & **3.11** \\ ABC123 & Count-MAPE & 15.95 & 29.71 & 0.50 & 3.54 & 16.28 & 29.82 & 0.49 & 3.58 \\ ABC123 & Pixel \(L_{2}\) & 10.76 & 19.31 & 0.49 & 3.84 & 11.91 & 21.42 & 0.51 & 4.21 \\ ABC123 & Pixel \(L_{1}\) & **8.96** & **15.93** & **0.29** & **2.02** & **9.52** & **17.64** & **0.28** & **2.23** \\ \hline \hline \end{tabular} \end{table} Table 3: **Effect of different loss functions during training.** Pixel-wise \(L_{1}\) loss produces significantly better results than the pixel-wise \(L_{2}\) loss and the count-wise MAPE.
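The three training objectives compared in Tab. 3 are straightforward to express. Below is a minimal PyTorch sketch of our own; the batch reduction and the sum-versus-mean convention over pixels are assumptions rather than details from the paper.

```python
import torch

def pixel_l1(d_hat, d):
    # Pixel-wise L1 between predicted and ground-truth density maps (default loss).
    return (d_hat - d).abs().sum(dim=(-2, -1)).mean()

def pixel_l2(d_hat, d):
    # Pixel-wise squared L2, as used in several prior counting works.
    return ((d_hat - d) ** 2).sum(dim=(-2, -1)).mean()

def count_mape(d_hat, d, eps=1e-8):
    # Image-wise count percentage error; counts are density-map integrals.
    y_hat, y = d_hat.sum(dim=(-2, -1)), d.sum(dim=(-2, -1))
    return ((y - y_hat).abs() / (y + eps)).mean()
```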
2307.00132
iMETRE: Incorporating Markers of Entity Types for Relation Extraction
Sentence-level relation extraction (RE) aims to identify the relationship between two entities given a contextual sentence. While there have been many attempts to solve this problem, current solutions leave considerable room for improvement. In this paper, we approach the task of relation extraction on the financial dataset REFinD. Our approach incorporates typed entity marker representations and various models finetuned on the dataset, which has allowed us to achieve an F1 score of 69.65% on the validation set. Through this paper, we discuss various approaches and possible limitations.
N Harsha Vardhan, Manav Chaudhary
2023-06-30T20:54:41Z
http://arxiv.org/abs/2307.00132v1
# iMETRE: Incorporating Markers of Entity Types for Relation Extraction ###### Abstract. Sentence-level relation extraction (RE) aims to identify the relationship between two entities given a contextual sentence. While there have been many attempts to solve this problem, current solutions leave considerable room for improvement. In this paper, we approach the task of relation extraction on the financial dataset REFinD (Friedman et al., 2017). Our approach incorporates typed entity marker representations and various models finetuned on the dataset, which has allowed us to achieve an \(F_{1}\) score of 69.65% on the validation set. Through this paper, we discuss various approaches and possible limitations. REFinD, Relation Extraction, Natural Language Processing, Finance, Information Retrieval leading to accurate relationship predictions. Typed Entity Markers are used in PLM-based RE models to mark entity spans and types using punctuation (Kal * FLANG-DistilBERT (Fang et al., 2019): FLANG-DistilBERT is built by further training the DistilBERT language model in the finance domain, with improved performance over previous models due to the use of domain knowledge and vocabulary. Since each of the models has been trained on different data and differs in architecture, there are slight variations in metrics across models. #### 2.3.2. Classification based on Entity Type Pair One of the shortcomings of treating the task as a \(22\)-label classification problem is that the classification is only weakly informed by the entity types. This is coupled with the fact that some of the relationships are semantically close and ambiguous. For example, \(member\_of\), \(employee\_of\), and \(founder\_of\) for entity pairs belonging to the PER-ORG, PER-UNIV, and PER-GOV groups are often confused. Since the data distribution is also non-uniform across classes, this increases the error rate. Hence we divided the classification task into eight classification tasks, one for each entity type pair, keeping DistilBERT as the fixed model for fine-tuning. The results obtained have been compiled and presented in Table 2. Although we can see better classification metrics in some entity-pair classes, the overall \(F_{1}\) score is relatively low. This is primarily due to the unique operationalization of the \(F_{1}\) used for grading on the validation set, as well as the disparity across entity-pair classes. Performance improves within some entity-pair classes because several relationships are semantically similar and entity markers alone fail to compensate adequately, whereas segregating such classes yields better class-wise results. Apart from these, given the unique operationalization of the \(F_{1}\) score used for grading in the shared task, an architecture that leverages the high percentage of \(NO\_RELATION\) entries, or improvements from larger variants of the above-mentioned architectures, are potential ideas of interest. ## 3 Conclusion We see that the classical entity mask and marker approaches (e.g., entity markers as in MTB) are slightly flawed due to a lack of understanding of numerical inferences, while the proposed punctuated typed entity markers are better at capturing the context of entity relations as well as detecting the entity spans.
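To make the marker format concrete, here is a toy sketch of punctuation-based typed entity markers; the specific "@ * type *" / "# ^ type ^" symbols follow a convention from prior typed-entity-marker work and are an assumption, not a detail taken from this extract.

```python
def insert_typed_markers(tokens, subj, obj):
    # subj / obj: (start, end, type) spans with `end` exclusive; spans are
    # assumed not to overlap. Marker symbols are hypothetical.
    s0, s1, st = subj
    o0, o1, ot = obj
    out = []
    for i, tok in enumerate(tokens):
        if i == s0:
            out += ["@", "*", st, "*"]      # open subject span with its type
        if i == o0:
            out += ["#", "^", ot, "^"]      # open object span with its type
        out.append(tok)
        if i == s1 - 1:
            out.append("@")                 # close subject span
        if i == o1 - 1:
            out.append("#")                 # close object span
    return out

tokens = "John Smith joined Acme Corp in 2020".split()
print(" ".join(insert_typed_markers(tokens, (0, 2, "person"), (3, 5, "org"))))
# -> @ * person * John Smith @ joined # ^ org ^ Acme Corp # in 2020
```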
We also observe that although the results seem consistent and equitable across models, XLNet-base outperforms the others. This can mainly be attributed to its bidirectional training which, in essence, captures the importance of the ordering of entities for relation prediction; moreover, given the structured textual forms in the REFinD dataset, permutation-based attention masking might also have played a crucial role. This model achieves an \(F_{1}\) score of \(69.65\%\) on the validation dataset. From the eight-task approach, we also see large improvements in some individual classes, which could be attributed to better separation between semantically similar relations. Although this approach still fails to outperform the overall \(F_{1}\) across models, mainly due to the disparity in class support, it might still be better for pair-specific classifications. ## 4 Future Work Theoretically, using larger models instead of the distilled and base variants should improve the results. Techniques like data augmentation to inflate classes with low support might also increase the overall efficiency and viability of the models and dataset. Other approaches that differentiate based on entity-type pairs and integrate this information into the final layer might also yield interesting results.
2307.16750
Iterated Resultants in CAD
Cylindrical Algebraic Decomposition (CAD) by projection and lifting requires many iterated univariate resultants. It has been observed that these often factor, but to date this has not been used to optimise implementations of CAD. We continue the investigation into such factorisations, writing in the specific context of SC-Square.
James H. Davenport, Matthew England
2023-07-31T15:16:48Z
http://arxiv.org/abs/2307.16750v1
# Iterated Resultants in CAD ###### Abstract Cylindrical Algebraic Decomposition (CAD) by projection and lifting requires many iterated univariate resultants. It has been observed that these often factor, but to date this has not been used to optimise implementations of CAD. We continue the investigation into such factorisations, writing in the specific context of SC\({}^{2}\). Cylindrical Algebraic Decomposition, Resultant, Grobner Basis 8th International Workshop on Satisfiability Checking and Symbolic Computation, July 28, 2023, Tromso, Norway, Collocated with ISSAC 2023 [email protected] (J. H. Davenport); [email protected] (M. England) [https://people.bath.ac.uk/masjhd](https://people.bath.ac.uk/masjhd) (J. H. Davenport); [https://matthewengland.coventry.domains](https://matthewengland.coventry.domains) (M. England) 0000-0002-3982-7545 (J. H. Davenport); 0000-0001-5729-3420 (M. England) ## 1 Introduction The resultant of two polynomials is a polynomial formed of their coefficients that is equal to zero if and only if the two original polynomials have a common root. Resultants are a widely used tool in symbolic computation, and in satisfiability checking over non-linear arithmetic. In particular, they are a key ingredient of Cylindrical Algebraic Decomposition (CAD) [1] which in its traditional projection and lifting form requires many iterated univariate resultant calculations. [1, pp. 177-178] suggests that iterated resultants, where there are "common ancestors" tend to factor. This was apparently responded to by van der Waerden in a letter [2], which alas we have not seen, but the letter's contents are taken up again in [3]. There are further developments in [4, 5]. [3] is based on the theory in [6], which [7] notes has been deleted from more recent editions (such as [8]). [4] is based on [7]. Despite this factorisation being observed since the inception of CAD, we are not aware of any optimisations in CAD implementations in regards to it. The purpose of this paper is to look at the connections of results on such factorisations with Cylindrical Algebraic Decomposition (CAD) [1] and also Cylindrical Algebraic Coverings (CAC) [9], a recent algorithm that was formed out of the SC\({}^{2}\) community via a reworking of CAD theory to better suit the SMT context. For CAD, we assume that we are constructing a CAD for a specific Boolean formula \(\Phi\), rather than just a set of polynomials. For CAC, we again assume we are looking for SAT/UNSAT for a specific Boolean formula \(\Phi\). ## 2 Theory We are grateful to [10] for a clear exposition of the results in [7], which we have borrowed. **Definition 1**.: _Given \(r\) homogeneous polynomials \(F_{1},\ldots,F_{r}\) in \(x_{1},\ldots,x_{n}\), with indeterminate coefficients comprising a set \(A\), an integral polynomial \(T\) in these indeterminates (that is, \(T\in\mathbf{Z}[A]\)) is called an inertia form for \(F_{1},\ldots,F_{r}\) if \(x_{i}^{\tau}T\in(F_{1},\ldots,F_{r})\), for suitable \(i\) and \(\tau\)._ Van der Waerden observes that the inertia forms comprise an ideal \(I\) of \(\mathbf{Z}[A]\), and he shows further that \(I\) is a prime ideal of this ring. 
It follows from these observations that we may take the ideal \(I\) of inertia forms to be a resultant system for the given \(F_{1},\ldots,F_{r}\), in the sense that for special values of the coefficients in \(K\), the vanishing of all elements of the resultant system is necessary and sufficient for there to exist a non-trivial solution to the system \(F_{1}=0,\ldots,F_{r}=0\) in some extension of \(K\). Now consider the case in which we have \(n\) homogeneous polynomials in the same number \(n\) of variables. Let \(F_{1},\ldots,F_{n}\) be \(n\) generic homogeneous forms in \(x_{1},\ldots,x_{n}\) of positive total degrees \(d_{1},\ldots,d_{n}\). That is, every possible coefficient of each \(F_{i}\) is a distinct indeterminate, and the set of all such indeterminate coefficients is denoted by \(A\). Let \(I\) denote the ideal of inertia forms for \(F_{1},\ldots,F_{n}\). Proofs of the following two propositions may be found in [11]. **Proposition 1**.: _[_10_, Proposition 5]_ _\(I\) is a nonzero principal ideal of \(\mathbf{Z}[A]\): \(I=(R)\), for some \(R\neq 0\). \(R\) is uniquely determined up to sign. We call \(R\) the (generic multipolynomial) resultant of \(F_{1},\ldots,F_{n}\)._ **Proposition 2**.: _[_10_, Proposition 6]_ _The vanishing of \(R\) for particular \(F_{1},\ldots,F_{n}\) with coefficients in a field \(K\) is necessary and sufficient for the existence of a non-trivial zero of the system \(F_{1}=0,\ldots,F_{n}=0\) in some extension of \(K\)._ The above considerations also lead to the notion of a resultant of \(n\) non-homogeneous polynomials in \(n-1\) variables. For a given non-homogeneous \(f(x_{1},\ldots,x_{n-1})\) over \(K\) of total degree \(d\), we may write \(f=H_{d}+H_{d-1}+\cdots+H_{0}\), where the \(H_{j}\) are homogeneous of degree \(j\). Then \(H_{d}\) is known as the leading form of \(f\). Recall that the homogenization \(F(x_{1},\ldots,x_{n})\) of \(f\) is defined by \(F=H_{d}+H_{d-1}x_{n}+\cdots+H_{0}x_{n}^{d}\). Let \(f_{1},\ldots,f_{n}\) be particular non-homogeneous polynomials in \(x_{1},\ldots,x_{n-1}\) over \(K\) of positive total degrees \(d_{i}\), and with leading forms \(H_{i,d_{i}}\). We set \(\operatorname{res}(f_{1},\ldots,f_{n})=\operatorname{res}(F_{1},\ldots,F_{n})\), where \(F_{i}\) is the homogenization of \(f_{i}\). Then we have the following (see proof in [11]). **Proposition 3**.: _[_10_, Proposition 7]_ _The vanishing of \(\operatorname{res}(f_{1},\ldots,f_{n})\) is necessary and sufficient for either the forms \(H_{i,d_{i}}\) to have a common nontrivial zero over an extension of \(K\), or the polynomials \(f_{i}\) to have a common zero over an extension of \(K\)._ Observe that the common zeros of the \(f_{i}\) correspond to the affine solutions of the system, whereas the nontrivial common zeros of the leading forms correspond to the projective solutions on the hyperplane at infinity. ## 3 Iterated Resultants: An Example Consider these polynomials: \[f=y^{2}+z^{2}+x+z-1,\] \[g=-x^{2}+y^{2}+z^{2}-1,\] \[h=x^{2}+y+z.\] ### First variable ordering Under variable ordering \(z\succ y\succ x\) we may calculate the iterated resultant: \[\mathrm{res}_{y}(\mathrm{res}_{z}(f,g),\mathrm{res}_{z}(f,h)) = 5x^{8}+16x^{7}+14x^{6}-2x^{5}-12x^{4}-8x^{3}+3x^{2}+2x \tag{1}\] \[= \underbrace{x\left(5x^{3}+6x^{2}-3x-2\right)}_{\text{spurious}}\underbrace{\left(x^{2}+x+1\right)\left(x^{2}+x-1\right)}_{\text{genuine}}.\] We define the meaning of the labels below.
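This computation is easy to reproduce; the following minimal SymPy sketch (our own illustration, not the authors' code) recovers the factorisation in (1) up to sign:

```python
from sympy import symbols, resultant, factor

x, y, z = symbols('x y z')
f = y**2 + z**2 + x + z - 1
g = -x**2 + y**2 + z**2 - 1
h = x**2 + y + z

# Eliminate z from (f, g) and (f, h), then eliminate y, as in Eq. (1).
r = resultant(resultant(f, g, z), resultant(f, h, z), y)
print(factor(r))
# -> x*(5*x**3 + 6*x**2 - 3*x - 2)*(x**2 + x + 1)*(x**2 + x - 1), up to sign
```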
An alternative computational path may have calculated similarly \[\mathrm{res}_{y}(\mathrm{res}_{z}(f,g),\mathrm{res}_{z}(g,h)) = 5x^{8}+16x^{7}+18x^{6}+8x^{5}-5x^{4}-8x^{3}-2x^{2}+1 \tag{2}\] \[= \underbrace{\left(x^{2}+x+1\right)\left(x^{2}+x-1\right)}_{\text{genuine}}\underbrace{\left(5x^{4}+6x^{3}+x^{2}-1\right)}_{\text{spurious}}.\] The final choice would have been to calculate \[\mathrm{res}_{y}(\mathrm{res}_{z}(f,h),\mathrm{res}_{z}(g,h)) = 2x^{4}+4x^{3}+2x^{2}-2 \tag{3}\] \[= 2\underbrace{\left(x^{2}+x+1\right)\left(x^{2}+x-1\right)}_{\text{genuine}}.\] Up to constants, (3) divides (2) and (1), but this need not happen in general. What does happen in general is that, if we consider a Gröbner basis, \[\mathtt{Basis}_{\mathtt{plex}}(f,g,h)=\left\{x^{4}+2x^{3}+x^{2}-1,\;y-x,\;x^{2}+x+z\right\}, \tag{4}\] then we see that the basis polynomial involving only \(x\) divides all three iterated resultants and in fact _is_ \(\mathrm{res}(f,g,h)\) in the sense of §2. In this example, it is also (3), but again this need not happen in general. The labels above are made with regard to the roots of the tagged resultant factors. The roots of the part we have labelled as "genuine" are \[\{x:\exists y\,\exists z\;f(x,y,z)=g(x,y,z)=h(x,y,z)=0\}, \tag{5}\] whereas the roots of the part we have labelled as "spurious" are \[\{x:\exists y\,(\exists z_{1}\;f(x,y,z_{1})=g(x,y,z_{1})=0\wedge\exists z_{2}\neq z_{1}\;f(x,y,z_{2})=h(x,y,z_{2})=0)\}\,. \tag{6}\] They are "spurious" in the sense that they do not go on to form true triple roots. Nevertheless, they are \(x\) values above which the topology changes, so they cannot always be discarded. Note that §2 implies that there is always a neat factorisation (over \(\mathbf{Z}\) if that was the original ring) into "genuine" versus "spurious". ### Second variable ordering What happens if we take the variables in a different order? In ordering \(x\succ y\succ z\) we have: \[\operatorname{res}_{y}(\operatorname{res}_{x}(f,g),\operatorname{res}_{x}(f,h))=(z^{2}-1)^{2}, \tag{7}\] \[\operatorname{res}_{y}(\operatorname{res}_{x}(f,g),\operatorname{res}_{x}(g,h))=(z^{2}-1)^{4}, \tag{8}\] \[\operatorname{res}_{y}(\operatorname{res}_{x}(h,g),\operatorname{res}_{x}(f,h))=(z^{2}-1)^{4}, \tag{9}\] and \[\mathtt{Basis}_{\mathtt{plex}(x,y,z)}(f,g,h)=\left\{z^{2}-1,\;y^{2}+y+z,\;x-y\right\}. \tag{10}\] I.e., no spurious roots were uncovered with this ordering. The question of CAD variable ordering is well studied and known to greatly affect the complexity of CAD both in practice [12] and in theory [13]. The introduction of spurious factors in some orderings but not others may be a significant contributing factor to this. ## 4 When Can Spurious Factors be Discarded? This section is not a complete classification of when spurious factors may be discarded, but it is a start. ### During CAD with multiple equational constraints McCallum [14] introduced the concept of multiple equational constraints, i.e. the case when \[\Phi\equiv f_{1}=0\wedge f_{2}=0\wedge\cdots\wedge f_{k}=0\wedge\overline{\Phi}(f_{k+1},\ldots,f_{m}). \tag{11}\] Here McCallum projects just \(\operatorname{res}_{x_{n}}(f_{1},f_{i})\) and \(\operatorname{disc}_{x_{n}}(f_{i})\) (as well as various coefficients, which do not contribute to the degree explosion). But since \(f_{1}=0\) and \(f_{2}=0\), we know that \(\operatorname{res}_{x_{n}}(f_{1},f_{2})=0\) also. Hence all the \(\operatorname{res}_{x_{n}}(f_{1},f_{i})\) are equational constraints in \(x_{1},\ldots,x_{n-1}\).
Thus the next projection is \[\operatorname{res}_{x_{n-1}}(\operatorname{res}_{x_{n}}(f_{1},f_{2}),\operatorname{res}_{x_{n}}(f_{1},f_{i})), \tag{12}\] together with \(\operatorname{res}_{x_{n-1}}(\operatorname{res}_{x_{n}}(f_{1},f_{2}),\operatorname{disc}_{x_{n}}(f_{i}))\) and numerous discriminants. In this case, we are only interested in the genuine zeros, as away from these the formula will be uniformly false and thus further refinement is unnecessary. So we can replace (12) by \(\operatorname{res}(f_{1},f_{2},f_{i})\). If the \(f_{i}\) have degree \(d\) in each \(x_{i}\), then the equivalent of (12) after \(k\) eliminations (i.e. eliminating all equational constraints) has degree \(O\big((2d)^{2^{k}}\big)\) (doubly exponential), whereas \(\operatorname{res}(f_{1},\ldots,f_{k})\) has degree \(O\left(d^{k}\right)\) (the Bézout bound). We note that [15] observed that the use of \(k\) equational constraints reduces the double exponent of \(m\) from \(n\) to \(n-k\): the present observations show that the same reduction applies to the double exponent of \(d\), at least _inasmuch as the nested resultants are concerned_. Though it would have to be proved, it seems very likely that the same conclusions would apply to equational constraints with the Lazard projection [16]. Here, there are challenges with "curtains" [17], which are the same as the regions of nullification in [18]. ### During CAC In CAC [9], each polynomial has (at least one) explicit reason for being where it is in the computation. For example, \(\operatorname{res}_{x_{n}}(f_{1},f_{2})\) might be in the computation because of a specific root \(\alpha\), where it is the case that, for \(x_{n-1}>\alpha\) (until the next point), the regions ruled out by \(f_{1}\) and \(f_{2}\) overlap, whereas for \(x_{n-1}<\alpha\) we need a further reason to rule out regions. The same might be true of \(\operatorname{res}_{x_{n}}(f_{1},f_{3})\), needed because of a specific root \(\beta\). Then (12) tracks where \(\alpha\) and \(\beta\) meet. Hence in this context we are interested only in genuine roots, and again we can replace (12) by \(\operatorname{res}(f_{1},f_{2},f_{i})\). We would need to work this through precisely with an implementation of CAC, which has yet to be done. ## 5 Detecting Spurious Factors In the examples above the factors were marked as "spurious" or "genuine" via manual analysis to see if the roots of the factors led to common zeros or not. Are there alternatives to such manual detection? We note that in some cases we can discard factors based on their degree, when this breaches the Bézout bound on the true multivariate resultant. I.e., if \(\operatorname{res}_{y}(\operatorname{res}_{z}(f,g),\operatorname{res}_{z}(f,h))\) has an irreducible factor of degree \(>d^{3}\), it _must_ be spurious and can be discarded. Since it is common for CAD implementations to factor polynomials, this is a cheap, if incomplete, test. **Example 1**.: _For example, the following three 3-variable polynomials were created randomly in Maple to have total degree 5:_ \[f=-34x^{2}z^{3}-20y^{5}+7x^{2}y^{2}-43y^{3}z+63x+16z,\] \[g=13xz^{4}-27z^{4}-21xy^{2}+30yz-42x-81,\] \[h=-65xz^{4}+13z^{5}+30x^{3}z+17xy^{3}+25yz+78.\] _Then \(\operatorname{res}_{y}(\operatorname{res}_{z}(f,g),\operatorname{res}_{z}(f,h))\) factors into a constant times two irreducible polynomials: one of degree \(378\) and the other of degree \(89\). With no further computation we can identify the first as spurious since its degree is greater than \(5^{3}=125\).
The second could be genuine, or be another spurious factor: we may check manually that it is indeed genuine._ In an example where we have multiple factors below the bound, we could work through them in turn, keeping count of the sum of degrees of genuine factors as we uncover them, in each case reducing the degree bound accordingly for any further factors to be investigated as genuine. ## 6 Conclusions There is much to be done to develop these ideas. 1. In §4.1, we have only looked at the resultants, not the discriminants, and indeed only at resultants of resultants. Undoubtedly something similar can be said about, for example, \[\operatorname{res}_{y}(\operatorname{res}_{z}(f,g),\operatorname{disc}_{z}(f)), \tag{13}\] but we have not explored this fully yet. We observe that, in the case of the polynomials from Example 1, (13) is a perfect square, and this seems to be true in general. We would need a complete solution for resultants of discriminants, discriminants of resultants and discriminants of discriminants in order to remove the caveat in italics towards the end of §4.1. 2. As stated in §4.2, the "genuine parts of resultants" idea would need to be worked through with an implementation of CAC. 3. If we look at (3), we see that this polynomial, which is the "genuine" part, factors further, and one factor has no real roots. Hence this factor can be discarded, though there is not much benefit, since we are at the univariate phase. Nevertheless, this shows that even the "genuine" part may still be overkill for _real_ geometry. Can we 1. detect that a factor of a resultant etc. has no real components; and 2. use this to further reduce the polynomials? Furthermore, 3. can we make any meaningful statement about the complexity implications of this? ## Acknowledgements Both authors are supported by the UK's EPSRC, via the DEWCAD Project, _Pushing Back the Doubly-Exponential Wall of Cylindrical Algebraic Decomposition_; grant numbers EP/T015713/1 and EP/T015748/1. We are also grateful to Gregory Sankaran and Ali Uncu for many useful conversations.
2309.11593
Sentence Attention Blocks for Answer Grounding
Answer grounding is the task of locating relevant visual evidence for the Visual Question Answering task. While a wide variety of attention methods have been introduced for this task, they suffer from the following three problems: designs that do not allow the usage of pre-trained networks and do not benefit from large data pre-training, custom designs that are not based on well-grounded previous designs, therefore limiting the learning power of the network, or complicated designs that make it challenging to re-implement or improve them. In this paper, we propose a novel architectural block, which we term Sentence Attention Block, to solve these problems. The proposed block re-calibrates channel-wise image feature-maps by explicitly modeling inter-dependencies between the image feature-maps and sentence embedding. We visually demonstrate how this block filters out irrelevant feature-maps channels based on sentence embedding. We start our design with a well-known attention method, and by making minor modifications, we improve the results to achieve state-of-the-art accuracy. The flexibility of our method makes it easy to use different pre-trained backbone networks, and its simplicity makes it easy to understand and be re-implemented. We demonstrate the effectiveness of our method on the TextVQA-X, VQS, VQA-X, and VizWiz-VQA-Grounding datasets. We perform multiple ablation studies to show the effectiveness of our design choices.
Seyedalireza Khoshsirat, Chandra Kambhamettu
2023-09-20T19:12:06Z
http://arxiv.org/abs/2309.11593v1
# Sentence Attention Blocks for Answer Grounding ###### Abstract Answer grounding is the task of locating relevant visual evidence for the Visual Question Answering task. While a wide variety of attention methods have been introduced for this task, they suffer from the following three problems: designs that do not allow the usage of pre-trained networks and do not benefit from large data pre-training, custom designs that are not based on well-grounded previous designs, therefore limiting the learning power of the network, or complicated designs that make it challenging to re-implement or improve them. In this paper, we propose a novel architectural block, which we term Sentence Attention Block, to solve these problems. The proposed block re-calibrates channel-wise image feature-maps by explicitly modeling inter-dependencies between the image feature-maps and sentence embedding. We visually demonstrate how this block filters out irrelevant feature-maps channels based on sentence embedding. We start our design with a well-known attention method, and by making minor modifications, we improve the results to achieve state-of-the-art accuracy. The flexibility of our method makes it easy to use different pre-trained backbone networks, and its simplicity makes it easy to understand and be re-implemented. We demonstrate the effectiveness of our method on the TextVQA-X, VQS, VQA-X, and VizWiz-VQA-Grounding datasets. We perform multiple ablation studies to show the effectiveness of our design choices. ## 1 Introduction ### Visual Question Answering Visual Question Answering (VQA) systems try to accurately answer natural language questions regarding an input image [1]. This topic aims at developing systems that can communicate effectively about an image in natural language and comprehend the contents of images similar to humans. ### Answer Grounding The answer grounding task is defined as detecting the pixels that can provide evidence for the answer to a given question regarding an image [3]. In other words, the task is to return the image regions used to arrive at the answer for a given visual question (question-image pair) with an answer. Although the VQA community has made significant progress, the best-performing systems are complicated black-box models, raising concerns about whether their answer reasoning is based on correct visual evidence. By understanding the reasoning mechanism of the model, we can evaluate the quality of answers, improve model performance, and provide explanations for end-users. To address this problem, answer grounding has been introduced into VQA systems, which requires the model to locate relevant image regions as well as answer visual questions. By providing answer groundings in response to visual questions, numerous applications become possible. First, they allow for evaluating whether a VQA model is reasoning correctly based on visual evidence. This is useful as an explanation as well as for assisting developers with model debugging. Second, answer grounding makes it possible to separate important regions from irrelevant background regions. Given that non-professional users can mistakenly have private information in the background of their pictures, answer grounding is a useful tool to obfuscate the background for privacy preserving. Third, suppose a service can magnify the relevant visual evidence. In that case, users will be able to discover the needed information in less time. This is useful in part because VQA answers can be insufficient sometimes. 
### Multimodal Deep Learning By definition, the VQA and answer grounding tasks are multimodal tasks since a method for these tasks should be able to process and correlate two different modalities. **Multimodal Joint-Embedding Models**: These models merge and learn representations of multiple modalities in a joint feature space. Joint embeddings underpin many cross-modal methods as they can bridge the gap between different modalities. In this joint space, the distance between points is equivalent to the semantic distance between their corresponding original inputs. **Multimodal Attention-based Models**: The main objective of the attention mechanism is to design systems that use local features of images or text, extract features from different regions, and assign priorities to them. The attention portion in an image determines salient regions. Then the language generation component focuses more on those salient regions for additional processing [35, 2, 12, 16, 26]. ### Attention Mechanism In deep learning, attention is a mechanism that mimics cognitive attention. The goal is to enhance the essential features of the input data and suppress the rest. Attention methods can be classified into two classes based on their inputs: self-attention and cross-attention. Self-attention (also known as intra-attention) quantifies the interdependence within the elements of a single input, whereas cross-attention (also known as inter-attention) finds the interdependence across two or more inputs [40, 18, 17]. Usually, cross-attention methods are used for multimodal inputs [43]. Cross-attention models first process individual modalities using modality-specific encoders; the encoded features are then fed into cross-attention modules. The Squeeze-and-Excitation method [13] is a channel-wise self-attention mechanism widely used in classification networks [37, 38]. It consists of a global average pooling of the input, followed by two linear layers with an interleaved non-linearity and a sigmoid function. Concretely, the output of this method is: \[\sigma(FC(RELU(FC(g\_avg\_pool(\mathbf{X})))))\times\mathbf{X} \tag{1}\] where \(\mathbf{X}\) denotes the feature-maps. This module aims to dynamically focus on more important channels, essentially increasing the importance of specific channels over others. This is accomplished by scaling the more important channels by a higher value. While many feature descriptors exist to reduce the spatial dimensions of the feature maps to a singular value, this module uses average pooling to keep the required computation low. The next part of the module maps the scaling weights using a Multi-Layer Perceptron (MLP) with a bottleneck structure. The resulting values are scaled to a range of 0-1 by passing them through a sigmoid activation layer. Afterward, using a common broadcasted element-wise multiplication, the output is applied directly to the input. This multiplication scales each input feature-map channel with its corresponding weight learned from the MLP in the Excitation module; a minimal code sketch of this block is given at the end of this section. The following summarizes our contributions: * We present a novel attention module based on the Squeeze-and-Excitation method for the answer grounding task. * We evaluate our method on common datasets and show that it achieves new state-of-the-art results. * We compare our design with the top-performing networks. * We perform multiple ablation studies to learn more about this network.
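As referenced above, the following is a minimal PyTorch sketch of the Squeeze-and-Excitation block in Eq. (1). It is our own illustration; the reduction ratio of 16 is a conventional assumption rather than a detail from this paper.

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Channel-wise self-attention of Eq. (1)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)  # bottleneck MLP
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):                         # x: (B, C, H, W)
        s = x.mean(dim=(2, 3))                    # squeeze: global average pooling
        s = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))
        return x * s[:, :, None, None]            # excite: per-channel rescaling
```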
## 2 Related Work ### Attention Methods for Answer Grounding Models based on different attention methods have been actively explored for the answer grounding task. MAC-Caps [39] is based on MAC [15] which has a recurrent reasoning architecture that performs \(T\) reasoning steps to answer the question. At each reasoning step, MAC uses an attention block to read from image features and writing memory. MAC-Caps adds capsule layers on top of the convolutional layers to obtain visual capsules from the feature-maps. Att-MCB [32] is designed to use a Knowledge Base. It is composed of nine modules, out of which two modules use an attention mechanism. One module uses an attention mechanism to generate visual explanations, and the other one to consume and point out relevant information from the Knowledge Base. A multi-grained attention method is introduced in [14]. It consists of two types of object-level groundings to explore fine-grained information, and a more sophisticated language model for better question representation. A Word-Label Matching attention vector that indicates the weight that should be given to each of the \(K\) objects in the image is computed in terms of the semantic similarity between the category labels of the objects and the words in the question. A word-object matching module is exploited to evaluate how likely a question word matches a visual object. A Sentence-Object attention module is used to capture the global semantics of the whole sentence to guide the focus on relevant objects. Multiple custom-designed attention modules are used in [46] to generate an attention map for the given image and question pair. Questions are tokenized and passed through an embedding layer followed by an LSTM layer to generate the question features. An Image Attention Supervision Module is used as an auxiliary classification task; that is, the ground-truth visual grounding labels are used to guide the model to focus on significant parts of the image to answer each question. A so-called "accumulated attention" mechanism is introduced in [5]. It consists of three attention modules, one for the input text, one for the objects, and one for the whole image. All three modules have a similar structure of a linear layer followed by a Softmax function. ### Multimodal Attention Methods Since its introduction, the attention mechanism has been widely adopted in the computer vision community due to its special capabilities for many multimodal applications. In [11], a multimodal attention model is proposed based on the encoder-decoder networks and RNNs for video captioning and sentence generation. In particular, the multimodal attention framework combined image, audio, and motion features by selecting each modality's most relevant context vector. In [44], utilizing stacked attention networks is suggested to look for image regions that correlate with a query answer and pinpoint representative features of a given question more accurately. More recently, in [9], a normalized variant of the self-attention mechanism, named normalized self-attention (NSA), is introduced. NSA seeks to encode and decode the image and caption features and normalize the distribution of internal activations during training. For the video question answering task, attention and memory mechanisms are used in [6] to efficiently learn visual features and the semantic correlations that precisely answer questions. 
### Answer Grounding Methods The answer grounding task has also been studied under different names, such as grounded VQA and visual explanation for VQA. The VinVL [45] method is based on OSCAR [21] which proposes a novel vision-language pre-training that incorporates image anchor points as input, taken from an object detection network. VinVL improves the visual representations by using a bigger visual model pre-trained on larger training corpora that combine multiple annotated object detection datasets. LXMERT [36] presents a cross-modality framework for learning the connections between vision and language. The authors build a large-scale Transformer model that consists of three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. This model is then pre-trained on a large-scale dataset of image-and-sentence pairs. MTXNet [30] is an end-to-end trainable architecture that generates multimodal explanations focusing on the text in the image. It contains a Graph Attention Network, and each object location and OCR token is treated as a node in the graph. It also contains a multimodal transformer that operates on three modalities: question words, visual objects, and OCR tokens. U-CAM [29] is a method that uses gradient-based uncertainty estimates to provide visual attention maps. The certainty measure is improved by adding the certainty gradients to the existing standard Cross-Entropy loss gradients for training the model during back-propagation. ## 3 Method Current attention methods have one or more of the following three problems: * The attention mechanism is not based on well-known and well-tested methods [32, 36, 29]. Well-established methods have been evaluated on many different tasks and datasets and, especially, they perform well on large-scale datasets. Their design details have been actively studied, and their advantages and disadvantages are visible to the users. * The complicated design makes it challenging to re-implement them on other platforms [32, 39, 30]. Recently, many applications have been made based on deep neural networks that are expected to be used by the end-users of different platforms. For example, answer grounding methods are helpful in assistive technologies for people with vision impairments [3]. In order to use such technologies offline, it is crucial to implement the method for each platform. * The custom design does not allow the usage of pre-trained models [39, 45, 28, 27]. Answer grounding datasets are less diverse and have fewer samples than the classification datasets. Using pre-trained models can improve the accuracy and make the model more robust to noise and unseen samples. Figure 1: Our proposed sentence attention block. \(C\) denotes the number of channels in image feature-maps, and \(E\) represents the embedding size of the sentence encoder network. Question and answer embeddings are concatenated into one vector. This paper proposes an attention block that avoids the mentioned problems. We design an attention block based on well-established and well-studied methods. It has a simple design to make it easy to re-implement and uses pre-trained models to achieve maximum accuracy. Our proposed method has three main parts: region proposal, sentence embedding, and attention fusion. ### Region Proposal This module's primary goal is to process the image and generate all the possible region candidates.
Unlike Region Proposal Networks [31], which generate anchor-based bounding boxes, this module generates dense regions (segmentations). A pre-trained classification network is used as the backbone. However, the final classifier is removed to get the multi-scale feature-maps. During the end-to-end training of the proposed method, the backbone network learns all the candidate regions. Later, these candidate regions are filtered and combined by the proposed Sentence Attention Blocks. In Section 5.3, we show that the backbone network works as expected by visualizing the feature-maps. ### Sentence Embedding Since the input questions and answers have variable lengths, they are processed by a sentence embedding network. A pre-trained sentence encoder is employed to encode the given question and answer to a vector space separately. The result is two vectors, one for the question and one for the answer. The two vectors are concatenated and passed to the Sentence Attention Blocks as one vector. ### Attention Fusion Given multiple potential regions for an image, the answer grounding task reduces to choosing and/or combining these regions based on the given question and/or answer. The multi-scale channel-wise image features from the backbone network contain candidate regions. Figure 2: The complete proposed network. The multi-scale channel-wise image features contain different regions that are filtered by our Sentence Attention Block (SAB) based on the question and answer embeddings. The goal is to filter and combine these regions based on the question and answer vector from the sentence embedding network. Towards this goal, we design an attention block based on the Squeeze-and-Excitation (SE) block [13]. We start with an SE block and make four modifications as follows: 1. The first modification is changing it from self-attention to cross-attention. In other words, unlike a self-attention block, the proposed block pays attention to an external source: the encoded question and answer. 2. The second modification is adding a normalization layer. Empirically, we found that adding a normalization layer improves the accuracy. Similar to [40], we use LayerNorm since it performs better on sequences. 3. The third modification is replacing the sigmoid function with softmax. Since the goal is to force the model to choose a few of the channel-wise image features, the sigmoid function is replaced with softmax. 4. The last modification is inverting the bottleneck. Instead of shrinking the embedding space, we expand the question and answer embeddings by a factor of two. In Section 5.1, the effect of each modification on the network accuracy is shown. Figure 1 depicts the final structure of the proposed block. Precisely, the proposed attention block is as follows: \[Softmax(FC(ReLU(LN(FC(\mathbf{QA})))))\times\mathbf{R}_{i} \tag{2}\] where \(FC\) is a single-layer perceptron, \(LN\) is LayerNorm, \(\mathbf{QA}\) is the joint question and answer embeddings, and \(\mathbf{R}_{i}\) is the image regions at scale \(i\). For each scale, a separate instance of this attention block is created so that each block learns special weights for its corresponding feature-map scale.
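A minimal PyTorch sketch of one such block, directly following Eq. (2), is given below. This is our own illustration; anything beyond the stated design choices (the expansion factor of two, LayerNorm, and softmax gating) is an assumption.

```python
import torch
import torch.nn as nn

class SentenceAttentionBlock(nn.Module):
    """Cross-attention gate of Eq. (2) for one feature-map scale."""

    def __init__(self, embed_dim, channels, expansion=2):
        super().__init__()
        self.fc1 = nn.Linear(embed_dim, expansion * embed_dim)   # inverted bottleneck
        self.norm = nn.LayerNorm(expansion * embed_dim)
        self.fc2 = nn.Linear(expansion * embed_dim, channels)

    def forward(self, regions, qa):
        # regions: (B, C, H, W) image features at one scale
        # qa:      (B, E) concatenated question and answer embeddings
        gate = self.fc2(torch.relu(self.norm(self.fc1(qa))))     # (B, C)
        gate = torch.softmax(gate, dim=-1)                       # pick few channels
        return regions * gate[:, :, None, None]                  # re-weight channels
```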
After computing the attention block for all the scales, all the outputs at different scales are combined into one output. Starting from the lowest resolution output, a \(1\times 1\) convolution is used to convert the number of channels to the number of channels of the next lowest resolution output. Then, bilinear interpolation is used for upsampling the output feature-map by a factor of 2. Finally, the output is added to the next lowest resolution output. This process is repeated until no other output is left. In the end, a \(1\times 1\) convolution is used to reduce the number of channels of the final output to two. Therefore, the final output has two channels representing answer grounding and its background. We use Argmax to output a binary image. The complete design of the proposed method is depicted in Figure 2. ### Comparison to Existing Methods We compare the design of our proposed method to the top-performing existing methods for the answer grounding task. MAC-Caps [39] is one of the best-performing networks that currently exist. This network is based on MAC [15] which employs attention blocks in a recurrent reasoning architecture. MAC-Caps adds capsule layers to process the feature-maps further. Like our method, MAC-Caps employs an attention mechanism. However, the recurrent design brings the usual problems of RNNs, such as gradient vanishing/exploding, longer training time, and not performing well on long sequences. Since our proposed method does not have a recurrent design, it avoids such problems. VinVL [45] is another top-performing network for the answer grounding task. This method is based on OSCAR [21] which incorporates image anchor points as inputs. VinVL improves the visual representations by using a bigger visual model pre-trained on larger training corpora that combine multiple annotated object detection datasets. This method relies on object detection networks to provide the anchor points, which adds an extra task for the network to learn. Also, using large generic corpora may not improve the accuracy for special datasets such as VizWiz-VQA-Grounding [3]. In contrast, our proposed method relies only on the feature-maps and does not define another task. Att-MCB [32] is the state-of-the-art network on the VQA-X dataset [27]. This method is designed to use a Knowledge Base. It comprises nine modules and four loss functions, making it a tedious design. This complicated design is resource-demanding and makes the training sensitive to hyper-parameters. Our proposed method comprises three main modules and one loss function. This minimalistic design helps the gradient flow during back-propagation, and the training is less sensitive to hyper-parameters. \begin{table} \begin{tabular}{l|c} Method & Mean IoU \\ \hline MTXNet [30] & 18.9 \\ LXMERT [36] & 24.9 \\ VinVL [45] & 25.6 \\ MAC-Caps [39] & 26.1 \\ \hline **Ours** & **29.0 (+2.9)** \\ \hline \end{tabular} \end{table} Table 1: Comparison results on the test set of the TextVQA-X dataset [30]. \begin{table} \begin{tabular}{l|c} Method & Mean IoU \\ \hline DeconvNet [7] & 29.8 \\ Mask Aggregation [7] & 32.6 \\ LXMERT [36] & 33.3 \\ VinVL [45] & 33.9 \\ MAC-Caps [39] & 34.3 \\ \hline **Ours** & **36.8 (+2.5)** \\ \hline \end{tabular} \end{table} Table 2: Comparison results on the test set of the VQS dataset [7]. ## 4 Experiments We train and test our proposed method on four answer grounding datasets. We use the same setup for all of the experiments. ### Setup We use a pre-trained EfficientNet [37] to extract the multi-scale image features, and we use a MiniLMv2 [42] sentence encoder network to encode the questions. In Section 5.2, we perform an ablation study on different backbones. We use BEiT [41] to generate answers for the datasets where the answers are not publicly available.
We use RMI [47] with its default settings along with cross-entropy. Therefore, the overall loss function to minimize is: \[\begin{split}\mathcal{L}_{all}(y,p)&=\frac{1}{B} \sum_{b=1}^{B}\lambda\mathcal{L}_{ce}(y^{(b)},p^{(b)})\\ &+(1-\lambda)\mathcal{L}_{rmi}(y^{(b)},p^{(b)})\end{split} \tag{3}\] where \(\lambda\in[0,1]\) is a weight factor, \(B\) denotes the number of samples in a mini-batch, \(\mathcal{L}_{ce}(y^{(b)},p^{(b)})\) is the standard cross entropy loss between the \(b\)-th sample and its corresponding prediction, respectively, and \(\mathcal{L}_{rmi}(y^{(b)},p^{(b)})\) is the RMI loss as follows: \[\mathcal{L}_{rmi}(\mathbf{Y},\mathbf{P})=\sum_{c=1}^{C}\frac{1}{2d}Trace(log( \mathbf{M})) \tag{4}\] and, \[\mathbf{M}=\mathbf{\Sigma_{Y}}-Cov(\mathbf{Y},\mathbf{P})(\mathbf{\Sigma_{P} ^{-1}})^{T}Cov(\mathbf{Y},\mathbf{P})^{T} \tag{5}\] where \(\mathbf{Y}=y^{(b)}\), \(\mathbf{P}=p^{(b)}\), \(\mathbf{M}\in\mathbb{R}^{d\times d}\), \(d\) denotes the number of pixels, \(C\) is the number of object classes, here \(C=2\) for the answer grounding and its background, \(\mathbf{\Sigma_{Y}}\) is the variance matrix of Y, and \(Cov(\mathbf{Y},\mathbf{P})\) is the covariance matrix of \(\mathbf{Y}\) and \(\mathbf{P}\). Similar to [47], we set \(\lambda=0.5\). We use AdamW optimizer [25] with a weight decay of 0.05 and batch size of 16. We apply the "polynomial" learning rate policy with a poly exponent of 0.9 and an initial learning rate of 0.0001. Synchronized batch normalization is used across multiple GPUs. We use RandAugment [4] for data augmentation. ### TextVQA-X The TextVQA-X dataset [30] is a subset of the TextVQA dataset [33] such that the answer groundings are generated through a manual segmentation annotation process. TextVQA-X consists of 10,379 training images and 3,354 test images. Since only a few models have been evaluated on this dataset, we train three more models for a fair comparison. Similar to [3], we train top-performing methods that their code is publicly available; specifically, LXMERT [36], VinVL [45], and MAC-Caps [39]. We use the same setup as in [3] to train the three models with the difference that we train the models on the TextVQA-X training set. Similar to the existing methods, we use the test set to evaluate our method and report the mean IoU measure. ### Vqs The VQS dataset [7] builds upon the images, instance segmentation masks, and bounding boxes in COCO [22] and the questions and answers in the VQA dataset [1]. VQS contains 26,995 training, 5,000 validation, and 5,873 test images. Similar to Section 4.2, we train top-performing methods that their code is publicly available; specifically, \begin{table} \begin{tabular}{c|c} Team & Mean IoU \\ \hline MindX & 66.9 \\ MGTV & 69.7 \\ hsslab\_inspur & 70.1 \\ GroundTruth & 70.3 \\ Aurora & 70.6 \\ \hline **SAB (Ours)** & **72.4 (+1.8)** \\ \hline \end{tabular} \end{table} Table 4: The top-performing teams on the VizWiz-VQA-Grounding Challenge 2022 leaderboard [3]. At the time of this writing, none of the methods have been published. \begin{table} \begin{tabular}{c|c c} Method & \multicolumn{2}{c}{Rank Correlation} \\ \hline PJ-X [27] & 0.342 \\ CCM [28] & 0.368 \\ U-CAM [29] & 0.372 \\ VinVL [45] & 0.373 \\ Att-MFH [32] & 0.376 \\ MAC-Caps [39] & 0.389 \\ Att-MCB [32] & 0.396 \\ \hline **Ours** & **0.421 (+0.025)** \\ \hline \end{tabular} \end{table} Table 3: Comparison results on the test set of the VQA-X dataset [27]. 
We use the same setup as in [3] to train the three models, with the difference that we train them on the VQS training set. Similar to the existing methods, we use the test set to evaluate our method and report the mean IoU measure.

### VQA-X

The VQA-X dataset [27] does not have visual annotations for its training set; it has visual annotations only for its validation and test sets. Therefore, we train our method on the training set of the VQS dataset [7], whose images are related to those in the VQA-X validation and test sets. The images of both datasets are from the COCO dataset [22]. The other methods are trained on different datasets, including Visual Genome [20] and VQA v2.0 [8]. The VQA-X dataset consists of 24,876 training, 1,431 validation, and 1,921 test images. Similar to the other methods, we use the test set of the VQA-X dataset to evaluate our method and report the Rank Correlation as in [27].

### VizWiz-VQA-Grounding

The VizWiz-VQA-Grounding dataset [3] is based on the VizWiz-VQA dataset [10], and the images and questions come from visually impaired people who shared them to request visual assistance in their day-to-day lives. VizWiz-VQA-Grounding contains a total of 9,998 VQAs divided into 6,494/1,131/2,373 VQAs for training, validation, and testing. The 2022 VizWiz-VQA-Grounding Challenge is built around this dataset. The challenge was held recently, and at the time of this writing, the participating methods have not been published. Therefore, we evaluate our method using the VizWiz-VQA-Grounding Challenge online leaderboard [3]. The leaderboard evaluates all the methods against the VizWiz-VQA-Grounding test set; the ground-truth labels for the test set have not been published.

### Results

We evaluate our proposed method on four answer grounding datasets. Tables 1, 2, and 3 compare the results of our proposed method with the existing methods on the TextVQA-X, VQS, and VQA-X datasets, respectively. Our method achieves new state-of-the-art accuracy on all three datasets; specifically, improvements of 2.9 mean IoU on TextVQA-X, 2.5 mean IoU on VQS, and 0.025 Rank Correlation on VQA-X over the previous best methods. Table 4 lists the top-performing teams on the VizWiz-VQA-Grounding Challenge 2022 leaderboard [3]. At the time of this writing, our method holds first place with considerably higher accuracy than the other methods. Furthermore, Table 5 compares our proposed method to existing methods regarding the number of parameters and inference time. The efficient design of our method makes it the fastest of the compared methods.

\begin{table} \begin{tabular}{l|c c} Method & \# of params & inference time \\ \hline LXMERT [36] & 153M & 56ms \\ VinVL [45] & 406M & 69ms \\ MAC-Caps [39] & 80M & 54ms \\ \hline Ours & 88M & **49ms** \\ \hline \end{tabular} \end{table} Table 5: Comparison of our proposed method to existing methods regarding the number of parameters and inference time.

## 5 Ablation Studies

In this section, we perform four ablation studies. All of the experiments are performed on the VizWiz-VQA-Grounding [3] validation set using the setup from Section 4.1, unless otherwise stated.

### Design Modifications

We started the design of our sentence attention block from the standard SE block and made four modifications, as explained in Section 3.3. In this section, we perform an ablation study on these modifications. Table 7 shows the impact of each modification on the final accuracy.
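As a reference point for this ablation, the sketch below shows one plausible PyTorch rendering of the block with all four modifications applied; the layer sizes, the LayerNorm placement, and the expansion factor are our assumptions, not the authors' published code. The modifications themselves are walked through next.

```python
import torch
import torch.nn as nn

class SentenceAttentionBlock(nn.Module):
    """SE-style channel gating driven by the sentence embedding (cross-attention)."""

    def __init__(self, channels: int, sent_dim: int, expansion: int = 2):
        super().__init__()
        self.norm = nn.LayerNorm(sent_dim)            # modification 2: normalization layer
        self.mlp = nn.Sequential(                     # modification 4: expanded embeddings
            nn.Linear(sent_dim, channels * expansion),
            nn.ReLU(inplace=True),
            nn.Linear(channels * expansion, channels),
        )

    def forward(self, feats: torch.Tensor, sent: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) image feature-maps; sent: (B, sent_dim) sentence embedding
        w = self.mlp(self.norm(sent))                 # modification 1: cross-attention
        w = torch.softmax(w, dim=1)                   # modification 3: softmax gate
        return feats * w.unsqueeze(-1).unsqueeze(-1)  # recalibrate channels
```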
These modifications were applied cumulatively. The first modification is changing from self-attention to cross-attention, which forms our baseline for this ablation study. The second modification is adding a normalization layer, which improves our baseline by 1.1 percent. The third modification is replacing the Sigmoid function with Softmax; this modification has the highest impact on our method, adding 1.5 percent. The fourth modification is the expansion of the embeddings, which brings a further 0.8 percent.

\begin{table} \begin{tabular}{l|c} Backbone & Mean IoU \\ \hline ResNet-50x3 [19] & 71.9 \\ Swin Transformer V2-B [24] & 72.8 \\ EfficientNetV2-L [38] & 73.1 \\ EfficientNet-B7 [37] & **73.5** \\ \hline RoBERTa [23] & 72.8 \\ MPNet [34] & 73.1 \\ MiniLMv2 [42] & **73.5** \\ \hline \end{tabular} \end{table} Table 6: The ablation study of the pre-trained backbone networks. **Top Section:** Using MiniLMv2 as a fixed sentence embedding network and replacing the image feature extraction networks. **Bottom Section:** Using a fixed image feature extraction network (EfficientNet-B7) and replacing the sentence embedding networks.

\begin{table} \begin{tabular}{l|c} Modification & Mean IoU \\ \hline Switching to cross-attention & 70.1 \\ Adding normalization layer & 71.2 (+1.1) \\ Switching to Softmax & 72.7 (+1.5) \\ Expansion of the embeddings & **73.5 (+0.8)** \\ \hline \end{tabular} \end{table} Table 7: The results of our ablation study of the design modifications. The modifications are applied cumulatively from top to bottom.

### Backbone Networks

The flexibility of our method enables us to use a variety of pre-trained networks. In this ablation study, we compare and analyze different pre-trained networks. To compare the image backbone networks, we keep the sentence embedding network fixed and change the image backbone network. Similarly, to compare the sentence embedding networks, we keep the image backbone network with the highest accuracy and change the sentence embedding network. We use well-known, well-performing networks whose pre-trained weights are publicly available. Specifically, we use EfficientNet [37], EfficientNetV2 [38], Swin Transformer V2 [24], and ResNet-50x3 [19] for image feature extraction, and MiniLMv2 [42], MPNet [34], and RoBERTa [23] for sentence embedding. Table 6 shows the results of this ablation study. The highest accuracy is achieved by using EfficientNet-B7 and MiniLMv2. Since EfficientNetV2-L reduces the input resolution more aggressively than EfficientNet-B7, it performs worse. Transformer networks usually perform well on large datasets; since VizWiz-VQA-Grounding is not a large dataset, Swin Transformer V2 does not achieve the highest accuracy. ResNet-50x3 is pre-trained on ImageNet-21k, but it achieves the lowest accuracy in this experiment.

### Region Proposals

Feature-map visualization provides insight into the internal representations of each layer in a network for a specific input image. This ablation study aims to show that the image backbone network has learned the potential regions of an image, which are then processed by the proposed sentence attention block. To this aim, we visualize the final output feature-maps of the image backbone network before feeding them to an attention block. We use the image backbone of our network fully trained on the VizWiz dataset. Figure 3 (a) shows a few manually-chosen feature-maps for a sample image.
These feature-maps show that, after training, the backbone network has learned the potentially important regions of images. Given all the candidate regions, the task of the proposed attention block is to filter out the regions irrelevant to the question and answer. Figure 3 (b) shows the final output of the complete network for the same image in (a), using different questions and answers (for brevity, the answers are not shown). This figure shows that our proposed block is able to filter the channel-wise feature-maps based on the input questions and answers.

Figure 3: **(a)** Eight manually-selected sample feature-maps for an image. These feature-maps are the output of the feature extraction backbone network, before feeding them to the sentence attention block. The network has learned to output potential regions in different channels. **(b)** The final output of our complete network for the same image in (a), given different questions and answers (for brevity, the answers are not shown). This figure shows that our proposed attention block can filter the channel-wise feature-maps based on the input questions and answers.

### Multi-scale Fusions

There is a trade-off in the answer grounding task: some images are best handled at a lower inference resolution, while others are better handled at a higher inference resolution. Fine details, such as the edges of objects or thin structures, are often better predicted with scaled-up images. At the same time, the prediction of large structures, which requires more global context, is often done better on scaled-down images, because the network's receptive field can observe more of the necessary context. This ablation study shows the impact of adding the extra scales on the accuracy. Table 8 shows the results of this study. From top to bottom, we start with only one scale and one attention block and gradually add higher-resolution feature-maps. The lowest resolution is the final output feature-map of the backbone network.

\begin{table} \begin{tabular}{l|l} Scale & Mean IoU \\ \hline 1/32 & 71.4 \\ 1/32 + 1/16 & 72.5 (+1.1) \\ 1/32 + 1/16 + 1/8 & 73.1 (+0.6) \\ 1/32 + 1/16 + 1/8 + 1/4 & **73.5 (+0.4)** \\ \hline \end{tabular} \end{table} Table 8: The gradual effect of multi-scale fusion. From top to bottom, the first row uses only the lowest resolution feature-map, and each row adds higher resolution feature-maps. Each scale is the ratio of the feature-map size to the input image.

## 6 Conclusions

In this study, we introduced a new architectural block, termed the Sentence Attention Block, for the answer grounding task. The block recalibrates channel-wise image feature-maps by modeling inter-dependencies between the image feature-maps and the sentence embedding. We began the design from a recognized attention method and, with minor adjustments, achieved enhanced results, reaching state-of-the-art accuracy. We showed the block's ability to filter out irrelevant feature-map channels based on the sentence embedding. The method's flexibility allows for various pre-trained backbone networks, and its straightforward design facilitates comprehension and re-implementation. We demonstrated the method's efficacy on the TextVQA-X, VQS, VQA-X, and VizWiz-VQA-Grounding datasets, and several ablation studies highlighted the validity of our design decisions.
2310.20605
Learning Lyapunov-Stable Polynomial Dynamical Systems through Imitation
Imitation learning is a paradigm to address complex motion planning problems by learning a policy to imitate an expert's behavior. However, relying solely on the expert's data might lead to unsafe actions when the robot deviates from the demonstrated trajectories. Stability guarantees have previously been provided utilizing nonlinear dynamical systems, acting as high-level motion planners, in conjunction with the Lyapunov stability theorem. Yet, these methods are prone to inaccurate policies, high computational cost, sample inefficiency, or quasi stability when replicating complex and highly nonlinear trajectories. To mitigate this problem, we present an approach for learning a globally stable nonlinear dynamical system as a motion planning policy. We model the nonlinear dynamical system as a parametric polynomial and learn the polynomial's coefficients jointly with a Lyapunov candidate. To showcase its success, we compare our method against the state of the art in simulation and conduct real-world experiments with the Kinova Gen3 Lite manipulator arm. Our experiments demonstrate the sample efficiency and reproduction accuracy of our method for various expert trajectories, while remaining stable in the face of perturbations.
Amin Abyaneh, Hsiu-Chin Lin
2023-10-31T16:39:58Z
http://arxiv.org/abs/2310.20605v3
# Learning Lyapunov-Stable Polynomial Dynamical Systems Through Imitation ###### Abstract Imitation learning is a paradigm to address complex motion planning problems by learning a policy to imitate an expert's behavior. However, relying solely on the expert's data might lead to unsafe actions when the robot deviates from the demonstrated trajectories. Stability guarantees have previously been provided utilizing nonlinear dynamical systems, acting as high-level motion planners, in conjunction with the Lyapunov stability theorem. Yet, these methods are prone to inaccurate policies, high computational cost, sample inefficiency, or quasi stability when replicating complex and highly nonlinear trajectories. To mitigate this problem, we present an approach for learning a globally stable nonlinear dynamical system as a motion planning policy. We model the nonlinear dynamical system as a parametric polynomial and learn the polynomial's coefficients jointly with a Lyapunov candidate. To showcase its success, we compare our method against the state of the art in simulation and conduct real-world experiments with the Kinova Gen3 Lite manipulator arm. Our experiments demonstrate the sample efficiency and reproduction accuracy of our method for various expert trajectories, while remaining stable in the face of perturbations. Imitation learning, Safe learning, Motion planning, Dynamical system, Semidefinite programming, Robotic manipulation

## 1 Introduction

Motion planning for robotic systems is generally regarded as a decomposition of a desired motion into a series of configurations that potentially satisfy a set of constraints [1]. Imitation learning tackles motion planning by imitating an expert's behavior to learn a planning policy [2]. To this day, only a handful of imitation learning methods provide mathematical stability guarantees for their resulting policy. Stability is a critical factor when deploying imitation policies in environments exposed to external perturbations. Therefore, unpredictable environments require a policy that responds reasonably in unexplored regions of the state space, away from the original demonstrations. Researchers have turned to autonomous dynamical systems (DS) as a means to learn stable motion planning policies [3; 4; 5]. Essentially, a parametric time-invariant DS is optimized to provide an action (velocity) given the current state (position), while adhering to constraints that attain global Lyapunov stability. This approach leads to safety and predictability of planned trajectories, even in areas of the state space without expert demonstrations. However, previous work is mostly confined to basic Lyapunov functions that adversely impact the reproduction accuracy and require a sufficiently large set of demonstrations. Others have proposed approaches focused on diffeomorphisms and Riemannian geometry [6; 7; 8] and contraction theory [9], which are prone to quasi-stability, increased computational time, or a restricted hypothesis class. We propose a method to simultaneously learn a polynomial dynamical system (PLYDS) and a polynomial Lyapunov candidate to generate globally stable imitation policies. Polynomials, depending on the degree, possess the expressive power to approximate highly nonlinear systems, and polynomial regression can empirically compete with neural networks on challenging tasks [10; 11]. Unlike with most neural policies, global stability can be expressed naturally with polynomials.
Polynomials also enable us to utilize efficient semi-definite programming [12; 13] and sum-of-squares (SOS) optimization techniques [14; 15], and offer adaptability to the expert's demonstrations. Our main contribution is twofold. We propose a polynomial representation of the planning policy and the Lyapunov candidate function, coupled with concrete mathematical stability certification for precise and safe replication of the expert's demonstrations, as depicted in Figure 1. Then, we define a regularized semi-definite optimization problem to jointly learn the DS and the Lyapunov candidate with higher flexibility and precision. We compare the reproduction accuracy of PLYDS with alternatives in the literature and evaluate the performance both in simulation and on real robotic systems.

## 2 Background and Notation

Consider a system operating in a state-space \(\mathcal{X}\subset\mathbb{R}^{n}\), e.g., a robot in its task- or configuration-space. The system can execute actions in \(\mathcal{A}\subset\mathbb{R}^{n}\), for instance, velocity or torque commands, leading to state evolution. We denote the state variable with \(\mathbf{x}\triangleq[x_{1}\ x_{2}\ \dots\ x_{n}]^{T}\in\mathcal{X}\), and consider the action variable to be the state's derivative \(\dot{\mathbf{x}}\in\mathcal{A}\). Within this space, our goal is to learn an imitation policy through a dataset of the expert's state-action pairs, referred to as trajectories. Let \(N_{d}\in\mathbb{N}\) be the number of trajectories demonstrated by the expert. Each trajectory contains \(N_{s}\in\mathbb{N}\) state-action pairs. The dataset of expert trajectories stacks all state-action pairs, defined as: \[\mathbf{D}\triangleq\Big{\{}\big{(}\mathbf{x}^{d}(s),\ \dot{\mathbf{x}}^{d}(s)\big{)}\ \big{|}\ d\in\{1,\dots,N_{d}\},\ s\in\{1,\dots,N_{s}\}\Big{\}}, \tag{1}\] where \((\mathbf{x}^{d}(s),\ \dot{\mathbf{x}}^{d}(s))\) is the dataset entry corresponding to the \(s\)-th sample of the \(d\)-th demonstrated trajectory. The dataset \(\mathbf{D}\) holds \(N_{t}=N_{d}N_{s}\) samples. We assume that all trajectories contain the same number of samples (\(N_{s}\)), share a common target (\(\mathbf{x}^{*}\in\mathcal{X}\)), and have zero velocity at the target, i.e., \(\mathbf{x}^{d}(N_{s})=\mathbf{x}^{*}\) and \(\dot{\mathbf{x}}^{d}(N_{s})=\mathbf{0}\) for all trajectories \(d\in\{1,\dots,N_{d}\}\). **Definition 2.1**.: _(Dynamical Systems). The mapping between the state and the action in each sample can be modelled with a time-invariant autonomous dynamical system (DS), denoted by:_ \[\dot{\mathbf{x}}=f(\mathbf{x})+\epsilon=\hat{f}(\mathbf{x}),\ \ f,\hat{f}:\mathcal{X}\rightarrow\mathcal{A}. \tag{2}\] In Equation (2), \(f\) is an ordinary differential equation for the true underlying DS. The term \(\epsilon\in\mathbb{R}^{n}\) captures the measurement and recording noise of the expert's demonstrations. We assume that \(\epsilon\) is embedded in the estimated DS, \(\hat{f}\), and eliminate the need for modeling its distribution. Following [3], we aim at learning a noise-free estimation of \(f(\mathbf{x})\), denoted by \(\hat{f}(\mathbf{x})\). One can view \(\hat{f}(\mathbf{x})\) in Equation (2) as a _policy_ that maps states to actions for reproducing the demonstrated trajectories in the state-space. For instance, when the robot is located at \(\mathbf{x}_{0}\in\mathcal{X}\), the policy yields an action \(\dot{\mathbf{x}}_{0}=\hat{f}(\mathbf{x}_{0})\), which can be passed to the robot's velocity controller.
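To make this concrete, a rollout under a learned policy can be sketched with simple forward-Euler integration; the step size and horizon below are illustrative choices of ours, not values from the paper:

```python
import numpy as np

def rollout(policy, x0, dt=0.01, steps=2000):
    """Integrate x_dot = policy(x) from x0 with explicit Euler steps."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(xs[-1] + dt * policy(xs[-1]))  # velocity command at the current state
    return np.stack(xs)
```

At each step, the computed velocity is exactly what would be forwarded to the robot's low-level controller.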
The estimated DS in Equation (2), \(\hat{f}(\mathbf{x})\), is globally asymptotically stable (GAS) around an equilibrium point \(\mathbf{x}^{e}\) if and only if, for every initial state, \(\mathbf{x}\rightarrow\mathbf{x}^{e}\) as the system evolves and time goes to infinity [16]. A popular tool to study the GAS property of a DS is the Lyapunov stability theorem. According to this theorem, a DS exhibits GAS if there exists a positive-definite function \(v:\mathcal{X}\to\mathbb{R}\), known as a Lyapunov potential function (LPF), such that \(\dot{v}(\mathbf{x})<0\) for all \(\mathbf{x}\neq\mathbf{x}^{e}\) and \(\dot{v}(\mathbf{x}^{e})=0\). To ensure GAS for \(\hat{f}(\mathbf{x})\), we simultaneously learn the policy, \(\hat{f}\), and the LPF, \(v\).

Figure 1: Overview of the stable policy learning framework. Policy learning (left) optimizes a stable polynomial DS from expert's demonstration data. This policy is then deployed (right) to plan globally stable and predictable trajectories in the entire state space.

## 3 Related work

Extensive research has been conducted on imitation learning and its applications in robotic motion planning for a variety of tasks. Existing efforts can be divided into the following predominant research tracks. **Dynamical systems for motion planning.** Dynamical systems have proved effective for autonomous motion planning problems by providing a time-invariant policy [17]. Traditional methods of encoding trajectories are based on spline decomposition [18], Gaussian process regression [19], or unstable dynamical systems [20; 21]; they either lack robustness because of time-variance or fail to provide GAS. SEDS [3] is the first attempt to learn stable planning policies. However, its performance declines when applied to highly nonlinear expert trajectories. Most notably, it struggles with trajectories where the distance to the target is not monotonically decreasing. The intrinsic limitation of SEDS comes from the choice of a simple Lyapunov function. Follow-up research introduces more complex Lyapunov candidates to stably mimic nonlinear trajectories [4; 22], but remains restricted in representing the Lyapunov candidate. Others have tried to tackle the limitations of SEDS through diffeomorphic transformations and Riemannian geometry [6; 8; 7], which yield quasi-stable planners for some trajectories, and contraction theory [9], which restricts the class of metrics to make the optimization tractable. Lastly, most improvements to the original SEDS still use the Gaussian mixture model formulation, which is vulnerable in the presence of limited expert demonstrations. **Imitation learning.** Recent imitation learning developments can be applied to motion planning tasks with minimal modifications, since motion planning can be achieved by finding a (not necessarily stable) policy in the robot's task-space from the expert's behavior. For instance, GAIL [23] introduces an adversarial imitation learning approach that directly optimizes the expert's policy, but it requires a large set of expert data (low sample efficiency) and extensive training iterations. The growing interest in neural policies has also led to the development of end-to-end autonomous driving [24] and behavioral cloning [25; 26; 27] methods. Nevertheless, they generally lack GAS, and it is unclear whether the robot can recover from perturbations. The same drawbacks exist for apprenticeship learning approaches, such as Abbeel and Ng [28], and inverse reinforcement learning, such as Ziebart et al.
[29], and the computational demand is even higher for the latter. **Stability in neural dynamical systems.** Methods such as [30; 31] represent the dynamics with a neural network and propose joint training of the dynamics and a Lyapunov function to guarantee stability. Though theoretically sound, these methods have only been applied to rather simple settings and require large demonstration sets. Neural Lyapunov methods [32; 33; 34] promise a data-driven and potentially stable approach to controlling and modeling nonlinear dynamics, but lack global stability. Methods such as [35] are also not stable-by-design, and the dynamical system lacks autonomy.

## 4 Methodology

We instantiate the policy and the corresponding LPF candidate, \(\hat{f}\) and \(v\), with two polynomials in Section 4.1 and Section 4.2, respectively. This allows us to accurately imitate an expert's behavior while providing formal GAS guarantees. Subsequently, we formulate a tractable optimization problem for jointly learning the policy and the LPF in Section 4.3.

### Dynamical system policy formulation

We need to approximate the unknown underlying DS in Equation (2) to discover the mapping between states and actions from the expert's behavior. To this end, we opt to model the policy with a parametric polynomial. The representative power of polynomials was originally established through the Weierstrass approximation theorem, which states that every continuous function defined on a closed interval can be approximated to the desired precision by a polynomial. This idea is fortified by recent studies, such as [10; 11], which compare polynomials to neural networks on a variety of tasks. **Definition 4.1**.: _(Polynomial Dynamical Systems). A Polynomial Dynamical System (PLYDS) is a polynomial approximation of the policy in Equation (2), and is expressed as,_ \[\dot{\mathbf{x}}=\hat{f}(\mathbf{x};\ \mathbf{P})\triangleq\begin{bmatrix}\mathbf{b}_{\mathbf{x},\alpha}^{T}\mathbf{P}_{1}\mathbf{b}_{\mathbf{x},\alpha}&\mathbf{b}_{\mathbf{x},\alpha}^{T}\mathbf{P}_{2}\mathbf{b}_{\mathbf{x},\alpha}&\ldots&\mathbf{b}_{\mathbf{x},\alpha}^{T}\mathbf{P}_{n}\mathbf{b}_{\mathbf{x},\alpha}\end{bmatrix}^{T}, \tag{3}\] _where \(\mathbf{b}_{\mathbf{x},\alpha}\triangleq[1\ \ (\mathbf{x}^{T})^{\circ 1}\ \ (\mathbf{x}^{T})^{\circ 2}\ldots(\mathbf{x}^{T})^{\circ\alpha}]^{T}\) is the polynomial basis of degree \(\alpha\in\mathbb{N}\), and \((\mathbf{x}^{T})^{\circ k}\) is the element-wise \(k\)-th power of \(\mathbf{x}^{T}\). Every row \(i\) of \(\hat{f}\) is a polynomial of degree \(2\alpha\), \(\hat{f}_{i}(\mathbf{x};\ \mathbf{P}_{i})=\mathbf{b}_{\mathbf{x},\alpha}^{T}\mathbf{P}_{i}\mathbf{b}_{\mathbf{x},\alpha}\), where \(\mathbf{P}_{i}\in\mathbb{S}^{\alpha n+1}\) and \(\mathbb{S}^{k}\triangleq\{S\in\mathbb{R}^{k\times k}|S^{T}=S\}\). The matrix \(\mathbf{P}\in\mathbb{S}^{\alpha n^{2}+n}\) encapsulates the block-diagonal form of all \(\mathbf{P}_{i}\) matrices._ Below, we present an example to show how PLYDS, as defined in Definition 4.1, captures nonlinear time-invariant policies. One can further complicate the policy by increasing \(\alpha\), which in turn produces a larger basis vector and a more flexible polynomial.
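Definition 4.1 also translates directly into code. A minimal numpy sketch (for illustration only; the helper names are ours) evaluates the basis and the per-dimension quadratic forms:

```python
import numpy as np

def poly_basis(x, alpha):
    """b_{x,alpha} = [1, x^1, x^2, ..., x^alpha] (element-wise powers); length alpha*n + 1."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.concatenate([[1.0], *(x**k for k in range(1, alpha + 1))])

def plyds(x, P_blocks, alpha):
    """Evaluate x_dot_i = b^T P_i b for each state dimension i (Definition 4.1)."""
    b = poly_basis(x, alpha)
    return np.array([b @ P_i @ b for P_i in P_blocks])
```

For \(n=2\) and \(\alpha=6\), the setting used later in Section 5, each \(\mathbf{P}_{i}\) is a \(13\times 13\) symmetric matrix.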
**Example 4.1.1**.: _A second-order polynomial representation of a one-dimensional DS is:_ \[\dot{\mathbf{x}}=\hat{f}(\mathbf{x};\ \mathbf{P})=\begin{bmatrix}\mathbf{b}_{\mathbf{x},\alpha}^{T}\ \mathbf{P}_{1}\ \mathbf{b}_{\mathbf{x},\alpha}\end{bmatrix}=\begin{bmatrix}1&x\end{bmatrix}\begin{bmatrix}p_{00}&p_{01}\\ p_{01}&p_{11}\end{bmatrix}\begin{bmatrix}1\\ x\end{bmatrix}=p_{11}x^{2}+2p_{01}x+p_{00},\] _where \(\alpha=1,\mathbf{b}_{\mathbf{x},\alpha}=[1\ x]^{T}\). Note how \(\mathbf{P}\) can be taken symmetric without loss of generality._

### Global stability guarantees for polynomial dynamical systems

As explained in Section 4.1, a polynomial policy allows for accurately imitating the expert's demonstrations. Yet, there is no formal GAS guarantee that the robot will ultimately converge to the target in the face of perturbations deflecting it from the expert's trajectories. Owing to the Lyapunov stability theorem, finding an LPF that meets the criteria in Section 2 ensures the desired stability [36]. The major challenge lies in learning an LPF, \(v\), which is a positive-definite function whose time derivative along system trajectories is negative. We tackle this by confining ourselves to the class of polynomial LPF candidates. **Definition 4.2**.: _(Polynomial Lyapunov Candidate). A multidimensional polynomial LPF is given by,_ \[v(\mathbf{x};\ \mathbf{Q})\triangleq\begin{bmatrix}\mathbf{b}_{\mathbf{x},\beta}^{T}\mathbf{Q}_{1}\mathbf{b}_{\mathbf{x},\beta}&\mathbf{b}_{\mathbf{x},\beta}^{T}\mathbf{Q}_{2}\mathbf{b}_{\mathbf{x},\beta}&\ldots&\mathbf{b}_{\mathbf{x},\beta}^{T}\mathbf{Q}_{n}\mathbf{b}_{\mathbf{x},\beta}\end{bmatrix}^{T},\ v:\mathcal{X}\rightarrow\mathbb{R}^{n}, \tag{4}\] _where \(\beta\in\mathbb{N}\) is the polynomial basis degree. Each row is defined by \(v_{i}(\mathbf{x};\ \mathbf{Q}_{i})=\mathbf{b}_{\mathbf{x},\beta}^{T}\mathbf{Q}_{i}\mathbf{b}_{\mathbf{x},\beta},v_{i}:\mathcal{X}\rightarrow\mathbb{R}\), and can be viewed as a scalar Lyapunov function. The parameters matrix, \(\mathbf{Q}\in\mathbb{S}^{\beta n^{2}+n}\), is a block-diagonal of all \(\mathbf{Q}_{i}\in\mathbb{S}^{\beta n+1}\) matrices._ Definition 4.2 introduces a non-conventional LPF candidate. Rather than considering a single LPF, we designate a distinct polynomial LPF for each dimension of the state space and stack them into \(v(\mathbf{x};\ \mathbf{Q})\). This characterization, known as a vector Lyapunov function [37], is less restrictive and enables the policy and LPF to be learned more independently for each dimension of the state space. We highlight that the GAS of the policy in each dimension, \(\hat{f}_{i}(\mathbf{x};\ \mathbf{P}_{i})\), implies the GAS of the entire policy, \(\hat{f}(\mathbf{x};\ \mathbf{P})\). Proposition 4.3 establishes a link between the policy stability in each row and the global stability of the multidimensional policy. **Proposition 4.3**.: _Assume each pair \((\hat{f}_{i}(\mathbf{x};\ \mathbf{P}_{i}),\ v_{i}(\mathbf{x};\ \mathbf{Q}_{i}))\) individually satisfies the GAS conditions. Then the sum \(\hat{v}=\sum_{i=1}^{n}v_{i}(\mathbf{x};\ \mathbf{Q}_{i})\) yields a valid standard Lyapunov function for \(\hat{f}(\mathbf{x};\ \mathbf{P})\), proving that the policy satisfies the GAS conditions. The proof is given in Appendix A.1._ The formulation of the policy and the LPF as multidimensional polynomials empowers us to leverage tools from sum-of-squares (SOS) programming [15; 38]. The SOS approach boils the Lyapunov GAS conditions down to verifying the positive-definiteness of a set of specified matrices.
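Numerically, such definiteness conditions are straightforward to check once the matrices are assembled; a minimal sketch (the tolerance value is our choice):

```python
import numpy as np

def is_psd(Q, tol=1e-9):
    """Q >= 0: the smallest eigenvalue of the (symmetric) matrix is non-negative."""
    return np.linalg.eigvalsh(Q).min() >= -tol

def is_nd(G, tol=1e-9):
    """G < 0: the largest eigenvalue is strictly negative (with a small margin)."""
    return np.linalg.eigvalsh(G).max() < -tol
```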
The next two lemmas illustrate the SOS formulation of the Lyapunov stability conditions. **Lemma 4.4**.: _The first Lyapunov stability criterion, \(v_{i}(\mathbf{x};\ \mathbf{Q}_{i})\succeq 0\), is satisfied for each \(i\in\{1,\ldots,n\}\) if \(\mathbf{Q}_{i}\succeq 0\) and \(\mathbf{Q}_{i}\in\mathbb{S}^{\beta n+1}\). The proof is outlined in Appendix A.2._ **Lemma 4.5**.: _The second Lyapunov criterion, \(\frac{\partial}{\partial t}v_{i}(\mathbf{x};\ \mathbf{Q}_{i})\prec 0\), is fulfilled for each \(i\in\{1,\ldots,n\}\) if there exists a symmetric matrix \(\mathbf{G}_{i}\prec 0\) and \(\mathbf{G}_{i}\in\mathbb{S}^{(\alpha+\beta)n+1}\) such that:_ \[\frac{\partial}{\partial t}v_{i}(\mathbf{x};\ \mathbf{Q}_{i})=\frac{\partial v_{i}(\mathbf{x};\ \mathbf{Q}_{i})}{\partial\mathbf{x}}\frac{\partial\mathbf{x}}{\partial t}=\frac{\partial v_{i}(\mathbf{x};\ \mathbf{Q}_{i})}{\partial\mathbf{x}}\hat{f}(\mathbf{x};\ \mathbf{P})=\mathbf{b}_{\mathbf{x},\alpha+\beta}^{T}\mathbf{G}_{i}\mathbf{b}_{\mathbf{x},\alpha+\beta}, \tag{5}\] _where \(\alpha+\beta\) is the basis degree. The matrix \(\mathbf{G}_{i}\) is acquired by polynomial coefficient matching, and depends on \(\mathbf{P}\) and \(\mathbf{Q}_{i}\). We summarize this dependence with the function \(\mathcal{G}(\mathbf{P},\mathbf{Q})=\mathbf{G}\), where \(\mathbf{G}\) symbolizes the block-diagonal form of all \(\mathbf{G}_{i}\) matrices. The proof is outlined in Appendix A.3._ Finally, with the necessary tools at our disposal, we can establish the connection between the global stability of the policy and finding SOS polynomials in Theorem 4.6. This theorem serves as the fundamental basis for the subsequent policy optimization process. **Theorem 4.6**.: _A polynomial DS policy, \(\hat{f}(\mathbf{x};\ \mathbf{P})\), is GAS if the following conditions are satisfied:_ \[(a)\ \mathbf{Q}\succeq 0,\ \ \ \ (b)\ \mathbf{G}\prec 0,\ \ \ \ (c)\ \mathcal{G}(\mathbf{P},\mathbf{Q})=\mathbf{G}. \tag{6}\] _The proof is straightforward and is sketched in Appendix A.4._

### Joint optimization problem

At this stage, we have established polynomial representations for both the policy and the LPF, along with a firm connection that confirms global stability. Now, we develop an objective function using the mean-squared error (MSE) cost with elastic net regularization [39]. The MSE is calculated between the policy output and the expert's actions across the demonstrated trajectories, and it depends solely on the policy parameters. Essentially, this problem entails regularized polynomial regression to minimize the imitation MSE on the expert's demonstrations, subject to the existence of an LPF that satisfies the Lyapunov conditions. The optimization problem is framed as: \[\min_{\mathbf{Q},\mathbf{G},\mathbf{P}} J(\mathbf{P})=\frac{1}{2N_{t}}\sum_{d=1}^{N_{d}}\sum_{s=1}^{N_{s}}(\hat{f}(\mathbf{x}^{d}(s);\ \mathbf{P})-\dot{\mathbf{x}}^{d}(s))^{2}+\lambda_{1}\|\mathbf{P}\|_{1}+\lambda_{2}\|\mathbf{P}\|_{F}^{2}, \tag{7}\] \[s.t.\ \ \ (a)\ \mathbf{Q}\succeq 0\ \ \ (b)\ \mathbf{G}\prec 0\ \ \ (c)\ \mathcal{G}(\mathbf{P},\mathbf{Q})=\mathbf{G}\ \ \ (d)\ \mathbf{Q}=\mathbf{Q}^{T},\ \mathbf{G}=\mathbf{G}^{T},\ \mathbf{P}=\mathbf{P}^{T},\] where \(\|\cdot\|_{1}\) and \(\|\cdot\|_{F}\) denote the \(\ell_{1}\) and Frobenius norms, and \(\lambda_{1},\ \lambda_{2}\in\mathbb{R}^{+}\) are the regularization coefficients. Equation (7) is a semi-definite program with a nonlinear cost function [40; 38], and it can be solved using standard semi-definite solvers [41; 42].
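To illustrate the coefficient matching behind Lemma 4.5, here is a small sympy sketch for the one-dimensional policy of Example 4.1.1 paired with a simple quadratic LPF \(v(x)=x^{2}\); the LPF choice is ours, purely for illustration:

```python
import sympy as sp

x, p00, p01, p11 = sp.symbols('x p00 p01 p11')
f = p11*x**2 + 2*p01*x + p00          # the policy from Example 4.1.1
v = x**2                              # a toy quadratic LPF candidate (our choice)
vdot = sp.expand(sp.diff(v, x) * f)   # dv/dt = (dv/dx) * x_dot, as in Equation (5)
print(sp.Poly(vdot, x).all_coeffs())  # -> [2*p11, 4*p01, 2*p00, 0]
```

Matching these coefficients against the entries of \(\mathbf{b}_{x,\alpha+\beta}^{T}\mathbf{G}\,\mathbf{b}_{x,\alpha+\beta}\) is what produces the map \(\mathcal{G}(\mathbf{P},\mathbf{Q})=\mathbf{G}\) appearing in the constraints of Equation (7).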
Semi-definite programming facilitates optimization over the convex cone of symmetric, positive semi-definite matrices or its affine subsets. Note that constraint \((c)\) can cause the optimization to become non-convex. To alleviate this, we employ SDP relaxations [12], iterative methods based on an initial guess of the \(\mathbf{Q}\) matrix, and ultimately sequential quadratic programming (SQP) [43]. Notice that the negative-definite constraints can be transformed into semi-definite constraints by a constant shift. Furthermore, the Lyapunov conditions restrict the gradient to nonzero values away from the origin. This restriction ensures that the LPF has only one global minimum, at the target.

## 5 Experiments

We employ two motion planning datasets for our experiments. Our primary data come from the widely recognized LASA Handwriting Motion Dataset [44], which comprises data recorded from handwritten trajectories. The second dataset contains expert demonstrations collected by teleoperating a robotic arm on realistic manipulation tasks. Details about both datasets can be found in Appendix B.1.

### Evaluation

For evaluation purposes, we apply PLYDS and the baselines to the dataset (Figure 2(a)) and evaluate the performance of policy rollouts in PyBullet simulation (Figure 2(b)) before deploying safe policies onto a manipulator (Figure 2(c)).

Figure 2: Overview of the evaluation sequence: (a) learning from demonstrated data, (b) numerical evaluation in simulation, and (c) deployment on the real-world Gen3 Lite manipulator.

In all experiments, we randomly split the demonstrated trajectories in the dataset into train and test sets. The policy learning stage, introduced in Equation (7), is carried out on the training data. The learned policy is subsequently evaluated by calculating the MSE between the policy predictions and the ground truth in the test data, \(\frac{1}{2N_{d}^{test}N_{s}}\sum_{d=1}^{N_{d}^{test}}\sum_{s=1}^{N_{s}}(\hat{f}(\mathbf{x}^{d}(s);\ \mathbf{P})-\dot{\mathbf{x}}^{d}(s))^{2}\). Recall that the policy output, \(\dot{\mathbf{x}}\), is the velocity passed to the robot's low-level controller. We repeat this procedure over 20 different random seeds and report the average and standard deviation. We compare the accuracy of our approach to existing baselines. Primarily, we compare against the Stable Estimator of Dynamical Systems (SEDS) [3], Linear Parameter Varying Dynamical Systems (LPV-DS) [22], and Stable Dynamical System learning using Euclideanizing Flows (SDS-EF) [8] as methods that ensure GAS. We also compare our method to Behavioral Cloning (BC) [26] and Generative Adversarial Imitation Learning (GAIL) [23] to highlight the importance of global stability. Note that among these, BC and GAIL do not provide mathematical stability guarantees, but their results provide a further basis for comparing accuracy and computation time. The implementation details, hyperparameters, and architectures are discussed in Appendix B.2 and Appendix B.3.

### Handwriting dataset

We compare the learned policies of PLYDS to the baselines on eight designated motions. The outcome of these experiments is reported in Figure 3.

Figure 3: Comparison of (a) the mean and standard deviation of the reproduction MSE and (b) the computation time for the designated imitation learning methods.

Despite their stability guarantees, the stable imitation learning methods achieve better overall accuracy than the unstable neural approaches; PLYDS performs reasonably well in terms of accuracy and is even more promising in terms of computational cost. To analyze GAS, we visualize the learned policies of all methods as streamlines. Figure 4 illustrates the policy rollouts for the N-Shaped motion of the handwriting dataset. Each sub-figure represents a trained policy illustrated with gray streamlines. It is evident that SEDS and PLYDS maintain GAS, while GAIL and SDS-EF fail to demonstrate converging trajectories over the entire state space.
Figure 4: Policy rollout for the N-Shaped motion of the handwriting dataset. Each figure represents a trained policy (gray) and rollouts (red) learned from demonstrations (blue). Note the stability issues with GAIL and SDS-EF, where some streamlines fail to reliably converge to the target.

The same pattern persists for other motions, as depicted in Appendix C.1. Finally, we examine the sample efficiency of our method by reducing the input data to **one** demonstrated trajectory. From Figure 5, we can see that PLYDS learns a stable policy even with such limited training samples, while the baselines generate trajectories that diverge from the expert data.

Figure 5: Policy rollout for the Sine-Shaped motion (blue) of the handwriting dataset, with access to only _one_ expert demonstration. Each figure represents a trained policy (gray) and one rollout (red) learned from one demonstration (blue). Methods requiring large datasets for clustering, such as SEDS and LPV-DS, exhibit inaccurate and unsteady performance.

So far in this section, the policy and LPF polynomial degrees were set to \(\alpha=6\) and \(\beta=2\). To understand how the complexity of the polynomials affects the overall performance, we repeated the same experiments with degrees \(\alpha=4\), \(6\), and \(8\), and present the results in Appendix C.2. We observe that higher complexity leads to improved precision, unless halted by overfitting or a sacrifice of stability. Moreover, we study different LPF complexities in Appendix C.3, evaluate the performance of PLYDS with noisy demonstrations in Appendix C.4, and further investigate the computational times in Appendix C.5.

### Manipulation tasks

To conduct real-world trials, we collect a second set of expert demonstrations by teleoperating a Kinova Gen3 Lite, a manipulator arm with six degrees of freedom. This new dataset holds three distinct motions: (a) root-parabola, (b) standard pick-and-place, and (c) prolonged-sine, which represent exemplary nonlinear trajectories (Figure 6). Additional details are available in Appendix B. The performance of all methods is summarized in Table 1, where PLYDS often outperforms the other baselines. Next, the learned policy of PLYDS is transferred to the physical arm (Figure 6) and successfully imitates the introduced manipulation tasks. We also start the robot at regions farther away from the demonstrations and introduce perturbations by randomly pushing the robot, to reveal the inherent GAS of PLYDS. As expected, PLYDS manages to successfully recover and reach the goal.

## 6 Conclusion and Limitations

We introduced an approach that learns globally stable nonlinear policies represented by polynomial dynamical systems. We employ the learned policies for motion planning based on imitating expert demonstrations. Our approach jointly learns a polynomial policy along with a parametric Lyapunov candidate that certifies global asymptotic stability by design. The resulting DS is utilized as a motion planning policy, guiding robots to stably imitate the expert's behavior.
A comprehensive experimental evaluation is presented in simulation and in the real world, comparing the method against prominent imitation learning baselines. **Limitations.** A limitation of SOS is that the set of non-negative polynomials is larger than the set of polynomials expressible as SOS [45]. Though rare in motion planning tasks, this implies that finding a Lyapunov candidate could be difficult, especially when simultaneously searching for a suitable dynamical system. The Lasserre hierarchy and SOS extensions [46] can search a broader class of LPF candidates and tackle this issue. Another limitation arises when fitting highly complex policies, which can lead to a violation of the stability guarantees. This often happens when the regularization coefficients or the optimization tolerance are not set properly. We discuss this trade-off between stability and accuracy in Appendix C.2. Further, the computational cost of PLYDS is manageable with a reasonable choice of polynomial degrees; higher degrees are computationally demanding but often unnecessary in typical motion planning tasks. **Future work.** Future work includes incorporating more elaborate safety criteria, such as control barrier functions [47] or real-time obstacle avoidance, into our learning objectives. In addition, applications of our method to SE(3) planning or other higher-dimensional spaces, such as the configuration space of manipulator robots, may be further investigated. Vector Lyapunov functions and the adaptable complexity of polynomials can pave the way for such applications, as they alleviate major computational challenges.

## 7 Video, Codebase, and Reproducibility

The codebase, video supplements, and related materials for this project are available in our Git repository 1. Reproducing the experiments is as straightforward as installing the required software packages and running the Unix commands in the README files.

\begin{table} \begin{tabular}{c|c|c|c|c} Method & Prolonged Sine & Root Parabola & Pick-and-Place & Computational Time \\ \hline SEDS [3] & \(0.234\pm 0.015\) & \(0.152\pm 0.023\) & \(0.094\pm 0.012\) & \(277.02\pm 13.60\) \\ BC [26] & \(1.650\pm 0.133\) & \(0.931\pm 0.078\) & \(0.725\pm 0.133\) & \(38.93\pm 9.11\) \\ GAIL [23] & \(2.322\pm 0.098\) & \(1.322\pm 0.094\) & \(0.663\pm 0.098\) & \(143.15\pm 8.68\) \\ SDS-EF [8] & \(0.234\pm 0.015\) & \(0.152\pm 0.023\) & \(0.094\pm 0.012\) & \(715.62\pm 18.79\) \\ LPV-DS + P-QLF [22] & \(0.234\pm 0.015\) & \(0.152\pm 0.023\) & \(0.094\pm 0.012\) & \(334.55\pm 25.74\) \\ PLYDS (ours) & \(0.111\pm 0.007\) & \(0.176\pm 0.015\) & \(0.021\pm 0.003\) & \(21.37\pm 1.52\) \\ \hline \end{tabular} \end{table} Table 1: Policy rollout reproduction MSE per expert motion, and computational time, in PyBullet.

Figure 6: Manipulation tasks: (a) root-parabola, (b) standard pick and place, and (c) prolonged-sine.

#### Acknowledgments This work is sponsored by an NSERC Discovery Grant. We also appreciate the reviewers' thoughtful comments, which helped us enhance the paper, particularly the experiments.
2309.15905
Wide post-common envelope binaries containing ultramassive white dwarfs: evidence for efficient envelope ejection in massive AGB stars
Post-common-envelope binaries (PCEBs) containing a white dwarf (WD) and a main-sequence (MS) star can constrain the physics of common envelope evolution and calibrate binary evolution models. Most PCEBs studied to date have short orbital periods ($P_{\rm orb} \lesssim 1\,$d), implying relatively inefficient harnessing of binaries' orbital energy for envelope expulsion. Here, we present follow-up observations of five binaries from {\it Gaia} DR3 containing solar-type MS stars and probable ultramassive WDs ($M\gtrsim 1.2\,M_{\odot}$) with significantly wider orbits than previously known PCEBs, $P_{\rm orb} = 18-49\,$d. The WD masses are much higher than expected for systems formed via stable mass transfer at these periods, and their near-circular orbits suggest partial tidal circularization when the WD progenitors were giants. These properties strongly suggest that the binaries are PCEBs. Forming PCEBs at such wide separations requires highly efficient envelope ejection, and we find that the observed periods can only be explained if a significant fraction of the energy released when the envelope recombines goes into ejecting it. Our 1D stellar models including recombination energy confirm prior predictions that a wide range of PCEB orbital periods, extending up to months or years, can potentially result from Roche lobe overflow of a luminous AGB star. This evolutionary scenario may also explain the formation of several wide WD+MS binaries discovered via self-lensing, as well as a significant fraction of post-AGB binaries and barium stars.
Natsuko Yamaguchi, Kareem El-Badry, Jim Fuller, David W. Latham, Phillip A. Cargile, Tsevi Mazeh, Sahar Shahaf, Allyson Bieryla, Lars A. Buchhave, Melissa Hobson
2023-09-27T18:00:02Z
http://arxiv.org/abs/2309.15905v2
Wide post-common envelope binaries containing ultramassive white dwarfs: evidence for efficient envelope ejection in massive AGB stars ###### Abstract Post-common-envelope binaries (PCEBs) containing a white dwarf (WD) and a main-sequence (MS) star can constrain the physics of common envelope evolution and calibrate binary evolution models. Most PCEBs studied to date have short orbital periods (\(P_{\rm orb}\lesssim 1\) d), implying relatively inefficient harnessing of binaries' orbital energy for envelope expulsion. Here, we present follow-up observations of five binaries from _Gaia_ DR3 containing solar-type MS stars and probable ultramassive WDs (\(M\gtrsim 1.2\,M_{\odot}\)) with significantly wider orbits than previously known PCEBs, \(P_{\rm orb}=18-49\) d. The WD masses are much higher than expected for systems formed via stable mass transfer at these periods, and their near-circular orbits suggest partial tidal circularization when the WD progenitors were giants. These properties strongly suggest that the binaries are PCEBs. Forming PCEBs at such wide separations requires highly efficient envelope ejection, and we find that the observed periods can only be explained if a significant fraction of the energy released when the envelope recombines goes into ejecting it. Using 1D stellar evolution calculations, we show that the binding energy of massive AGB star envelopes is formally _positive_ if recombination energy is included in the energy budget. This suggests that the star's envelope can be efficiently ejected if binary interaction causes it to recombine, and that a wide range of PCEB orbital periods can potentially result from Roche lobe overflow of an AGB star. This evolutionary scenario may also explain the formation of several wide WD+MS binaries discovered via self-lensing. keywords: binaries: spectroscopic - white dwarfs - stars: AGB and post-AGB - stars: evolution ## 1 Introduction Common envelope evolution (CEE) is a major unsolved problem in binary evolution. CEE is the outcome of dynamically unstable mass transfer (MT), generally from a more massive donor to a less massive accretor. During CEE, both stars orbit inside a shared envelope, spiraling inward on a dynamical or thermal timescale. In some cases, the orbital energy liberated during this inspiral is sufficient to eject the shared envelope, leaving behind a close binary in which at least one component has lost most of its envelope. If envelope ejection is _not_ successful, the final outcome of CEE is a stellar merger. Modeling of CEE is a key uncertainty in our understanding of the formation of a wide variety of binary systems, including cataclysmic variables (e.g. Paczynski, 1976; Meyer & Meyer-Hofmeister, 1979; Willems & Kolb, 2004), X-ray binaries (e.g. Kalogera & Webbink, 1998), Type Ia supernovae (e.g. Webbink, 1984; Meng & Podsiadlowski, 2017), binary neutron stars (e.g. Bhattacharya & van den Heuvel, 1991), and binary black holes (Belczynski et al., 2016; Marchant et al., 2021). CEE is a dynamical process often involving an enormous range of physical and temporal scales. Because detailed, end-to-end calculations of CEE are currently infeasible (e.g. Ivanova et al., 2013) - and because it is often necessary to model the evolution of large numbers of binaries to understand the possible formation pathways of a single observed system - binary population synthesis (BPS; e.g. 
Hurley et al., 2002) codes are often used to model the evolution of millions of binaries, making it possible to explore a broad parameter space at the expense of physical realism. These codes make use of simplified models of CEE based on energy or angular momentum conservation. In the most widely used formalism, it is assumed that a fixed fraction ("\(\alpha\)") of the liberated orbital energy goes into ejecting the envelope. This fraction, and the binding energy of the envelope, then set the post-CEE orbital separation (e.g. Livio & Soker, 1988; Tout et al., 1997; De Marco et al., 2011). Other energy sources, such as the photons released during recombination when the envelope expands, are often modeled as reducing the envelope's binding energy. CEE models have historically been calibrated by comparing the binary populations they predict to observed post-common envelope binaries (PCEBs). The most abundantly observed PCEBs contain white dwarfs or hot subdwarfs in tight orbits (\(P_{\rm orb}\lesssim 1\) d) with a main-sequence (MS) star, having been produced when the MS star spiraled through the envelope of a red giant and ultimately ejected it. BPS models have succeeded in explaining some broad population properties of these binaries when using the \(\alpha\)-formalism (Han et al., 2002, 2003; Camacho et al., 2014). Such modeling makes it possible to empirically constrain \(\alpha\), and several calculations have found that the observations can best be reproduced by models assuming \(\alpha\approx 0.3\), meaning that \(\approx 30\%\) of the orbital energy liberated during inspiral goes into ejecting the envelope (Zorotovic et al., 2010; Davis et al., 2012; Toonen & Nelemans, 2013; Camacho et al., 2014; Zorotovic & Schreiber, 2022; Scherbak & Fuller, 2023). At least one PCEB is known with a (relatively) wide orbit. That system, IK Peg, has a period of 22 days and hosts an unusually massive WD, with mass \(M\approx 1.2\,M_{\odot}\) (Wonnacott et al., 1993). The system's wide orbit (\(a\approx 0.2\,\mathrm{au}\)) means that less orbital energy was liberated during the MS star's inspiral than in typical PCEBs with \(a\approx 0.01\,\mathrm{au}\). Zorotovic et al. (2010) found that in the \(\alpha\)-formalism, IK Peg's orbit can _only_ be explained when additional sources of energy besides orbital inspiral are taken into account. Besides IK Peg, several wide WD+MS binaries have been discovered via self-lensing, with orbital periods ranging from a few months to a few years (Kruse & Agol, 2014; Kawahara et al., 2018). While it is not clear whether these systems formed via CEE (see Section 5.1), they are also candidates for being wide PCEBs and would require additional energy sources (and/or high \(\alpha\) values) to explain (Zorotovic et al., 2014). Energy released by H and He recombination in the expanding envelope of the WD progenitor (e.g. Webbink, 2008; Ivanova et al., 2015; Ivanova, 2018) is a prime suspect for supplying the additional energy. Most PCEBs studied to date were identified via their composite spectra and RV variability detectable with low-resolution spectra (Rebassa-Mansergas et al., 2007, 2017; Lagos et al., 2022). This leads to strong selection effects in favor of PCEBs containing low-mass MS stars (which are less likely to outshine the WD) in tight orbits (where RV shifts are larger).
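For reference, the \(\alpha\)-formalism energy budget described above is commonly written as (e.g. Webbink, 1984; De Marco et al., 2011) \[\alpha\left(\frac{GM_{\rm c}M_{2}}{2a_{\rm f}}-\frac{GM_{1}M_{2}}{2a_{\rm i}}\right)=E_{\rm bind}\simeq\frac{GM_{1}M_{\rm env}}{\lambda R},\] where \(M_{1}=M_{\rm c}+M_{\rm env}\) is the donor mass, \(M_{\rm c}\) its core mass, \(M_{2}\) the companion mass, \(a_{\rm i}\) and \(a_{\rm f}\) the pre- and post-CE separations, \(R\) the donor radius, and \(\lambda\) a dimensionless structure parameter; this particular grouping of terms follows the standard convention rather than any result specific to this paper.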
The recent 3rd data release of the _Gaia_ mission (DR3; Gaia Collaboration et al., 2023) contains orbital solutions for more than \(10^{5}\) astrometric binaries, and for more than \(10^{5}\) single-lined spectroscopic binaries identified from medium-resolution spectra (Gaia Collaboration et al., 2023). This dataset provides a new opportunity to search for PCEBs with wider orbits and more massive MS companions. In this paper, we present five binaries in relatively wide orbits containing solar-type main sequence stars and probable ultramassive WD candidates. Section 2 describes our identification of wide PCEB candidates from the _Gaia_ DR3 catalog. Section 3 describes follow-up spectroscopic observations to obtain radial velocities (RVs), spectral analysis to calculate metallicities, and fitting to the broadband spectral energy distributions to constrain stellar parameters of the MS stars. In Section 4, we fit the RVs to infer orbital solutions. In particular, we measure a mass function which, when combined with the luminous star mass, yields a minimum mass for the compact object. We also discuss alternative possibilities for the nature of the unseen companions. In Section 5, we compare our systems to other known PCEBs. Section 6 describes models of the massive WD progenitors and constraints on CEE. In Section 7, we briefly describe an alternative CE formalism, the occurrence rate of close and wide PCEBs, and selection biases in past surveys. Finally, in Section 8, we summarize our main results and conclude.

## 2 Discovery

The five objects studied in this paper (Table 1) were discovered in the course of a broader search for compact objects with single-lined spectroscopic ("SB1") or astrometric + spectroscopic ("AstroSpectroSB1") solutions in the _Gaia_ DR3 non-single star (NSS) catalog (Gaia Collaboration et al., 2023). We selected promising candidates for further follow-up based on their mass functions, CMD positions, and _Gaia_ quality flags. In brief, we targeted sources whose CMD positions suggested a single luminous source and whose _Gaia_ mass functions implied a companion mass near the Chandrasekhar limit. For objects with SB1 solutions, we prioritized those for which Bashi et al. (2022) reported a robustness "score" above 0.5, corresponding roughly to an expected 20% contamination rate with spurious solutions. Our spectroscopic follow-up revealed some sources to have spurious _Gaia_ orbital solutions and others to be double- or triple-lined binaries. Here, we focus on 5 promising sources that are single-lined and whose _Gaia_-reported orbits were validated by our follow-up. All 5 of these sources turned out to have near-circular orbits, but eccentricity did not enter our initial selection, and we did not find any similar (single-lined, high mass function) targets with comparable periods and higher eccentricities. Our full search will be described in future work. Four targets have spectroscopic SB1 solutions, but no astrometric binary solution. We suspect this is a result of the stringent cuts on astrometric signal-to-noise ratio applied to astrometric solutions with short periods (see Gaia Collaboration et al., 2023). For these objects, the inclination is unknown, and only a minimum companion mass can be inferred. One object, J1314+3818, has a joint astrometric and spectroscopic ("AstroSpectroSB1") solution, meaning that its inclination is constrained. This object was identified as a likely MS + compact object binary by Shahaf et al.
(2023) on the basis of its large astrometric mass ratio function. Another object in our sample, J2034-5037, was previously identified by Jayasinghe et al. (2023) as a candidate neutron star + MS binary.

## 3 Follow-up

Here, we describe the follow-up spectra that we obtained, the process of measuring metallicities from these spectra, and our constraints on the MS stars' parameters from their spectral energy distributions. A log of our observations and measured RVs can be found in Appendix A.

### FEROS

We obtained 59 spectra with the Fiber-fed Extended Range Optical Spectrograph (FEROS; Kaufer et al., 1999) on the 2.2 m ESO/MPG telescope at La Silla Observatory (programs P109.A-9001, P110.A-9014, and P111.A-9003). Some observations used \(2\times 2\) binning to reduce readout noise at the expense of spectral resolution; the rest used \(1\times 1\) binning. The resulting spectra have resolution \(R\approx 40,000\) (\(2\times 2\) binning) and \(R\approx 50,000\) (\(1\times 1\) binning). Exposure times ranged from 1200 to 1800 seconds. We reduced the data using the CERES pipeline (Brahm et al., 2017), which performs bias-subtraction, flat fielding, wavelength calibration, and optimal extraction. The pipeline measures and corrects for small shifts in the wavelength solution during the course of a night via simultaneous observations of a ThAr lamp obtained with a second fiber. We first calculate RVs by cross-correlating a synthetic template spectrum with each order individually and then report the mean RV across 15 orders with wavelengths between 4500 and 6700 Å. We calculate the uncertainty on this mean RV from the dispersion between orders; i.e., \(\sigma_{\rm RV}=\rm std\left(RVs\right)/\sqrt{15}\). We used a Kurucz spectral template from the BOSZ grid (Bohlin et al., 2017) matched to the effective temperature of each star.

### TRES

We obtained 34 spectra using the Tillinghast Reflector Echelle Spectrograph (TRES; Furesz, 2008) mounted on the 1.5 m Tillinghast Reflector telescope at the Fred Lawrence Whipple Observatory (FLWO) atop Mount Hopkins, Arizona. TRES is a fibre-fed echelle spectrograph with a wavelength range of 390-910 nm and spectral resolution \(R\approx 44,000\). Exposure times ranged from 1800 to 3600 seconds. We extracted the spectra as described in Buchhave et al. (2010). As with the FEROS data, we measured RVs by cross-correlating the normalized spectra from each of 31 orders with a Kurucz spectrum template, and we estimate RV uncertainties from the dispersion between RVs measured from different orders; i.e., \(\sigma_{\rm RV}=\rm std\left(RVs\right)/\sqrt{31}\).

### MIKE

We observed J2117+0332 and J2034-5037 with the Magellan Inamori Kyocera Echelle (MIKE) spectrograph on the Magellan 2 telescope at Las Campanas Observatory (Bernstein et al., 2003). We used the 0.7" slit with an exposure time of 600 s. This yielded a spectral resolution \(R\sim 35,000\) and typical SNR of \(\sim 35\) and 16 per pixel on the red and blue sides, respectively. The total wavelength coverage was \(\sim 3330-9680\) Å (though we only used spectra below 6850 Å to avoid telluric line contamination when computing the metallicity). The spectra were reduced with the MIKE Pipeline using CarPy (Kelson et al., 2000; Kelson, 2003). We flux-calibrated the spectra using a standard star and merged the orders into a single spectrum, weighting by inverse variance in the overlap regions.

### Metallicities

Measuring metallicities of the MS stars is important for constraining their masses and ages.
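A minimal sketch of the per-order RV procedure described above (template cross-correlation on a velocity grid, then combining orders); the velocity grid and the simple flux-weighted CCF are our own illustrative choices, not the CERES or TRES pipeline implementations:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def order_rv(wave, flux, twave, tflux, vgrid=np.arange(-300.0, 300.0, 0.5)):
    """RV of one echelle order: maximize the CCF against a Doppler-shifted template."""
    ccf = [np.nansum(flux * np.interp(wave, twave * (1.0 + v / C_KMS), tflux))
           for v in vgrid]
    return vgrid[int(np.argmax(ccf))]

def combine_orders(order_rvs):
    """Mean RV over orders, with sigma_RV = std(RVs) / sqrt(N) as in the text."""
    rvs = np.asarray(order_rvs, dtype=float)
    return rvs.mean(), rvs.std(ddof=1) / np.sqrt(rvs.size)
```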
### MIKE

We observed J2117+0332 and J2034-5037 with the Magellan Inamori Kyocera Echelle (MIKE) spectrograph on the Magellan 2 telescope at Las Campanas Observatory (Bernstein et al., 2003). We used the 0.7" slit with an exposure of 600 s. This yielded a spectral resolution \(R\sim 35,000\) and typical SNR of \(\sim 35\) and 16 per pixel on the red and blue sides, respectively. The total wavelength coverage was \(\sim 3330-9680\) Å (though we only used spectra below 6850 Å to avoid telluric line contamination when computing the metallicity). The spectra were reduced with the MIKE Pipeline using CarPy (Kelson et al., 2000; Kelson, 2003). We flux-calibrated the spectra using a standard star and merged the orders into a single spectrum, weighting by inverse variance in the overlap regions.

### Metallicities

Measuring metallicities of the MS stars is important for constraining their masses and ages.

#### 3.4.1 SPC

We fit the TRES spectra using the Stellar Parameter Classification (SPC) tool (Buchhave et al., 2012). This code cross-correlates a grid of synthetic spectra with each observed spectrum in the wavelength range of 5050 to 5360 Å, centered on the Mg I b triplet. It then fits the peaks of the CCF with a three-dimensional, third-order polynomial to return best-fit values of effective temperature \(T_{\rm eff}\), surface gravity \(\log g\), and metallicity [M/H] that may lie in between the spacings of the grid. As described in the supplementary material of Buchhave et al. (2012), given systematic uncertainties in the synthetic stellar spectra, error floors on the derived [M/H] and \(T_{\rm eff}\) values are \(\sim 0.08\) dex and \(\sim 50\) K, respectively (see also Furlan et al., 2018). We report these floors in Table 2.

#### 3.4.2 BACCHUS

For the MIKE and FEROS spectra, we used the Brussels Automatic Code for Characterizing High accUracy Spectra (BACCHUS; Masseron et al., 2016; Hayes et al., 2022). This code performs 1D LTE spectral synthesis to determine stellar parameters. It carries out normalization by linearly fitting the continuum within 30 Å around a line. It then uses several methods to compare each line of the observed spectrum to that of synthetic spectra to calculate an abundance. The effective temperature, surface gravity, and microturbulence are estimated by determining values that result in null trends between the inferred abundances of a given element against the excitation potential, ionization potential, and equivalent widths, respectively. The metallicity [Fe/H] is the mean Fe abundance calculated over lines in the VALD atomic linelist (Piskunov et al., 1995; Ryabchikova et al., 2015) with a wavelength coverage of 4200 to 9200 Å. We assume the detailed abundance pattern traces solar values. The errors reported by BACCHUS represent the scatter in the implied abundances between the different lines and methods of abundance calculations, but do not take into account other systematic uncertainties (Hayes et al., 2022).

#### 3.4.3 Gaia XP

We also compare the values measured with SPC and BACCHUS to those calculated by Andrae et al. (2023) using the _Gaia_ XP very low-resolution spectra. These authors derive \(T_{\rm eff}\), \(\log g\), and [M/H] for 175 million stars with XP spectra published in DR3. Although the spectra from which these parameters are derived have low resolution, Andrae et al. (2023) demonstrated that their reported metallicities are accurate to better than 0.1 dex for bright and nearby stars like our targets with temperatures within the range of our sample.

#### 3.4.4 Results

The metallicities and effective temperatures obtained from spectral fitting are summarized in Table 2. The metallicities range from \(-0.15\) to 0.20 dex. For J2117+0332, we see that the metallicities from SPC and BACCHUS are in agreement. The _Gaia_ XP metallicities are not used in our analysis in the following sections but provide a useful comparison point.
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline Name & Gaia DR3 ID & RA [deg] & Dec [deg] & \(G\) [mag] & RUWE & \(\varpi\) [mas] \\ \hline J2117+0332 & 2692960678029100800 & 319.34490 & 3.54044 & 12.47 & 1.27 & 1.96 \(\pm\) 0.02 \\ J1111+5515 & 843829411442724864 & 167.80947 & 55.26410 & 10.61 & 1.47 & 3.24 \(\pm\) 0.02 \\ J1314+3818 & 1522897482203494784 & 198.51734 & 38.30119 & 11.05 & - & 12.45 \(\pm\) 0.02 \\ J2034-5037 & 6475655404885617920 & 308.60840 & -50.62557 & 12.37 & 2.94 & 3.23 \(\pm\) 0.04 \\ J0107-2827 & 5033197892724532736 & 16.98021 & -28.46128 & 12.27 & 1.74 & 2.14 \(\pm\) 0.02 \\ \hline \hline \end{tabular} \end{table}
Table 1: Basic information from Gaia DR3 of the five objects found in this work. The format for the name of each object is 'J' for J2000 followed by the coordinates of the right ascension (RA) in hours and minutes, and declination (Dec) in degrees and minutes. \(G\) is the G-band mean magnitude, RUWE is the Renormalised Unit Weight Error, and \(\varpi\) is the parallax.

Most of our [M/H] measurements are consistent with the _Gaia_ XP measurements from Andrae et al. (2023) within \(1\sigma\). The good agreement between the three metallicities shows that XP metallicities are likely sufficiently accurate for analysis of larger samples in cases where high-resolution follow-up would be prohibitively expensive. We also add a column for the best-fit [Fe/H] values from our SED fitting (Section 3.6), which uses the SPC and BACCHUS metallicities as a prior.

### Light curves

We retrieved observed light curves for our objects from the All-Sky Automated Survey for Supernovae (ASAS-SN; Shappee et al., 2014; Kochanek et al., 2017). We used the \(V\)-band data, for which the number of photometric points ranged from 1609 to 3194 across the five objects. The typical uncertainty in normalized flux is \(\sim 0.01\). To search for periodic variability, we computed Lomb-Scargle periodograms of these light curves (Lomb, 1976; Scargle, 1982; Astropy Collaboration et al., 2022). We did not find any significant periodicities beyond the lunar cycle and sidereal day. This allows us to rule out periodic variability with amplitude greater than the strongest noise peaks, which have amplitude \(\sim 0.002-0.003\) for all objects except J1314+3818, where they have amplitude \(\sim 0.006\).
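A minimal sketch of this periodicity search, using astropy's Lomb-Scargle implementation, is given below. The light curve here is a simulated stand-in (the real analysis used the ASAS-SN photometry described above), and the period grid and alias windows are illustrative choices.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Stand-in for a real ASAS-SN V-band light curve: times [days],
# normalized fluxes, and flux uncertainties.
rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 1500.0, 2000))
flux = 1.0 + rng.normal(0.0, 0.01, t.size)
flux_err = np.full(t.size, 0.01)

ls = LombScargle(t, flux, flux_err)
freq, power = ls.autopower(minimum_frequency=1.0 / 100.0,
                           maximum_frequency=1.0 / 0.05)
periods = 1.0 / freq

# Mask aliases of the sidereal day (~0.9973 d) and the lunar cycle (~29.5 d)
# before identifying the strongest remaining peak.
alias = (np.abs(periods - 0.9973) < 0.01) | (np.abs(periods - 29.5) < 1.0)
best = np.argmax(np.where(alias, 0.0, power))
print(f"strongest non-alias peak: P = {periods[best]:.4f} d, "
      f"power = {power[best]:.3f}")
```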
### SED fitting

We constructed broadband spectral energy distributions (SEDs) of our targets using synthetic _ugriz_ SDSS photometry calculated from _Gaia_ XP spectra (Gaia Collaboration et al., 2022) (with the exception of J2117+0332, where actual SDSS photometry was available and used instead; Padmanabhan et al., 2008), 2MASS _JHK_ photometry (Skrutskie et al., 2006), and _WISE_ \(W_{1}W_{2}W_{3}\) photometry (Wright et al., 2010). We obtained \(E(B-V)\) for each object using the Lallement et al. (2022) 3D dust map for declinations below -30\({}^{\circ}\) and the Bayestar2019 3D dust map (Green et al., 2019) for declinations above -30\({}^{\circ}\). These are given in Table 3. We assume a Cardelli et al. (1989) extinction law with \(R_{V}=3.1\). The Bayestar2019 map provides \(E(g-r)\), which is approximately equal to \(E(B-V)\) (Schlafly & Finkbeiner, 2011), while the Lallement et al. (2022) map provides the extinction \(A_{0}\) at 550 nm, which we take to be \(A_{V}\). As all our objects are relatively nearby with \(E(B-V)<0.05\), the uncertainties in these extinction values do not dominate the uncertainties in the final fitted parameters.

We do not attempt to account for flux contributions from the WD companions, which must be very small (given their high masses) and faint in the optical. We justify this assumption in Appendix B, where we show that even very hot WDs with \(T_{\rm eff}=60,000\) K would not significantly contribute to the photometry of all but one of our targets. For the one exception, we find that a WD with \(T_{\rm eff}\gtrsim 30,000\) K could contribute to the \(u\)-band photometry, so we conservatively excluded the \(u\)-band measurement from our fit.

We fit the SEDs using MINEsweeper (Cargile et al., 2020), a code designed for joint modeling of stellar photometry and spectra. We only use the code's photometric modeling capabilities but place a prior on the present-day surface metallicity from spectroscopy. The free parameters to be fit are each star's parallax, mass \(M_{\star}\), initial metallicity [Fe/H]\({}_{\rm init}\), and Equivalent Evolutionary Phase (EEP, a monotonic function of age; see Dotter, 2016). From each set of parameters, MINEsweeper generates a predicted SED and photometry in specified filters using neural network interpolation. We use emcee, a Python Markov chain Monte Carlo (MCMC) sampler (Foreman-Mackey et al., 2013), to sample from the posterior. Constraints from fitting each source's SED are listed in Table 3.

We note that MINEsweeper constrains the _initial_ metallicity, which is not identical to the present-day surface value measured from spectroscopy. For our targets, the difference between initial and present-day surface metallicity is a result of atomic diffusion, where heavier elements settle out of the atmosphere over time (Dotter et al., 2017). The present-day surface metallicity [Fe/H] is predicted by the isochrones given a set of \(M_{\star}\), [Fe/H]\({}_{\rm init}\), and EEP, so the spectroscopic metallicities found in Section 3.4 are used to add a Gaussian constraint on [Fe/H] to the likelihood. Putting everything together, the final likelihood function is:

\[\ln L=-\frac{1}{2}\sum_{i}\frac{\left(\mathrm{mag}_{\mathrm{pred},i}-\mathrm{mag}_{\mathrm{obs},i}\right)^{2}}{\sigma_{\mathrm{mag},i}^{2}}-\frac{1}{2}\frac{\left(\mathrm{[Fe/H]}_{\mathrm{pred}}-\mathrm{[Fe/H]}_{\mathrm{obs}}\right)^{2}}{\sigma_{\mathrm{[Fe/H]}}^{2}}-\frac{1}{2}\frac{\left(\varpi_{\mathrm{pred}}-\varpi_{\mathrm{obs}}\right)^{2}}{\sigma_{\varpi}^{2}}, \tag{1}\]

where "mag" stands for apparent magnitudes and the summation is over the appropriate photometric filters for each object, \(\sigma_{X}\) is the error on the observed value of some quantity \(X\), and [Fe/H] is the present-day surface metallicity. We set a floor on \(\sigma_{\rm mag}\) of 0.02 mag (given possible calibration issues) to avoid underestimating the errors.
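A minimal sketch of sampling this likelihood with emcee follows. The `predict` function below is a toy placeholder for MINEsweeper's neural-network isochrone interpolation (which we do not reproduce), the data values are illustrative, and priors are omitted for brevity.

```python
import numpy as np
import emcee

def predict(theta):
    """Toy stand-in for MINEsweeper's isochrone interpolation: maps
    (mass, EEP, [Fe/H]_init, parallax) to predicted magnitudes in three
    hypothetical bands, the surface [Fe/H], and the parallax."""
    mass, eep, feh_init, plx = theta
    mags = np.array([10.0, 9.6, 9.4]) - 5.0 * (mass - 1.0)  # fake SED
    feh_surface = feh_init - 0.05   # crude stand-in for atomic diffusion
    return mags, feh_surface, plx

def log_likelihood(theta, mag_obs, mag_err, feh_obs, feh_err, plx_obs, plx_err):
    """Equation 1: photometry + spectroscopic-[Fe/H] + parallax terms."""
    mag_pred, feh_pred, plx_pred = predict(theta)
    mag_err = np.maximum(mag_err, 0.02)   # 0.02 mag error floor, as in the text
    chi2 = np.sum((mag_pred - mag_obs) ** 2 / mag_err ** 2)
    chi2 += (feh_pred - feh_obs) ** 2 / feh_err ** 2
    chi2 += (plx_pred - plx_obs) ** 2 / plx_err ** 2
    return -0.5 * chi2

# Illustrative "observations" and sampling setup:
mag_obs, mag_err = np.array([10.1, 9.7, 9.5]), np.full(3, 0.02)
args = (mag_obs, mag_err, -0.2, 0.08, 3.24, 0.10)
p0 = np.array([1.0, 350.0, -0.1, 3.2]) + 1e-3 * np.random.randn(16, 4)
sampler = emcee.EnsembleSampler(16, 4, log_likelihood, args=args)
sampler.run_mcmc(p0, 500)
```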
We report the medians of the marginalized posterior distributions for each parameter in Table 3. \(M_{\star}\), EEP, and [Fe/H]\({}_{\rm init}\) are the parameters directly fitted for by MINEsweeper, while [Fe/H], \(T_{\rm eff}\), and \(R\) are calculated from the isochrones corresponding to the fitted parameters. The reported errors are described in Section 3.6.2. We also list constraints on [Fe/H] and \(T_{\rm eff}\) for comparison with the values measured from spectroscopy (Table 2).

Figure 1 shows MIST isochrones corresponding to the stellar parameters of 100 random posterior samples (gray). The cyan lines show the best-fit parameters and the red point marks the present inferred parameters of the MS stars. Two systems host stars that have evolved slightly off the MS. This is likely the result of selection bias, as evolved stars are brighter and thus over-represented in magnitude-limited samples. In addition, we assumed that stars were on the MS when estimating their masses in our initial selection of targets for follow-up. These initial mass estimates were somewhat too high for evolved stars, leading to overestimated companion masses. Since we targeted massive companions, which are intrinsically rare, we expect evolved MS stars to be preferentially selected.

The observed and predicted SEDs are shown in Figure 2. The model SEDs plotted were generated using pystellibs1 with the best-fit parameters as inputs. We have checked that these models give photometry roughly consistent with that predicted by MINEsweeper, which does not itself return a continuous SED. The residuals of the photometry typically lie within 0.1 mag. Where available, GALEX NUV points are shown in Figure 2, but these were not used in the fitting as they could have significant contamination from the WD companion. For J1314+3818, we also found that WDs with \(T_{\rm eff}\gtrsim 30000\) K would significantly contribute to the SDSS \(u\)-band photometry (see Appendix B), so this point was also excluded in the fitting.

Footnote 1: [https://mfouesneau.github.io/pystellibs/](https://mfouesneau.github.io/pystellibs/)

#### 3.6.1 A wide tertiary

One system, J0107-2827, has a resolved tertiary separated by a distance of 2.21 arcseconds, corresponding to a projected physical separation of 1033 AU. The consistency in the parallaxes and proper motions of the two sources makes it highly likely that they are in fact physically bound, as opposed to a chance alignment (e.g. El-Badry et al. 2021). While the source is resolved by _Gaia_ in the \(G\) band, the XP, 2MASS, and _WISE_ photometry of the two sources are likely all unresolved, so we model its SED as a sum of two luminous stars. Assuming the tertiary is on the main sequence, its \(G\)-band absolute magnitude of \(\sim 7.3\) (calculated using the reported apparent magnitude of 15.7) corresponds to a mass of approximately \(0.67\,M_{\odot}\). We assume solar metallicity (consistent with the initial metallicity we infer for the primary) and an age of \(\sim 6\) Gyr. Using these parameters, we generated photometry for this third star (gray line in Figure 2) which we added to the model primary (black). This sum (magenta) was fit to the observations.

#### 3.6.2 Effect of potentially underestimated parallax errors

Our fitting also leaves the parallax, \(\varpi\), free, allowing us to propagate parallax uncertainties through to the stellar parameters. From Table 1, we see that the Renormalised Unit Weight Errors (RUWEs) from _Gaia_ DR3 for several of our objects are above 1.4, which may indicate that the reported parallax uncertainties are underestimated (Lindegren 2018) as a result of orbital motion, which is not accounted for in the _Gaia_ single-star astrometric model. To estimate more realistic parallax uncertainties, we carry out the following analysis. We select sources from the _Gaia_ NSS catalog with Orbital or AstroSpectroSB1 solutions. In addition to the "single-star model" parallaxes reported for these solutions in the gaia_source table, these sources have improved parallaxes from astrometric solutions that account for wobble induced by their binarity (Gaia Collaboration et al. 2023b; Halbwachs et al. 2023). From these, we select those with phot_g_mean_mag < 13 and RUWE values comparable to our targets.
We then calculate the standard deviation of the difference between the parallaxes reported from single-star solutions (gaia_source) and binary solutions (NSS catalog). We found a standard deviation of 0.104 mas for 13,132 sources with RUWE between 1.4 and 2, and a standard deviation of 0.181 mas for 18,351 sources with RUWE between 2 and 3. The maximum RUWE for our objects is 2.94 (Table 1). Based on these values, we re-run the SED fitting with increased parallax uncertainties of 0.2 mas for J2034-5037 (with a RUWE > 2) and 0.1 mas for the remaining four objects. We find no significant changes to the best-fit values of the parameters but a general increase in the uncertainties (i.e. the standard deviations of the parameters from the posterior). We report these inflated uncertainties in Table 3. Note that in Section 4.2, we also set an uncertainty floor of \(\pm 0.05\,M_{\odot}\) on \(M_{\star}\) to obtain conservative errors on the inferred masses of the WDs.

## 4 Orbital Fits

We fit the FEROS and TRES RVs with a Keplerian model using emcee. The free parameters of the fit are the orbital period \(P_{\rm orb}\), periastron time \(T_{p}\), eccentricity \(e\), RV semi-amplitude \(K_{\star}\), argument of periastron \(\omega\), and center-of-mass RV \(\gamma\). In the case of J2117+0332, where we have RVs from two different instruments, we also fit for an RV offset between the two instruments as an additional parameter. We set broad, uniform priors on all parameters. The likelihood function is defined as:

\[\ln L=-\frac{1}{2}\sum_{i}\frac{\left({\rm RV}_{\rm pred}\left(t_{i}\right)-{\rm RV}_{i}\right)^{2}}{\sigma_{{\rm RV},i}^{2}} \tag{2}\]

where \({\rm RV}_{\rm pred}\left(t_{i}\right)\) and \({\rm RV}_{i}\) are the predicted and measured RVs at times \(t_{i}\), and \(\sigma_{{\rm RV},i}\) are the measurement uncertainties.

The best-fit RV curve for each object is shown in Figure 3. Best-fit values for \(P_{\rm orb}\), \(e\), and \(K_{\star}\), along with the implied mass functions \(f_{m}\) given these parameters, are reported in Table 4. For comparison, we also list the mass functions calculated using the same parameters from the _Gaia_ DR3 SB1 solutions.

We find that all of the systems have a small but non-zero eccentricity. To confirm that these are significant, we also fit the RVs using a model with eccentricity and \(\omega\) fixed to zero. The residuals from the two models are plotted on the second and third panels for each object in Figure 3. We see that the residuals from the model with \(e=0\) are obviously larger than those from the model that fits for \(e\), with the possible exception of J2117+0332 (which has \(e=0.0007\pm 0.0002\)) where the difference is more subtle. Since eccentricity cannot be negative, observational uncertainties will result in a positive eccentricity bias for orbits that are nearly circular (e.g. Hara et al. 2019). For J2117+0332, we generate simulated RVs with the orbital parameters of the \(e=0\) fit at the JDs of our observations, adding to them Gaussian noise with a standard deviation of \(0.05\,{\rm km\,s^{-1}}\). We then fit these RVs with a Keplerian model, which yields \(e\sim 0.0003\pm 0.0002\). This is comparable to the uncertainty on \(e\) we find with the measured RVs, and smaller than the measured eccentricity. This experiment provides additional support that the non-zero eccentricity measured for J2117+0332 is real.
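Both the orbital fits and the simulated-RV experiment above rest on the same Keplerian model; a minimal, self-contained sketch is given below (our conventions: angles in radians, times and periods in days, velocities in km/s).

```python
import numpy as np

def solve_kepler(M, e, tol=1e-10):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton iteration."""
    E = np.array(M, dtype=float)
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def rv_model(t, P, Tp, e, K, omega, gamma):
    """Keplerian RV curve with the six free parameters of our fit."""
    M = np.mod(2.0 * np.pi * (t - Tp) / P, 2.0 * np.pi)   # mean anomaly
    E = solve_kepler(M, e)                                # eccentric anomaly
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2), # true anomaly
                          np.sqrt(1 - e) * np.cos(E / 2))
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))

def log_likelihood(theta, t, rv, rv_err):
    """Equation 2, evaluated for theta = (P, Tp, e, K, omega, gamma)."""
    return -0.5 * np.sum((rv_model(t, *theta) - rv) ** 2 / rv_err ** 2)

# Example: a model curve with roughly J1111+5515-like parameters.
t = np.linspace(0.0, 64.0, 100)
rv = rv_model(t, 32.15, 0.0, 0.022, 49.4, 0.5, -10.0)
```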
### Joint astrometric + RV fitting

One of our targets, J1314+3818, has a _Gaia_ AstroSpectroSB1 solution, meaning that the _Gaia_ astrometry and RVs were fit with a combined orbital model. This model has 15 parameters: ra, dec, parallax, pmra, pmdec, a_thiele_innes, b_thiele_innes, f_thiele_innes, g_thiele_innes, c_thiele_innes, h_thiele_innes, center_of_mass_velocity, eccentricity, period, and t_periastron (Halbwachs et al., 2023; Gaia Collaboration et al., 2023b).

\begin{table} \begin{tabular}{c c c c c c c c c} \hline & \multicolumn{4}{c}{[Fe/H]} & \multicolumn{4}{c}{\(T_{\rm eff}\) [K]} \\ \cline{2-9} Name & SPC & BACCHUS & Gaia XP & SED & SPC & BACCHUS & Gaia XP & SED \\ \hline J2117+0332 & -0.24 \(\pm\) 0.08 & -0.284 \(\pm\) 0.18 & -0.380 & -0.22 \(\pm\) 0.06 & 6029 \(\pm\) 50 & 6152 \(\pm\) 79 & 6111.0 & 6226 \(\pm\) 19 \\ J1111+5515 & -0.15 \(\pm\) 0.08 & - & -0.172 & -0.17 \(\pm\) 0.06 & 5987 \(\pm\) 50 & - & 6006.3 & 6190 \(\pm\) 22 \\ J1314+3818 & -0.39 \(\pm\) 0.08 & - & -0.291 & -0.34 \(\pm\) 0.05 & 4707 \(\pm\) 50 & - & 4700.2 & 4684 \(\pm\) 13 \\ J2034-5037 & - & -0.346 \(\pm\) 0.078 & -0.352 & -0.19 \(\pm\) 0.06 & - & 5789 \(\pm\) 17 & 5758.8 & 5856 \(\pm\) 20 \\ J0107-2827 & - & 0.198 \(\pm\) 0.127 & 0.244 & 0.04 \(\pm\) 0.07 & - & 5524 \(\pm\) 51 & 5330.4 & 5387 \(\pm\) 21 \\ \hline \end{tabular} \end{table}
Table 2: Comparison of the metallicities and \(T_{\rm eff}\) obtained from various methods.

Figure 1: MIST isochrones for all of our objects. In each plot, gray lines are the isochrones given the masses and metallicities of 100 randomly chosen posterior samples from the photometric fitting, while the cyan line is that of the best-fit parameters. The red point marks the present location of the object and the error bars are the standard deviation in the radii and temperatures of the posteriors at the corresponding ages.

Figure 2: Model SEDs for all five objects using the best-fit parameters from fitting the photometry using MINEsweeper, compared with the observations. J0107-2827 has a visible third companion whose SED is shown in gray. The magenta line is the sum of the two luminous stars (the black line corresponding to the more luminous star lies close to, but under, the magenta line). Note that while GALEX points are plotted, these were not used in the photometric fitting to avoid possible contamination from the WD companions.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Name & \(E\left(B-V\right)\) & \(M_{\star}\) [\(M_{\odot}\)] & EEP & [Fe/H]\({}_{\rm init}\) & \(\varpi\) [mas] & [Fe/H] & \(T_{\rm eff}\) [K] & \(R\) [\(R_{\odot}\)] \\ \hline J2117+0332 & 0.045 & 1.11 \(\pm\) 0.03 & 383.26 \(\pm\) 16.40 & -0.09 \(\pm\) 0.05 & 1.96 \(\pm\) 0.10 & -0.23 \(\pm\) 0.05 & 6297 \(\pm\) 20 & 1.25 \(\pm\) 0.06 \\ J1111+5515 & 0.009 & 1.15 \(\pm\) 0.02 & 444.90 \(\pm\) 2.92 & -0.09 \(\pm\) 0.06 & 3.24 \(\pm\) 0.10 & -0.17 \(\pm\) 0.06 & 6169 \(\pm\) 22 & 1.78 \(\pm\) 0.06 \\ J1314+3818 & 0.000 & 0.71 \(\pm\) 0.01 & 276.03 \(\pm\) 35.93 & -0.33 \(\pm\) 0.04 & 12.45 \(\pm\) 0.10 & -0.34 \(\pm\) 0.05 & 4670 \(\pm\) 13 & 0.71 \(\pm\) 0.01 \\ J2034-5037 & 0.024 & 0.96 \(\pm\) 0.02 & 321.80 \(\pm\) 52.27 & -0.16 \(\pm\) 0.06 & 3.24 \(\pm\) 0.17 & -0.18 \(\pm\) 0.06 & 5857 \(\pm\) 19 & 0.89 \(\pm\) 0.05 \\ J0107-2827 & 0.027 & 0.97 \(\pm\) 0.03 & 459.26 \(\pm\) 1.39 & 0.08 \(\pm\) 0.07 & 2.14 \(\pm\) 0.08 & 0.01 \(\pm\) 0.06 & 5325 \(\pm\) 19 & 1.71 \(\pm\) 0.07 \\ \hline \end{tabular} \end{table}
Table 3: Best-fit parameters from SED fitting. We have also added the extinction \(E\left(B-V\right)\) for all objects; these were taken from 3D dust maps and have uncertainties of \(\sim 0.02\) mag. \(M_{\star}\) is the mass of the luminous star, EEP is the Equivalent Evolutionary Phase (related to its age), [Fe/H]\({}_{\rm init}\) is the initial metallicity of the star, and \(\varpi\) is the parallax. For \(M_{\star}\), we set an uncertainty floor of \(\pm 0.05\,M_{\odot}\) when calculating WD masses in Section 4.2. The other parameters are inferred from the isochrone corresponding to the fitted parameters, where [Fe/H] is the present-day metallicity, \(T_{\rm eff}\) is the effective temperature, and \(R\) is the radius.
Figure 3: Results of the RV fitting. For each object, the top panel shows the best-fit RV curve over the observed points, the second panel shows the residuals of this fit, the third panel shows the residuals of the fit with eccentricity set to 0, and the final panel shows the implied WD mass as a function of inclination (Section 4.2). For all objects (except possibly J2117+0332), the residuals for the model which fits for the eccentricity are significantly smaller than those with eccentricity fixed to zero. For J1314+3818, we fit the RVs and _Gaia_ astrometry simultaneously (Section 4.1), with the red cross indicating the best-fit \(i\) and the corresponding \(M_{\rm WD}\sim 1.324\,M_{\odot}\).

The Thiele-Innes elements A, B, F, and G describe the astrometric orbit of the photocenter and are transformations of the Campbell elements. The C and H elements are constrained from the _Gaia_ RVs of the MS star. In the case of a dark companion, the photocenter simply traces the MS star.

We fit our RVs and the _Gaia_ constraints simultaneously, using the likelihood described by El-Badry et al. (2023), which we briefly summarize here. _Gaia_ stores the correlation matrix of the parameters in a vector corr_vec, from which, along with the errors on the parameters, we can construct a covariance matrix. We then construct a log-likelihood function that is a sum of two terms: one that compares the predicted astrometric parameters and all Thiele-Innes coefficients to the _Gaia_ constraints, and one that compares the measured and predicted RVs (Equation 2). The model we fit has 14 free parameters: ra, dec, parallax, pmra, pmdec, eccentricity \(e\), inclination \(i\), angle of the ascending node \(\Omega\), argument of periastron \(\omega\), periastron time \(T_{p}\), center-of-mass RV \(\gamma\), companion mass \(M_{\rm WD}\), and luminous star mass \(M_{\star}\). For \(M_{\star}\), we set a Gaussian constraint based on its best-fit value and error obtained from the SED fitting (Table 3). The resulting parameters can be found in Table 5 and the plots are shown in Figure 3. We find \(i\approx 100.0^{\circ}\) (or \(\approx 80^{\circ}\)) and \(M_{\rm WD}=1.324\pm 0.037\,M_{\odot}\).

### White dwarf masses

From the parameters of the RV fitting, we can calculate the mass function, \(f_{m}\), which provides a constraint on the mass of the unseen (WD) companion \(M_{\rm WD}\):

\[f_{m}=\frac{M_{\rm WD}^{3}\sin^{3}i}{(M_{\star}+M_{\rm WD})^{2}}=\frac{P_{\rm orb}K_{\star}^{3}}{2\pi G}\left(1-e^{2}\right)^{3/2} \tag{3}\]

where \(M_{\star}\) is the mass of the luminous star, which we constrained by fitting the SED (Section 3.6).
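Equation 3 can be evaluated and inverted numerically for \(M_{\rm WD}\) at any assumed inclination; a short sketch follows (constants are approximate, and the J1111+5515 numbers are taken from Table 4 purely as a consistency check).

```python
import numpy as np
from scipy.optimize import brentq

G = 6.674e-11        # SI units
M_SUN = 1.989e30     # kg
DAY = 86400.0        # s

def mass_function(P_days, K_kms, e):
    """f_m = P K^3 (1 - e^2)^(3/2) / (2 pi G), in solar masses."""
    P, K = P_days * DAY, K_kms * 1e3
    return P * K ** 3 * (1 - e ** 2) ** 1.5 / (2 * np.pi * G) / M_SUN

def m_wd(fm, M_star, incl_deg):
    """Invert Equation 3 for the companion mass at a given inclination."""
    s3 = np.sin(np.radians(incl_deg)) ** 3
    g = lambda m2: m2 ** 3 * s3 / (M_star + m2) ** 2 - fm
    return brentq(g, 1e-3, 100.0)   # monotonic in m2, so bracketing works

# J1111+5515 (Table 4): recovers f_m ~ 0.402 Msun; the edge-on case
# (i = 90 deg) gives the minimum companion mass, ~1.37 Msun.
fm = mass_function(32.1494, 49.435, 0.0217)
print(f"f_m = {fm:.4f} Msun, M_WD,min = {m_wd(fm, 1.15, 90.0):.3f} Msun")
```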
With just the RVs, the inclination \(i\) is not constrained, meaning that for most of our objects, we can only place a lower limit on the WD mass, which occurs when \(i=90^{\circ}\) (i.e. when the orbit is "edge-on" to our line of sight). The implied \(M_{\rm WD}\) as a function of the inclination is plotted on the lower panels of Figure 3. We shade the regions for \(M_{\star}\) values \(\pm 0.05\,M_{\odot}\) above and below the best-fit value from the SED fitting (Section 3.6.2). The minimum masses \(M_{\rm WD,min}\) range from \(1.244^{+0.027}_{-0.027}\) to \(1.418^{+0.033}_{-0.033}\,M_{\odot}\). Given the uncertainties, these values are all consistent with masses close to, but just below, the Chandrasekhar limit of \(\sim 1.4\,M_{\odot}\). For J1314+3818, we obtain an inclination constraint from astrometry (Section 4.1) and thus a precise value for \(M_{\rm WD}\) of \(1.324\pm 0.037\,M_{\odot}\), as opposed to just a lower limit. This point is shown as a red cross on the plot of \(M_{\rm WD}(i)\) for this object in Figure 3. The inferred WD masses are summarized in Table 6.

At the time of writing, these WDs are among the most massive WDs known (e.g. Hermes et al., 2013; Curd et al., 2017; Cognard et al., 2017; Hollands et al., 2020; Caiazzo et al., 2021; Miller et al., 2022), if they are indeed WDs (see Section 4.3). We note that most other ultramassive WD candidates have mass estimates that depend on WD cooling models and mass-radius relations, which are uncertain for the most massive WDs (e.g. Camisassa et al., 2019; Schwab, 2021). Meanwhile, our measurements (and similarly, those of Cognard et al., 2017) provide fairly robust constraints on the mass with minimal assumptions about the WD itself (though there is still dependence on the stellar models used to infer the mass of the main-sequence companions).

### Nature of the unseen companions

Here we discuss whether the unseen companions could be objects other than WDs.

#### 4.3.1 MS binaries or triples

A MS companion with a mass of \(\sim 1.3\,M_{\odot}\) would dominate the SEDs of all the objects in our sample. In this case, we would see two sets of lines in the spectra and changes in the composite line profiles with orbital phase. Since the spectra of our targets are all well-fit by single-star models, we can definitively rule out a single MS companion.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Name & \(P_{\rm orb}\) [d] & \(e\) & \(K_{\star}\) [km/s] & \(f_{m}\) [\(M_{\odot}\)] & \(f_{m,G}\) [\(M_{\odot}\)] \\ \hline J2117+0332 & 17.9239 \(\pm\) 0.0001 & 0.0007 \(\pm\) 0.0002 & 57.215 \(\pm\) 0.011 & 0.3478 \(\pm\) 0.0002 & 0.4143 \(\pm\) 0.0398 \\ J1111+5515 & 32.1494 \(\pm\) 0.0022 & 0.0217 \(\pm\) 0.0003 & 49.435 \(\pm\) 0.019 & 0.4021 \(\pm\) 0.0004 & 0.3981 \(\pm\) 0.0052 \\ J1314+3818 & 45.5150 \(\pm\) 0.0047 & 0.0503 \(\pm\) 0.0003 & 48.468 \(\pm\) 0.015 & 0.5349 \(\pm\) 0.0005 & – \\ J2034-5037 & 46.1147 \(\pm\) 0.0006 & 0.0079 \(\pm\) 0.0002 & 47.299 \(\pm\) 0.012 & 0.5056 \(\pm\) 0.0004 & 0.6392 \(\pm\) 0.0944 \\ J0107-2827 & 49.0063 \(\pm\) 0.0008 & 0.0901 \(\pm\) 0.0005 & 43.370 \(\pm\) 0.011 & 0.4092 \(\pm\) 0.0003 & 0.4175 \(\pm\) 0.0275 \\ \hline \hline \end{tabular} \end{table}
Table 4: Best-fit orbital parameters from the RV fitting. \(f_{m}\) is the mass function given the other three parameters and \(f_{m,G}\) is the mass function given the same parameters from the _Gaia_ solution.
\begin{table} \begin{tabular}{c c c} \hline \hline & Astrometry Only & Astrometry + RV \\ \hline RA [deg] & 198.517 \(\pm\) 0.023 & 198.532 \(\pm\) 0.010 \\ Dec [deg] & 38.299 \(\pm\) 0.013 & 38.307 \(\pm\) 0.012 \\ \(\varpi\) [mas] & 12.446 \(\pm\) 0.015 & 12.447 \(\pm\) 0.015 \\ PMRA [mas/yr] & 129.523 \(\pm\) 0.013 & 129.526 \(\pm\) 0.011 \\ PMDEC [mas/yr] & -224.216 \(\pm\) 0.013 & -224.217 \(\pm\) 0.013 \\ \(P_{\rm orb}\) [days] & 45.516 \(\pm\) 0.005 & 45.519 \(\pm\) 0.000 \\ \(e\) & 0.046 \(\pm\) 0.006 & 0.0504 \(\pm\) 0.0003 \\ \(i\) [deg] & 99.834 \(\pm\) 0.370 & 99.945 \(\pm\) 0.350 \\ \(\Omega\) [deg] & 86.017 \(\pm\) 0.339 & 85.997 \(\pm\) 0.344 \\ \(\omega\) [deg] & 99.219 \(\pm\) 7.147 & 93.220 \(\pm\) 0.307 \\ \(t_{\rm peri}\) [days] & -10.996 \(\pm\) 0.916 & -11.763 \(\pm\) 0.047 \\ \(\gamma\) [km/s] & 2.445 \(\pm\) 0.209 & 2.811 \(\pm\) 0.010 \\ \(M_{\rm WD}\) [M\({}_{\odot}\)] & 1.325 \(\pm\) 0.046 & 1.324 \(\pm\) 0.037 \\ \(M_{\star}\) [M\({}_{\odot}\)] & 0.713 \(\pm\) 0.050 & 0.712 \(\pm\) 0.049 \\ \hline \hline \end{tabular} \end{table}
Table 5: Best-fit parameters of J1314+3818 from fitting just the astrometric solution, and when combined with the RVs.

A different possibility is that these systems are hierarchical triples consisting of a close inner binary of two \(\sim 0.65\,M_{\odot}\) MS stars orbiting the primary (e.g. van den Heuvel & Tauris, 2020). Together, the two would be dimmer than a single \(1.3\,M_{\odot}\) MS star. We can estimate the contribution of such a binary to the overall SED in a similar way as we do for the case of a WD in Appendix B. We once again use pystellibs to generate an SED, but for a \(0.65\,M_{\odot}\) star on the MS. This mass roughly corresponds to a K7V star with a radius and temperature of \(0.63\,R_{\odot}\) and \(4100\,\)K, respectively (e.g. Pecaut & Mamajek, 2013). We can then calculate the ratio of the flux from the two stars to that of the single star that was fitted in Section 3.6. At 550 nm, the fractional flux contribution of such an inner binary would be \(4.9\), \(2.6\), \(66.9\), \(1.32\), and \(5.2\)%, respectively, for J2117+0332, J1111+5515, J1314+3818, J2034-5037, and J0107-2827. In the infrared, at \(3\,\mu\)m, the contribution would be larger, ranging from \(\sim 13\) to \(56\)% for four objects, with the exception of J1314+3818, where the inner binary would outshine the tertiary. Therefore, for J1314+3818, a hierarchical triple model is untenable. For the other objects, it is less obvious: the contribution at optical wavelengths is relatively small, meaning that any colour difference or spectral contribution is likely not enough to be distinguished from a single source. We also note that a WD+WD or WD+MS inner binary would similarly be dim in the optical and difficult to detect, but forming these in close orbits would be challenging from an evolutionary standpoint (given the size of the WD progenitor), and thus we do not consider them further.

We also tested whether an inner binary's presence could be inferred from the SED fit. For each system, we constructed a "mock triple" SED by adding synthetic photometry for the inner binary to synthetic photometry for the best-fit single-star model. We then fit this composite SED with a single-star model using MINEsweeper and check whether the residuals of the fit worsen significantly compared to those in Section 3.6.
We find that while the median residual does worsen slightly (at most by a factor of a few), the residuals of most photometric points still remain within \(\lesssim 0.1\) mag. As expected, the exception is J1314+3818, where the residuals reach \(0.2\) mag. We conclude that the worst-case inner binaries could escape detection via a poor SED fit in all systems except J1314+3818.

We next consider the possible periods of hypothetical inner binaries. There is a maximum period set by dynamical stability considerations: \(P_{\rm out}/P_{\rm in}\geq 5\), where \(P_{\rm out}\) and \(P_{\rm in}\) are the outer and inner orbital periods, respectively (we are also taking \(e\sim 0\) and the ratio of the mass of the outer star to that of the inner binary \(\sim 1\); Mardling & Aarseth, 2001; Tokovinin, 2014). Given that our objects have \(P_{\rm out}\sim 30\) days, this implies \(P_{\rm in}\lesssim 6\) days. As for the minimum period, if the orbit of the inner binary is sufficiently tight, we may detect ellipsoidal variability due to tidal distortion of the inner components. We use the code PHysics Of Eclipsing BinariEs (PHOEBE; Prsa & Zwitter, 2005) to generate synthetic light curves of an inner binary of two K dwarfs for a range of periods from \(\sim 0.25\) to \(1\) day. The amplitude of the ellipsoidal variability decreases with increasing period. We then add a fraction of the signal of these synthetic light curves to the observed light curves (described in Section 3.5). Given that an inner binary contributes \(\lesssim 10\)% of the total light (from above), we set this fraction to \(0.1\). We then generate periodograms of these light curves (Astropy Collaboration et al., 2022) and see whether or not we would be able to detect variability at half the inner binary's period. We find that with only \(\sim 10\)% of the light coming from the inner binary, ellipsoidal variability can only be distinguished from the noise for \(P_{\rm inner}\lesssim 0.3\) days. Thus, the range of possible inner periods is \(\sim 0.3\) to \(6\) days.

In summary, with the available observations, we cannot rule out a tight MS binary in four out of the five systems. However, we emphasize that there are very few hierarchical triple systems that have outer orbital periods below \(\sim 1000\) days (this is not a selection effect, see Tokovinin, 2014), while all our systems have \(P_{\rm orb}<50\) days. The few known compact hierarchical triples that have been found all have significantly more eccentric outer orbits than our objects, with values ranging from about \(0.2\) to \(0.6\). The only exceptions known are two triples in which the outer tertiary is a giant with a large radius, which likely circularized their orbits (Rappaport et al., 2022, 2023). All five of our systems have eccentricities close to zero and host MS stars in orbits that would not circularize in a Hubble time. This fact is easily understood if the companions are WDs: the binaries would have been (partially) circularized when the WD progenitor was a red giant. If the companions were tight MS binaries, there would be no reason to expect circular outer orbits, and it would be very improbable for all five systems to have \(e<0.1\) by chance.

#### 4.3.2 Neutron stars

As we report minimum masses that are very close to the Chandrasekhar limit (Table 6), we also consider the possibility that the companions are neutron stars (NSs). However, NSs are expected to be born with natal kicks which drive their orbits to be eccentric (e.g.
Hills, 1983; Colpi & Wasserman, 2002; Podsiadlowski et al., 2005). Thus, we must consider formation mechanisms which can explain the near-zero eccentricities of our objects (Table 4). In the case of no natal kick and spherically symmetric mass loss forming the NS (e.g. Blaauw, 1961), the eccentricity acquired (taking an initial eccentricity of zero) is given by

\[e=\frac{\Delta m}{m_{c}+m_{2}} \tag{4}\]

where \(\Delta m\) is the mass lost, \(m_{c}\) is the remaining core/NS mass, and \(m_{2}\) is the mass of the companion (Hills, 1983). In the case where an \(8\,M_{\odot}\) star explodes by a core-collapse supernova (SN) to form a \(1.3\,M_{\odot}\) NS around a \(1\,M_{\odot}\) MS companion, we see that \(e>1\) and the system will be unbound. In reality, the massive progenitor may lose a significant amount of mass through winds or binary interactions prior to the explosion, in which case the eccentricity will be smaller and may allow the binary to survive, though likely in an eccentric orbit. Even in the case of an ultra-stripped SN explosion with \(\sim 0.3\,M_{\odot}\) of ejecta (De et al., 2018; Yao et al., 2020), we expect \(e\sim 0.1\), significantly larger than the majority of our systems (though even lower ejecta masses are possible; Tauris et al., 2013, 2015). Moreover, if the SN is asymmetric (in its ejecta or neutrino emission), a strong kick can be imparted on the NS, which will likely result in large eccentricities, if it does not unbind the NS (e.g. Fryer et al., 1999; Tauris et al., 1999). Thus, a NS formed in this way is unlikely to be found in very circular orbits like our targets.

\begin{table} \begin{tabular}{c c} \hline \hline Name & M\({}_{\rm WD,min}\) [M\({}_{\odot}\)] \\ \hline J2117+0332 & \(1.244^{+0.027}_{-0.027}\) \\ J1111+5515 & \(1.367^{+0.028}_{-0.029}\) \\ J1314+3818 & \(1.324\pm 0.037*\) \\ J2034-5037 & \(1.418^{+0.033}_{-0.033}\) \\ J0107-2827 & \(1.271^{+0.030}_{-0.031}\) \\ \hline \end{tabular} \end{table}
Table 6: Minimum white dwarf masses using orbital parameters from the RV fitting and \(M_{\star}\) from the SED fitting. J1314+3818 is marked with an asterisk (*) to indicate that this value is not a lower bound, but the actual value given the inclination constraint from its astrometric solution (Table 5).

Alternatively, a NS may form from a massive WD accreting up to the Chandrasekhar limit through accretion-induced collapse (for a recent review on this topic, see Wang & Liu, 2020). Here, the ejecta masses are expected to be significantly smaller, though quite uncertain, with values ranging from \(1\times 10^{-3}\) to \(0.05\,M_{\odot}\) (Darbha et al., 2010; Fryer et al., 1999). Such ejecta masses could correspond to \(e\lesssim 0.02\), which is consistent with the eccentricities of some of our objects. However, it is difficult to explain how the progenitor WD would have accreted the necessary mass to begin with. Our systems all contain MS star companions which do not have strong winds, so there should be no significant wind accretion. Moreover, our objects are in orbits that are too wide for there to have been MT from the MS star through RLOF. Thus, while accretion-induced collapse could produce NSs in circular orbits, it struggles to do so in our systems, where there are no obvious MT mechanisms. Therefore, we conclude that these alternative scenarios are improbable (though not impossible) and we proceed under the assumption that the unseen companions are WDs.
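The eccentricity estimates quoted above follow directly from Equation 4; for example (masses in \(M_{\odot}\), chosen to match the cases discussed in the text):

```python
def blaauw_ecc(delta_m, m_c, m_2):
    """Post-SN eccentricity from symmetric mass loss with no kick
    (Equation 4, Hills 1983), for an initially circular orbit."""
    return delta_m / (m_c + m_2)

# Core-collapse SN: 8 Msun -> 1.3 Msun NS with a 1 Msun companion:
print(blaauw_ecc(8.0 - 1.3, 1.3, 1.0))  # ~2.9 > 1, i.e. unbound
# Ultra-stripped SN with ~0.3 Msun of ejecta:
print(blaauw_ecc(0.3, 1.3, 1.0))        # ~0.13
# Accretion-induced collapse with <~0.05 Msun of ejecta:
print(blaauw_ecc(0.05, 1.3, 1.0))       # ~0.02
```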
## 5 Comparison to other binary populations

Here we compare the properties of our targets to other related classes of binaries, including other WD+MS PCEBs and WD + millisecond pulsar (MSP) binaries.

### Literature PCEBs

The Sloan Digital Sky Survey (SDSS; Abazajian et al., 2009) detected large numbers of close WD+MS binaries. Rebassa-Mansergas et al. (2007) identified 37 new PCEBs from the SDSS PCEB survey (described in Section 7.3), and, combining this with 25 that were previously known, Zorotovic et al. (2010) compiled a total of 62 PCEB systems. In addition, the "white dwarf binary pathways survey" also identified several PCEBs with AFGK companions using TESS light curves (diamond markers; Hernandez et al., 2021, 2022a,b).

In Figure 4, we plot the minimum orbital separation \(a_{\rm peri}\) against \(M_{\rm WD}\) for these literature PCEBs, the five objects from our sample, and several self-lensing WD+MS binaries discovered by the _Kepler_ survey (SLBs + KOI-3278; Kawahara et al., 2018; Kruse & Agol, 2014). With the exception of J1314+3818, where the precise value of \(M_{\rm WD}\) was obtained using astrometry (Section 4.1), we have plotted \(M_{\rm WD,min}\) for our objects, which is indicated with arrows. The colours of the points represent the mass of the luminous (MS) companion, \(M_{\star}\). The gray dashed line comes from the \(P_{\rm orb}-M_{\rm WD}\) relation derived in Rappaport et al. (1995) for stable MT (with a spread in orbital period of a factor of \(\sim 2.4\)), where \(P_{\rm orb}\) has been converted to separation assuming a \(1\,M_{\odot}\) MS star and a circular (\(e=0\)) orbit (assuming instead a \(0.1\,M_{\odot}\) M dwarf star only shifts this downwards by a small amount). The fact that all of these objects lie below this relation means that they are unlikely to have formed through stable RLOF, with the possible exception of the SLBs, which are not too far below the relation.

The blue line in Figure 4 shows the maximum radius of the WD progenitor, _if it evolved in isolation_. Binaries located below this line must have interacted at some point in their evolution. This approximate relation was obtained by first calculating the progenitor mass \(M_{\rm init}\) using the WD Initial-Final Mass Relation (IFMR) derived in Williams et al. (2009), \(M_{\rm final}=0.339+0.129M_{\rm init}\), and then generating MIST evolutionary tracks (Dotter, 2016; Choi et al., 2016) to identify the maximum radius reached by a star with a given \(M_{\rm init}\). Note that this is a conservative limit, since RLOF would begin before the giant touches the companion.

#### 5.1.1 A population of PCEBs in wide orbits?

IK Peg was previously isolated in its region of the \(P_{\rm orb}-M_{\rm WD}\) parameter space, being in a wider orbit (with a period of 22 days) and hosting a more massive WD (\(\sim 1.2\,M_{\odot}\); Wonnacott et al., 1993) than the vast majority of SDSS PCEBs. Our five targets fall in the same region as IK Peg. Their current orbits are far too tight for the binaries to have escaped interaction when the WD progenitors were red giants or AGB stars, strongly suggesting that these objects are indeed PCEBs. As we show in Section 6, the current orbits can only be understood as an outcome of CEE if additional energy sources (besides liberated orbital energy) helped unbind the common envelope. The SLBs occupy a different isolated region in this space, with normal WD masses but at separations even wider than our systems and IK Peg.
Like our targets, they contain solar-type MS stars, which are more massive than the M dwarfs in the SDSS PCEB sample. Kruse & Agol (2014) initially interpreted KOI-3278 as a "normal" PCEB, but Zorotovic et al. (2014) subsequently showed that the system's wide orbit requires an extra source of energy, beyond orbital energy, to contribute to the CE ejection. The three SLBs identified by Kawahara et al. (2018) have even wider orbits than KOI-3278. Those authors interpreted the systems as having formed through stable MT, but Figure 4 shows that SLBs 2 and 3 fall well below the Rappaport et al. (1995) prediction. Formation through stable MT thus seems tenable only if these WDs have significantly overestimated masses.

We also distinguish the objects discovered by Hernandez et al. (2021, 2022a,b) (plotted with diamond markers) from the SDSS PCEBs, as they host higher-mass (\(\sim 1\,M_{\odot}\)) MS stars. Compared to our objects, these have shorter orbital periods (\(P_{\rm orb}\sim 1-2\) days) and can therefore be explained with just the liberated orbital energy, without needing to invoke additional sources. This tells us that having an intermediate-mass MS star as a companion does not necessarily lead to a wide post-CE orbit.

We note that while Figure 4 may suggest that there are two distinct groups of PCEBs in wide orbits with different WD masses (self-lensing binaries vs. IK Peg analogs), this is not necessarily the case. We remind the reader that here we specifically targeted very massive companions, which would at least partly explain why we did not find any that are less massive. A search for more objects located in currently sparse regions of the plot would be useful.

### Eccentricities and comparison to MSP binaries

In Figure 5, we plot the eccentricity \(e\) against \(P_{\rm orb}\) and compare our objects to MSP + WD binaries. We plot objects primarily from the Australia Telescope National Facility (ATNF) catalogue (Manchester et al., 2005) (taking the version analyzed by Hui et al., 2018) and distinguish those with minimum WD companion masses above and below \(0.45\,M_{\odot}\) (the approximate upper limit to the mass of a He WD). We also plot the theoretical relation derived by Phinney (1992) for MSPs with He WDs formed through stable MT.

MSPs form through "recycling", where an old NS is spun up to short periods by the transfer of mass and angular momentum from a companion (e.g. Alpar et al., 1982; Radhakrishnan & Srinivasan, 1982; Bhattacharya & van den Heuvel, 1991). Tides are expected to almost fully circularize MSP + WD binaries, but a very low orbital eccentricity remains because convection in the WD progenitor produces a time-varying quadrupole moment, leading to perturbations and eccentricity excursions that are larger in longer-period systems, which hosted larger giants (Phinney, 1992). To date, the period-eccentricity relation has mainly been tested with MSP+WD binaries, because their eccentricities can be easily measured with high precision. However, a similar process should operate in MS+WD binaries, if MT occurs over a long enough period for tidal circularization to occur.

Figure 5 shows that in general, the MSP binaries with more massive (CO/ONeMg) WDs tend to have higher eccentricities at fixed period than those with low-mass (He) WDs. The standard interpretation is that the systems with He WDs formed via stable MT, while those with more massive WDs formed through CEE (e.g. van den Heuvel, 1994; Tauris et al., 2012).
Although NS + red giant orbits are expected to be circularized prior to the onset of MT, eccentricity is produced during the dynamical plunge-in phase of CEE (e.g. Ivanova et al., 2013), and there is likely insufficient time for this eccentricity to be fully damped between the end of CEE and the formation of the WD (e.g. Glanz & Perets, 2021). The objects in our sample have periods and eccentricities similar to the longest-period MSP + CO/ONeMg binaries, perhaps pointing to a similar formation history. In particular, the MSP binaries J1727-2946 (Lorimer et al., 2015) and J2045+3633 (Berezina et al., 2017) (circled in green) are located very close to our objects in Figure 5. Both of these systems contain mildly recycled pulsars (with spin periods of 27 and 32 ms) and massive WDs (\(M_{\rm WD}>0.8\,M_{\odot}\)), and they have eccentricities of 0.045 and 0.017 at orbital periods of 40 and 32 days. A common envelope origin has also been proposed for J2045+3633 (McKee et al., 2020). There are also stable MT scenarios that have been proposed to explain the formation of MSP + CO/ONeMg WD binaries (e.g. Tauris et al., 2000), which would likely predict lower eccentricities. We refer readers to Tauris (2011) for a concise overview of this topic.

It is worth mentioning that several systems with low-mass WDs in anomalously eccentric orbits have been discovered, called eccentric MSPs (eMSPs; e.g. Bailes, 2010). These points are circled in red in Figure 5. Interestingly, these eMSPs occupy a narrow range in orbital periods with eccentricities that are comparable to the two eccentric MSP + CO WD binaries (as pointed out by Berezina et al., 2017). This is unexpected, as binaries with He WDs are thought to have formed through distinct evolutionary paths, involving long periods of stable MT in low-mass X-ray binaries (Tauris, 2011). There have been multiple proposed mechanisms to explain the large eccentricities, some of which may be applicable to one or more of the systems just discussed (MSP + He WD, MSP + CO WD, MS + WD PCEB). These include MSPs being directly formed from the accretion-induced collapse of a super-Chandrasekhar mass ONeMg WD (Freire & Tauris, 2014), interaction with a circumbinary disk formed from material lost from the WD during H shell flashes (Antoniadis, 2014), or a circumbinary disk formed from the ejected envelope after a CE phase (Dermine et al., 2013).

Figure 4: Periastron separation, \(a_{\rm peri}\), vs. WD mass, \(M_{\rm WD}\), for a sample of PCEBs from Zorotovic et al. (2010, circle markers) and the five objects from this work (stars; the arrows indicate lower limits). IK Peg is distinguished from the other known PCEBs with a triangle marker as it lies very close to our objects. We also plot the PCEBs from the "white dwarf binary pathways survey" (diamond markers; Hernandez et al., 2021, 2022a,b). Finally, we plot self-lensing binaries (SLBs) discovered by Kawahara et al. (2018) as well as KOI-3278 (Kruse & Agol, 2014, also an SLB), which were all detected by _Kepler_. The colours of the points represent the masses of the luminous MS companions. The dashed gray line shows the prediction for stable MT from Rappaport et al. (1995) and the blue line indicates the maximum radius reached by the WD progenitor for a range of WD masses.
## 6 Feasibility of formation through common envelope evolution

To test whether our targets could have formed via CEE, we ran MESA stellar evolution models (Paxton et al., 2011, 2013, 2015, 2018) of progenitors of the WDs in our systems, evolving intermediate-mass stars up to the asymptotic giant branch (AGB). In the \(\alpha\)-formalism, the CEE ends when the loss of orbital energy from the spiral-in exceeds the binding energy of the envelope \(E_{\rm bind}\), resulting in the envelope being ejected:

\[E_{\rm bind}=\alpha_{\rm CE}\left(-\frac{GM_{\rm WD}M_{\star}}{2a_{f}}+\frac{GM_{i}M_{\star}}{2a_{i}}\right) \tag{5}\]

where \(M_{i}\) is the mass of the WD progenitor, \(a_{i}\) is the initial separation at the onset of CE, \(a_{f}\) is the separation at the end of the CE phase, and \(\alpha_{\rm CE}\) is the fraction of the liberated orbital energy that goes into unbinding the envelope. Thus, \(E_{\rm bind}\) determines the final separation of the system for a given initial separation.

In the simplest case, \(E_{\rm bind}\) is just the gravitational binding energy of the envelope. However, previous works using binary population synthesis (BPS) have found that in this case, no values of \(\alpha_{\rm CE}<1\) can reproduce the relatively wide orbits of IK Peg and KOI-3278 (Davis et al., 2010; Zorotovic et al., 2010, 2014; Parsons et al., 2023), which have comparable separations to our systems. Moreover, it is clear that additional energy exists within the stellar envelope, which can potentially help to unbind it. Here, we consider the inclusion of internal energy, which is defined in MESA as the sum of thermal and recombination energy (Paxton et al., 2018). This makes the binding energy less negative (possibly even positive), corresponding to an envelope that is less bound. Recombination of H and He can occur if there is some process (in this case, the binary interaction through the CE phase) that causes the envelope to expand and cool down, which will release energy (e.g. Ivanova et al., 2013).

From initial-final mass relations of WDs (e.g. Williams et al., 2009; Cummings et al., 2018; El-Badry et al., 2018), we expect the MS progenitor masses of our ultramassive WDs to be in the range of \(6-9\,M_{\odot}\). Thus, we run MESA models of a \(7\,M_{\odot}\) star, following its evolution up to the AGB. Our inlists are based on those of Farmer et al. (2015) (same wind and mixing prescriptions; for more details, we refer readers to Section 2 of Farmer et al., 2015), although we have updated them for the more recent MESA version r22.05.1. We only run non-rotating models. The evolution of the \(7\,M_{\odot}\) model, from pre-main sequence to termination at the tip of the AGB, is shown on the HR diagram in the leftmost panel of Figure 6. In the following, we consider progenitors on the red giant branch (RGB), including the sub-giant branch (SGB), and on the AGB, which are highlighted in blue and red on the diagram, respectively. We define these phases following the convention used in the MIST project (see Section 2.1 of Dotter, 2016). We note that our model terminates before core carbon burning, but as the envelope binding energy becomes positive before this point (see below), our main conclusions should not be strongly affected by this.
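Given \(E_{\rm bind}\) (computed below from the stellar models), Equation 5 can be solved in closed form for \(a_{f}\). A sketch follows, with purely illustrative input numbers rather than values taken from our models:

```python
G = 6.674e-11        # SI
M_SUN = 1.989e30     # kg
AU = 1.496e11        # m

def final_separation(E_bind, M_i, M_wd, M_star, a_i, alpha_ce):
    """Solve Equation 5 for a_f. E_bind in joules (negative for a bound
    envelope), masses in Msun, a_i in AU; returns a_f in AU."""
    Mi, Mwd, Ms = M_i * M_SUN, M_wd * M_SUN, M_star * M_SUN
    denom = G * Mi * Ms / (2 * a_i * AU) - E_bind / alpha_ce
    return G * Mwd * Ms / (2 * denom) / AU

# Illustrative case: a 7 Msun AGB donor whose envelope is loosely bound
# (E_bind = -1e39 J) leaving a 1.3 Msun WD around a 1 Msun star from
# a_i = 4 AU with alpha_CE = 0.3:
print(final_separation(-1e39, 7.0, 1.3, 1.0, 4.0, 0.3))  # ~0.24 AU
```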
At each timestep, we calculate the binding energy of the envelope as a sum of the gravitational (\(E_{\rm grav}\)) and internal (\(E_{\rm int}\)) components:

\[E_{\rm bind}=E_{\rm grav}+E_{\rm int} \tag{6}\]
\[=\int_{M_{\rm core}}^{M_{\rm tot}}\left[-\frac{Gm}{r(m)}+U(m)\right]dm \tag{7}\]

where \(E_{\rm grav}\) and \(E_{\rm int}\) correspond to the first and second parts of the integrand, respectively, \(m\) is the mass enclosed within a radius \(r\), and \(U(m)\) is the internal energy per unit mass. The integral is taken from the He core boundary to the surface of the star. We use the default definition of the He core in MESA, which is where the hydrogen mass fraction \(X_{\rm H}<0.1\) and helium mass fraction \(X_{\rm He}>0.1\). We have tested changing these boundaries to 0.01 and find no significant change. Here, we assume that all of the internal energy contributes to the envelope binding energy. We discuss the consequences of relaxing this assumption in the following section.

Using Equation 5, we can solve for \(a_{f}\) for Roche lobe overflow occurring at different points on the RGB and AGB. We calculate the initial separation \(a_{i}\) using the Eggleton formula (Eggleton, 1983), assuming the giant fills its Roche lobe at the onset of the CEE. The mass of the WD remaining after the envelope ejection, \(M_{\rm WD}\), is assumed to be equal to the core mass of the giant \(M_{\rm core}\). We also assume \(M_{\star}=1\,M_{\odot}\), which is close to the median value for our objects (Table 3).

The predicted \(a_{f}\) is shown for a range of \(a_{i}\) in Figure 6. In the central panel, the gray dashed line marks \(a_{f}=0.15\) AU, which is slightly smaller than the minimum \(a_{\rm peri}\) of our objects at \(\sim 0.18\) AU (Figure 4). The red dashed line marks \(a_{f}=0.01\) AU (\(\sim 2R_{\odot}\)), below which a \(\sim 1\,M_{\odot}\) MS star cannot fit inside the orbit and thus a PCEB would not form (a merger, or perhaps stable MT from the MS star onto the WD, may occur instead). The orange line shows the case where we only consider the gravitational binding energy (\(E_{\rm bind}=E_{\rm grav}\)) and set \(\alpha_{\rm CE}=1\) (i.e. 100% of the orbital energy loss goes into envelope ejection). We see that it never crosses the \(a_{f}=0.15\) AU mark, meaning that even in this optimistic case, there is not enough energy to unbind the envelope and produce orbits as wide as our observed systems. In the remaining three cases, we include the internal energy (\(E_{\rm bind}=E_{\rm grav}+E_{\rm int}\), Equation 6) and let \(\alpha_{\rm CE}=0.3\) (the "standard" value), 0.6, and 0.9. We see that for each case, there is a region where \(a_{f}\) exceeds 0.15 AU. On the right panel, we zoom into this region and find that \(a_{i}\) ranges from \(\sim 3.5\) to \(4.4\) AU across all values of \(\alpha_{\rm CE}\), with a narrower range for lower values of \(\alpha_{\rm CE}\).

Figure 5: Eccentricity vs. orbital period. We plot the sample of MSP + WD binaries mainly from the ATNF catalogue (Manchester et al., 2005, with a few others, compiled by Hui et al., 2018), differentiating between those with minimum WD mass above and below 0.45 \(M_{\odot}\) (orange triangles and blue circles, respectively). We show our objects with magenta stars. The gray line is the theoretical relation for MSP + He WD binaries formed through stable MT derived by Phinney (1992). The points circled in red are eMSPs (e.g., see Stovall et al., 2019, for a description of the five plotted here). We also circle in green two binaries with massive CO WDs with large eccentricities (Lorimer et al., 2015; Berezina et al., 2017).
In Figure 6, we exclude models in regions of parameter space where the \(\alpha\)-formalism does not make clear predictions, namely, those in which \(E_{\rm bind}>0\) (the envelope is unbound when recombination energy is included). This case is likely still relevant for producing wide PCEBs, but it is unclear what the final separation should be: if the MS star does not penetrate deep into the giant's envelope, it is unlikely to trigger the release of much recombination energy. For the model shown in Figure 6, these conditions are reached late in the AGB evolution, corresponding to radii of \(\sim 520-888\,R_{\odot}\) and initial separations of \(\sim 4.4-7.7\) AU.

#### 6.0.1 Efficiency of recombination energy

The calculation above considers two kinds of internal energy: thermal energy and recombination energy. While thermal energy is commonly considered as an "extra" energy source, it is closely related to the gravitational potential energy through the virial theorem, and all the thermal energy should be included in the binding energy by default (e.g. Ivanova et al., 2013). The inclusion of recombination energy is more uncertain. Some works have argued that most of the energy released by recombination will quickly be transported to the photosphere through radiation and/or convection and then radiated away (Sabach et al., 2017; Grichener et al., 2018). Meanwhile, Ivanova (2018) found that such energy transport is inefficient in typical AGB stars and that recombination is in fact a significant source of additional energy. Given this uncertainty, our previous assumption that all of the internal energy contributes to unbinding the envelope may be too optimistic. We can assess the sensitivity of our results to this assumption by splitting the internal energy into two components:

\[E_{\rm bind}=E_{\rm grav}+\alpha_{\rm th}E_{\rm th}+\alpha_{\rm rec}E_{\rm rec} \tag{8}\]

where \(E_{\rm th}\) and \(E_{\rm rec}\) are the thermal and recombination energies, and \(\alpha_{\rm th}\) and \(\alpha_{\rm rec}\) are the respective efficiencies. As described above, we set \(\alpha_{\rm th}=1\).

MESA does not provide a simple way to individually track the thermal and recombination energies. We can, however, approximate the thermal energy using the ideal gas law, which is a reasonable approximation in the envelopes of AGB and RGB stars. In this case, the thermal energy per unit mass is \((3/2)P/\rho\), where \(P\) is the pressure and \(\rho\) is the mass density. We subtract this from the total internal energy output by MESA to get an estimate of the recombination energy and see what value of \(\alpha_{\rm rec}\) would be required to produce wide orbits. As shown in Figure 7, we find that for a canonical value of \(\alpha_{\rm CE}=0.3\), we require \(\alpha_{\rm rec}\gtrsim 0.5\), and for \(\alpha_{\rm CE}=1\), we require \(\alpha_{\rm rec}\gtrsim 0.25\) for \(a_{f}\gtrsim 0.15\) AU. Thus, our models point towards a relatively large fraction of recombination energy being needed to produce wide PCEBs. This is not unexpected, as recombination energy dominates the internal energy of the envelopes of cool stars. For typical stellar compositions, recombination energy dominates over thermal energy for temperatures below \(\approx 2\times 10^{5}\) K (Ivanova et al., 2013). This boundary lies deep in the envelope of our models on the AGB, at roughly 20% of the stars' radii.
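A sketch of this decomposition, assuming profile arrays of the kind MESA outputs (converted to SI and ordered from centre to surface); the \((3/2)P/\rho\) ideal-gas split mirrors the approximation described above:

```python
import numpy as np

def envelope_energies(m, r, u_int, P, rho, m_core):
    """Equations 6-8 evaluated from stellar profile arrays (SI units):
    enclosed mass m, radius r, internal energy per unit mass u_int,
    pressure P, density rho. The thermal part is approximated by the
    ideal-gas value (3/2) P / rho, and the remainder of u_int is
    attributed to recombination."""
    G = 6.674e-11
    env = m >= m_core                   # integrate over the envelope only
    dm = np.gradient(m)[env]            # zone masses (approximate)
    e_grav = np.sum(-G * m[env] / r[env] * dm)
    u_th = 1.5 * P[env] / rho[env]
    e_th = np.sum(u_th * dm)
    e_rec = np.sum((u_int[env] - u_th) * dm)
    return e_grav, e_th, e_rec

# For given efficiencies (alpha_th = 1 throughout this work):
# E_bind = e_grav + 1.0 * e_th + alpha_rec * e_rec
```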
If a large fraction of recombination energy actually escapes, our results would imply that other sources of energy must be invoked to produce wide PCEBs (e.g. jets from the accreting star; Sabach et al., 2017; Moreno Mendez et al., 2017). We defer more detailed calculations and further discussion of this topic to future work. ### Models for lower- and higher-mass giants We performed similar calculations for a \(1\,M_{\odot}\) red giant, which might be a progenitor to a wide PCEB hosting a lower-mass (\(0.5-0.6\,M_{\odot}\)) WD. We evolved a \(1\,M_{\odot}\), solar-metallicity star using inlists from the 1M_pre_ms_to_wd calculation in the MESA test suite. The default wind parameters in that calculation are set so that the mass loss is unusually efficient on the AGB (Blocker_scaling_factor = 0.7). This speeds up the calculation by preventing the star from evolving far up the AGB and encountering thermal pulses, but it terminates the AGB phase unrealistically early. We instead set Blocker_scaling_factor = 0.05 following Farmer et al. (2015). The same plots as in Figure 6 but for the \(1\,M_{\odot}\) model are shown in Figure 8. Even in the case where we consider only the gravitational binding energy with \(\alpha_{\rm CE}=1\), there is a range of initial separations (\(\sim 1.5-4.5\) AU) for which \(a_{f}>0.15\) AU. This only occurs during the thermally pulsing phase of the AGB (TP-AGB), where the envelope becomes very loosely bound. Wide separations can also result for much smaller values of \(\alpha_{\rm CE}\sim 0.1\) if MT starts at the tip of the AGB. This suggests that it is possible to produce PCEBs in wide orbits containing \(0.5-0.6\,M_{\odot}\) WDs (such as the self-lensing binaries) without the need to invoke additional energy sources, but only if the MT begins on the TP-AGB. It should, however, be kept in mind that the TP-AGB phase is a particularly difficult phase to model (e.g. Marigo and Girardi, 2007; Girardi and Marigo, 2007), so this conclusion may depend on the adopted stellar models. With the inclusion of internal energy, wide PCEB orbits are produced for a broad range of initial separations. The full range of initial separations for which the calculations predict \(a_{f}>0.15\) AU is \(\sim 0.9\) to \(4.5\) AU. For \(a_{i}\sim 3-4.5\) AU, the envelope's binding energy is positive when recombination energy is included, so the \(\alpha\) formalism does not straightforwardly predict a final separation. For separations \(a_{i}\lesssim 2.5\) AU, CEE would likely commence on the RGB, preventing the wide PCEB outcome from being realized in practice. Overall, this calculation suggests that efficient envelope ejection can similarly produce wide PCEBs with both low- and high-mass WDs. We also ran models of more massive stars, with initial masses of \(12\) and \(20\,M_{\odot}\), that become red supergiants and will leave behind neutron stars or black holes. We find that the envelopes of these stars are significantly more bound than that of the \(7\,M_{\odot}\) super-AGB star. This implies that it is difficult to form wide BH/NS + solar-type star binaries via CEE. Similar conclusions have been reached by other studies (e.g. Kalogera, 1999; Kiel and Hurley, 2006; Giacobbo and Mapelli, 2018; Fragos et al., 2019; El-Badry et al., 2023). ## 7 Discussion ### Formation through stable MT? Given that the binaries in our sample have wider orbits than traditional PCEBs, it is natural to wonder whether they could have formed through stable MT instead of CEE. We briefly discuss this possibility here.
The onset of dynamically unstable MT is determined by the donor star's adiabatic response to mass loss. In general, a system in which the donor is more massive than the accretor will tend to be more unstable. The critical mass ratio above which mass transfer is unstable, \(q_{\rm crit}\), depends on the stellar structure and evolutionary state of the donor. But in the case of our systems, with an ultramassive WD progenitor of mass \(\sim 6-7\,M_{\odot}\) and an intermediate-mass MS companion of \(\sim 1\,M_{\odot}\), the mass ratio is large enough to exceed even conservative values of \(q_{\rm crit}\sim 3-4\) (donor/accretor; e.g. Hjellming & Webbink, 1987; Ge et al., 2010; Temmink et al., 2023). The non-zero eccentricities of our observed systems also point towards CEE, which is expected to be less efficient at tidal circularization than stable RLOF, and may actually drive small eccentricities during the plunge-in phase or via torques from circumbinary material (e.g. Ivanova et al., 2013). These arguments suggest that stable mass transfer is unlikely to have formed the systems in our sample. We would be remiss here to not mention the "\(\gamma\)"-formalism, another commonly used prescription of CEE. This was originally invoked to model the formation of double CO WD binaries, which were thought to require a widening of the orbit after the first CE phase, which cannot occur in the \(\alpha\)-formalism (Nelemans et al., 2000; Nelemans & Tout, 2005). The parameter \(\gamma\) can be understood as the ratio of the angular momentum lost per mass of ejected material to the average angular momentum per unit mass of the initial binary (Paczynski, 1976). While this formalism can produce the wide orbits seen in our systems, it should be emphasized that it was designed precisely for this purpose and does not fundamentally solve the issues associated with energy conservation, which must still hold. It has also been argued that the \(\gamma\)-formalism does not actually describe CEE - the result of unstable MT - but instead a phase of stable, non-conservative MT. See Section 5 of Ivanova et al. (2013) for further discussion of this formalism. ### Relative frequency of wide and close PCEBs The small number of wide PCEBs discovered so far raises the question of whether they are intrinsically rarer than close PCEBs, or just more difficult to detect. Here we describe the selection biases against wide PCEBs in previous surveys. Given the complex and thus far poorly understood selection function of the _Gaia_ DR3 binary sample, we do not attempt to infer the space density of wide PCEBs here. Instead, we compare the distances to various samples of PCEBs as a rough diagnostic of their relative frequencies. We cross-match the sample of literature PCEBs compiled by Zorotovic et al. (2010, also shown in our Figure 4) to _Gaia_ DR3 to obtain their parallaxes. We find that the median distance to SDSS PCEBs within that sample is 328 pc, which is significantly farther than the median distance of 108 pc for non-SDSS PCEBs in the sample. This likely reflects the fact that most of the non-SDSS objects were discovered serendipitously from all-sky studies of bright stars, in many cases having been recognized as binaries via photometric variability. In contrast, the SDSS objects were discovered spectroscopically from a parent sample that is deep but only observed a small fraction of all stars. Our targets have distances ranging from 80 to 510 pc, with a median of 308 pc (Table 1).
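To make the distance comparison concrete: for a roughly uniform space density, the number of systems a survey can reach scales as the cube of the distance it probes, so median distances translate into effective search volumes. The following back-of-the-envelope check is our own illustrative arithmetic using the medians quoted above, not a quantity reported by the surveys:

```python
# Distance as a rough frequency diagnostic: detectable counts scale with
# survey volume, i.e. distance cubed, for a uniform space density.
d_sdss, d_bright = 328.0, 108.0           # median distances in pc (see text)
volume_ratio = (d_sdss / d_bright) ** 3
print(f"SDSS PCEBs probe ~{volume_ratio:.0f}x the volume of the bright samples")
# -> ~28x; the same cube law gives the "27 times smaller search volume"
#    quoted below for an object 3 times nearer.
```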
At 80 pc, J1314+3818 is nearer than any of the SDSS PCEBs. IK Peg, another wide PCEB, is at 46 pc, which is nearer than the majority of the close PCEBs in the literature. Figure 6: _Left_: HR diagram showing the evolution of a \(7\,M_{\odot}\) star starting from the pre-MS to the AGB. The blue sections indicate what we refer to as the RGB (but which also includes the SGB) and the end sections represent the AGB. _Center_: Plots of the final separation \(a_{f}\) (i.e. birth period at the end of CEE) over a range of initial separations \(a_{i}\). \(a_{i}\) is taken to be the orbital semi-major axis when the giant (WD progenitor) fills its Roche lobe. We mark \(a_{f}=0.01\) AU \(\sim 2R_{\odot}\) (red dashed line) below which the MS star would not fit in the orbit and a PCEB cannot form. The orange dashed line is the case where only the gravitational binding energy is considered and \(\alpha_{\rm CE}=1\). We see that in this case, no values of \(a_{i}\) result in \(a_{f}>0.15\) AU (gray dashed line), which is approximately the minimum separation of our objects. The other three lines are the cases where internal energy is added to the binding energy for \(\alpha_{\rm CE}=0.3\), 0.6, and 0.9. We see that these lines lie above the dashed line for some range of \(a_{i}\). _Right_: Zoom in on the region where \(a_{f}>0.15\) AU. We see that overall, \(a_{i}\sim 3.5-4.4\) AU can result in the wide orbital separations of our systems. Figure 7: The same plot as in the central panel of Figure 6 but separating out the thermal and recombination components of the internal energy \(E_{\rm int}\). As in Figure 6, we plot the case where only \(E_{\rm grav}\) is considered with \(\alpha_{\rm CE}=1\) (orange dashed line). The pink and green lines show cases where \(\alpha_{\rm CE}=0.3\) and 1, and \(\alpha_{\rm rec}=0.5\) and 0.25 respectively, where \(a_{f}\) just exceeds the 0.15 AU mark. A particularly interesting case to consider is the binary G 203-47 (Delfosse et al., 1999). That system contains a \(\approx 0.27\,M_{\odot}\) MS star orbiting a dark companion that is almost certainly a WD in a period of 14.7 days, similar to the wide PCEBs studied here. At a distance of only \(7.5\,\)pc, G 203-47 is one of the 10 nearest known WDs, and probably the nearest PCEB! It is 3 times nearer - corresponding to a 27 times smaller search volume - than the nearest short-period PCEB, RR Cae, but has been largely overlooked by works attempting to constrain CE physics with PCEBs. While it is dangerous to draw population-level conclusions from a single object, this strongly suggests that wide PCEBs are quite common. ### Comparison to other surveys While the SDSS survey for PCEBs (Rebassa-Mansergas et al., 2007) was highly effective at finding WD + M dwarf PCEBs in tight orbits (\(\lesssim 1\) day), it was biased against finding systems like the ones presented here. This is because the sample was selected based on RV variations detected in low-resolution BOSS spectra, which are more easily detected in close binaries with short orbital periods. Furthermore, the SDSS PCEB survey identified candidates by searching for sources with composite spectra in which contributions of both the WD and the MS companion were detectable. This leads to a strong bias in favor of low-mass (M dwarf) main-sequence companions. The White Dwarf Binary Pathways Survey conducted a search for WD + AFGK PCEBs.
They first selected AFGK MS stars from the RAVE and LAMOST surveys, and then cross-matched them to GALEX, identifying objects with UV excess as candidates for having a WD companion (Parsons et al., 2016). From these WD+MS binary candidates, they selected PCEB candidates as those binaries with RV variations detectable in their low-resolution multi-epoch spectra, mainly from LAMOST (Rebassa-Mansergas et al., 2017). This also leads to a strong bias in favor of short periods. The White Dwarf Binary Pathways Survey did find three binaries with orbital periods of several weeks, but Lagos et al. (2022) concluded that they were likely contaminants. The binaries in question had significant eccentricities (\(e=0.266-0.497\); see Table 1 of Lagos et al., 2022), atypical of PCEBs. Based on HST spectra and high contrast imaging, they concluded that at least two are hierarchical triples in which the WD is a distant tertiary. Our objects would likely not have been found by their search because they have negligible UV excess. ## 8 Conclusions We presented five post-common envelope binaries (PCEBs) containing ultra-massive WD candidates and intermediate mass MS stars with long orbital periods (18 - 49 days). These were discovered as part of a broader search for compact object binaries from the _Gaia_ DR3 NSS catalog. Previous surveys identified PCEBs using a combination of RV variability, photometric variability, and UV excess, which made them biased towards finding PCEBs with M dwarfs in close orbits. Systems like the ones presented here pose a potential challenge to simplified models of common envelope evolution (CEE), as their formation requires loosely bound donor envelopes which can be quickly ejected, leaving them in wide orbits with non-zero eccentricities. Our main findings are as follows: 1. _Nature of the unseen companions_: The companions are dark objects with masses of \(1.2-1.4\,M_{\odot}\) - more massive than the solar-type stars orbiting them. The simplest explanation is that they are WDs. We consider two possible alternatives: (1) a tight binary containing two \(\sim 0.65\,M_{\odot}\) MS stars. In the most pessimistic case, such an inner binary could escape detection in 4 of our 5 targets. However, the near-circular orbits we observe - which would be a natural consequence of tidal circularization if the companions are WDs - are not expected in this hierarchical triple scenario. No tight hierarchical triples with outer MS stars and circular outer orbits are known, and very few triples are known with outer periods below 1000 days. (2) a neutron star (NS). This is also unlikely due to the circular orbits of our systems, as NSs are expected to be born with natal kicks that send them to highly eccentric orbits. Given these considerations, we proceed under the assumption that the unseen companions are WDs. 2. _WD masses_: From RVs (Section 4), we measure orbital solutions and mass functions. Combining this with the masses of the luminous components obtained from SED fitting (Section 3.6), we obtain minimum WD masses. These range from \(1.244^{+0.027}_{-0.027}\) to \(1.418^{+0.033}_{-0.033}\,M_{\odot}\), all consistent with masses just below the Chandrasekhar limit. One object, J1314+3818, has a Gaia astrometric solution, which we fit simultaneously with the RVs to constrain the inclination. For this object, we obtain a precise mass of \(1.324\pm 0.037\,M_{\odot}\). Figure 8: Same as Figure 6 but for a \(1\,M_{\odot}\) model. We omit the \(\alpha_{\rm CE}=0.6\) case to avoid over-crowding.
Even in the case where just \(E_{\rm grav}\) is considered (orange), there is a range of initial separations for which wide PCEBs can be produced with \(\alpha_{\rm CE}=1\). When \(E_{\rm int}\) is included, there is a wide range of initial separations for which wide PCEBs can be produced, even with \(\alpha_{\rm CE}<1\) (green and purple). The models producing wide PCEBs are those in which MT begins during the TP-AGB phase, when the envelope is very weakly bound. Assuming the dark companions are in fact WDs, they are among the most massive WDs known. 3. _Comparison to other PCEBs_: Our newly discovered systems have longer periods and host more massive WDs and MS stars than most known PCEBs (Figure 4). The only similar system previously known is IK Peg. However, it is important to note that the selection effects of most previous searches strongly favored short-period PCEBs with low-mass MS stars. 4. _Comparison to MSP + WD binaries_: We find that our objects have large eccentricities relative to the bulk of the MSP + WD binaries (Figure 5). However, they have similar eccentricities at fixed orbital period to MSP + CO WD binaries, which likely also formed through CEE. 5. _Evolutionary models of the WD progenitors_: We ran MESA models of a \(7\,M_{\odot}\) MS star up the RGB and AGB (Section 6), following its internal energy (thermal + recombination) and calculating the expected final separation according to the \(\alpha\) formalism. We find that there is no point in the star's evolution at which final separations comparable to those of our objects (\(\gtrsim 0.15\) AU) are predicted if internal energy does not aid in unbinding the envelope. In the case where internal energy is included, there is a range of initial separations (\(\sim 3.5-4.5\) AU) for which final separations exceed \(\sim 0.15\) AU. For initial separations wider than 4.5 AU, the binding energy of the envelope is positive when CEE begins, such that a range of (wide) final separations is plausible. 6. _Space density_: We compare distances of literature PCEBs to those of our objects and a few others in wider orbits. At \(\sim 80\) pc, J1314+3818 is nearer than any of the SDSS PCEBs (Zorotovic et al., 2010). The median distance of objects in our sample is comparable to that of all literature PCEBs. The nearest known PCEB, G 203-47, has a period of 15 days, much longer than the \(P<1\) day periods typical of PCEBs in the literature. A detailed estimate of the space density of wide PCEBs will have to wait for a better characterized _Gaia_ selection function, but these early discoveries suggest that it is comparable to or larger than that of close PCEBs. ## Acknowledgements We would like to thank Matthias Schreiber and Monica Zorotovic for providing feedback that helped to improve this manuscript, and Thomas Tauris for enlightening discussions. We also thank Thomas Masseron and Keith Hawkins for assistance with BACCHUS. Finally, we thank Hans-Walter Rix and Eleonora Zari for observing some of our objects under their FEROS programs. N.Y. and K.E. were supported in part by NSF grant AST-2307232. This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile. This work has made use of the VALD database, operated at Uppsala University, the Institute of Astronomy RAS in Moscow, and the University of Vienna. ## Data Availability The data underlying this article are available upon reasonable request to the corresponding author.
2301.00294
Modular Hamiltonian of the scalar field in the semi infinite line: dimensional reduction for spherically symmetric regions
We focus our attention on the one dimensional scalar theories that result from dimensionally reducing the free scalar field theory in arbitrary d dimensions. As is well known, after integrating out the angular coordinates, the free scalar theory can be expressed as an infinite sum of theories living in the semi-infinite line, labeled by the angular modes $\{l, \vec{m}\}$. We show that their modular Hamiltonian in an interval attached to the origin is, in turn, the one obtained from the dimensional reduction of the modular Hamiltonian of the conformal parent theory in a sphere. Remarkably, this is a local expression in the energy density, as happens in the conformal case, although the resulting one dimensional theories are clearly not conformal. We support this result by analyzing the symmetries of these theories, which turn out to be a portion of the original conformal group, and proving that the reduced modular Hamiltonian is in fact the operator generating the modular flow in the interval. By studying the spectrum of these modular Hamiltonians, we also provide an analytic expression for the associated entanglement entropy. Finally, extending the radial regularization scheme originally introduced by Srednicki, we sum over the angular modes to successfully recover the conformal anomaly in the entropy logarithmic coefficient in even dimensions, as well as the universal constant $F$ term in $d = 3$.
Marina Huerta, Guido van der Velde
2022-12-31T21:30:19Z
http://arxiv.org/abs/2301.00294v3
Modular Hamiltonian of the scalar field in the semi infinite line: dimensional reduction for spherically symmetric regions ###### Abstract We focus our attention on the one dimensional scalar theories that result from dimensionally reducing the free scalar field theory in arbitrary \(d\) dimensions. As is well known, after integrating out the angular coordinates, the free scalar theory can be expressed as an infinite sum of theories living in the semi-infinite line, labeled by the angular modes \(\{\ell,\vec{m}\}\). We show that their modular Hamiltonian in an interval attached to the origin is, in turn, the one obtained from the dimensional reduction of the modular Hamiltonian of the conformal parent theory in a sphere. Remarkably, this is a local expression in the energy density, as happens in the conformal case, although the resulting one-dimensional theories are clearly not conformal. We support this result by analyzing the symmetries of these theories, which turn out to be a portion of the original conformal group, and proving that the reduced modular Hamiltonian is in fact the operator generating the modular flow in the interval. By studying the spectrum of these modular Hamiltonians, we also provide an analytic expression for the associated entanglement entropy. Finally, extending the radial regularization scheme originally introduced by Srednicki, we sum over the angular modes to successfully recover the conformal anomaly in the entropy logarithmic coefficient in even dimensions, as well as the universal constant \(F\) term in \(d=3\). ## 1 Introduction: Modular flow and modular Hamiltonian The successful application of information theory tools to quantum field theory (QFT) along the last decades has given place to the solid current consensus that these tools must be definitively incorporated into the usual QFT machinery. In this context, the study of quantities related to different information measures for quantum field theories gains relevance and, with them, the study of states reduced to a region. These states are described by reduced (local) density matrices that lie at the core of the definition of all the information measures referenced to spatial regions \(R\). From the algebraic quantum field theory perspective [1], each region \(R\) is attached to the algebra of the degrees of freedom localized in \(R\). The reduced state to a local algebra of operators in a region can be expressed, in presence of a cutoff, as a density matrix \[\rho=\frac{e^{-K}}{\mathrm{tr}e^{-K}}\,, \tag{1}\] where the exponent \(K\) is the modular Hamiltonian operator. This convenient way of encoding the reduced state admits an interesting interpretation of the entanglement entropy as the thermodynamic entropy of a system in equilibrium at temperature \(1\), but with respect to the modular Hamiltonian \(K\). Moreover, there is a notion of time associated to the state through the modular Hamiltonian, whose evolution is implemented by the unitary operator in the algebra \[U(\tau)=\rho^{i\tau}\sim e^{-i\tau K}\,. \tag{2}\] The induced evolution of operators \(O(\tau)=U(\tau)OU(-\tau)\) is called the modular flow. This is a purely quantum transformation, which becomes trivial in the classical limit. Historically, the earliest recognition of the structural importance of modular flows can be found in the algebraic formulation of QFT [2, 3] and more recently, in the framework of the study of different information measures and statistical properties of reduced states in QFT [4, 5, 6].
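In finite dimensions these definitions are completely concrete. The following toy sketch (a hypothetical two-qubit state, chosen only to make the objects in (1) and (2) explicit, not an example from the references) builds the reduced density matrix of one qubit, extracts \(K\), and verifies that the modular flow is unitary and leaves the state invariant:

```python
import numpy as np
from scipy.linalg import logm, expm

theta = 0.3
psi = np.array([np.cos(theta), 0.0, 0.0, np.sin(theta)])   # entangled pure state
rho_full = np.outer(psi, psi.conj())

# reduce to the first qubit: trace out the second tensor factor
rho = rho_full.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

K = -logm(rho)                    # modular Hamiltonian: rho = exp(-K)/tr exp(-K)
                                  # (here tr exp(-K) = 1 by construction)
tau = 0.7
U = expm(-1j * tau * K)           # modular flow unitary, U(tau) = rho^{i tau}
assert np.allclose(U @ U.conj().T, np.eye(2))       # unitarity
assert np.allclose(U @ rho @ U.conj().T, rho)       # the state is invariant

# entanglement entropy as "thermal" entropy at temperature 1 w.r.t. K:
p = np.linalg.eigvalsh(rho)
S = -np.sum(p * np.log(p))
assert np.isclose(S, np.real(np.trace(rho @ K)))    # S = <K>
```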
The modular Hamiltonian is a fundamental constitutive part of the relative entropy and plays an essential role in the formulation of entropy bounds and the proof of several energy conditions [7, 8, 9, 10, 11, 12]. Besides, exploiting the fact that entanglement and relative entropy have well established geometric duals for holographic QFT [13, 14, 15], modular Hamiltonians have also been used to clarify localization properties of degrees of freedom in quantum gravity [16, 17, 18]. Currently, our knowledge of the explicit form of modular Hamiltonians reduces mostly to some examples where the modular flow is local, and it is primarily determined by spacetime symmetries. This is the case for the Rindler wedge \(x^{1}>|t|\) in Minkowski space and any QFT. Choosing the causal region to be the half spatial plane \(x^{1}>0\) at \(t=0\), the rotational symmetry of the euclidean theory allows us to express the reduced density matrix corresponding to the vacuum state in terms of the energy density \(T_{00}\) \[\rho=k\,e^{-2\pi\int_{x^{1}>0}d^{d-1}x\,x^{1}T_{00}(x)}\,. \tag{3}\] The above expression manifestly reveals a non trivial connection between entanglement in vacuum and energy density. Moreover, in equation (3), the exponent corresponds to the modular Hamiltonian for half space, which turns out to be an integral of a local operator. \(K\) is in fact \(2\pi\) times the generator of boosts restricted to act only on the right Rindler wedge \[K=2\pi\int_{x^{1}>0}d^{d-1}x\,x^{1}T_{00}(x)\,. \tag{4}\] The modular flow \(\rho^{i\tau}\) moves operators locally following the orbits of the one parameter group of boost transformations. On the other hand, it is interesting to note that from equation (3), the vacuum state in half space corresponds to a thermal state of inverse temperature \(2\pi\) with respect to the boost operator. This is directly connected to the Unruh effect [19], according to which accelerated observers see the vacuum as a thermally excited state. For an observer following a trajectory given by a boost orbit, the state looks like a thermal state with respect to the proper time \(\tilde{\tau}\). For these trajectories, the proper time and the boost parameter \(s\) are proportional, \(s=a\tilde{\tau}\), with \(a\) the proper acceleration of the observer, constant along boost orbits. In turn, this implies there is a relation \(K=\tilde{H}/a\) between the boost operator and the proper time Hamiltonian \(\tilde{H}\) of the accelerated observer. For such an observer there is a thermal bath at (proper time) temperature \(T=\frac{a}{2\pi}\). The other very well known example where symmetries again facilitate the derivation of the exact modular Hamiltonian is the case of conformal field theories (CFT) for spheres in any dimensions. For a CFT, Poincare symmetries are enlarged to the conformal group. These theories are characterized by having a traceless, symmetric and conserved stress tensor. This enlarges the number of conserved currents related to space-time symmetries, which in general can be written as \[j_{\mu}=a^{\nu}\,T_{\nu\mu}+b^{\alpha\nu}\,x_{\alpha}\,T_{\nu\mu}+c\,x^{\nu}\,T_{\nu\mu}+d_{\alpha}\,(x^{2}g^{\alpha\nu}-2\,x^{\alpha}x^{\nu})\,T_{\nu\mu}\,. \tag{5}\] The corresponding conserved charges depend on parameters \(a^{\mu}\), determining translations, the antisymmetric \(b^{\mu\nu}\), giving Lorentz transformations, \(c\), related to dilatations, and \(d^{\mu}\), for the so called special conformal transformations.
Since there is a conformal transformation that maps the Rindler wedge to causal regions with spherical boundary, and the same transformation leaves the vacuum invariant for a CFT, then, the modular Hamiltonian is just the transformed Rindler modular Hamiltonian. It is easy to get \[K=2\pi\int_{|\vec{x}|<R}d^{d-1}x\,\frac{R^{2}-r^{2}}{2R}\,T_{00}(\vec{x})\,. \tag{6}\] In this example, \(K\) is again local and proportional to \(T_{00}\), with a proportionality weight function \(\beta(r)\equiv\frac{R^{2}-r^{2}}{2R}\). Except for the two examples discussed above, the vacuum of a QFT in the Rindler wedge and the vacuum of a CFT in the sphere, there are only a few other known modular Hamiltonians, either local or not. The local ones are in general derived by exploiting symmetry transformations that leave the state invariant. This is for example the case of the modular Hamiltonian for CFTs in \(1+1\) dimensions in presence of a global or local quench [20, 21, 22, 23, 24, 25, 26]. However, on general grounds, from the point of view of quantum information we do not expect locality to hold. In general, \(K\) will be given by a non local and non linear combination of the field operators at different positions inside the region. An example of a non local modular Hamiltonian which has been explicitly computed is the one for the vacuum state of the free massless fermion in \(d=2\) for several disjoint intervals [27, 28, 29]. In this case \(K\) has a local term proportional to the energy density and an additional non local part given by a quadratic expression in the fermion field that connects in a very particular way points located in different intervals. In this paper we calculate the modular Hamiltonian for the vacuum state of non conformal \((1+1)\) dimensional theories in the interval \((0,R)\). These theories are defined in the semi infinite line, and result from the dimensional reduction of the \(d\) dimensional free massless scalar. Our strategy is to calculate the modular Hamiltonian of the reduced system by exploiting the known modular Hamiltonian of CFTs in spheres in any dimension. The free massless scalar in \(d\) space time dimensions can be dimensionally reduced to a sum of one dimensional theories, one for each angular mode. Since the reduction is obtained by integrating over the angular coordinates, these systems live in the semi infinite line. From the algebraic point of view, this is convenient when studying algebras assigned to spherical regions to calculate, for example, the entanglement entropy. In these coordinates, the local algebra assigned to the region can be easily written in terms of fields \(\phi(r,\Omega)\) with nice localization properties. For example, points in the semi infinite line correspond to shells in the original space and intervals connected to the origin, to \(d\)-spheres (see figure 1). Concretely, in the radial coordinate, the canonical Hamiltonian for the massless free scalar decomposes as a sum over angular modes \(H_{\ell\vec{m}}\) \[H=\sum_{\ell\vec{m}}H_{\ell\vec{m}}\,. \tag{7}\] with \((\ell\vec{m})\) the angular mode label. In fact, there is a family of one dimensional Hamiltonians \(H_{\ell\vec{m}}\) for each dimension. In turn, the same decomposition occurs for the modular Hamiltonian (6) \[K=\sum_{\ell\vec{m}}K_{\ell\vec{m}}\,.
\tag{8}\] Taking into account that the vacuum state for a system composed of independent subsystems is a product of density matrices, here \(\rho=\otimes\rho_{\ell\vec{m}}\), it is immediate to identify the modular Hamiltonian mode \(K_{\ell\vec{m}}\) with the modular Hamiltonian of the one dimensional reduced system \(H_{\ell\vec{m}}\). The Hamiltonian \(H_{\ell\vec{m}}\) does not correspond to a conformal relativistic theory due to an extra quadratic term proportional to \(1/r^{2}\), whose proportionality constant depends on the dimension of the original problem and the angular mode \(\ell\)1. Surprisingly, we find that \(K_{\ell\vec{m}}\) is still local and proportional to the energy density \(T_{00}\)2, with the same weight function \(\beta(r)\) that characterizes the modular Hamiltonian for CFTs in spheres. Our analytic results coincide with the suggested continuum limit of the entanglement Hamiltonian of blocks of consecutive sites in massless harmonic chains, recently studied in [31]. Footnote 1: For a different context in which a free scalar, albeit conformal, is obtained from dimensional reduction, see [30]. Footnote 2: Since translational invariance is lost, there is no conserved energy momentum tensor. The notation for the energy density is just a matter of convention. This article is organized as follows. In section 2 we explicitly carry out the dimensional reduction. We write the scalar field in a basis of hyper-spherical harmonics, and after integrating out the angular coordinates we are left with a Hamiltonian for the reduced systems \(H_{\ell\vec{m}}\) of the form \[H_{\ell\vec{m}}=\frac{1}{2}\int dr\left[\widetilde{\pi}_{\ell\vec{m}}^{2}+(\partial_{r}\widetilde{\phi}_{\ell\vec{m}})^{2}+\frac{\mu_{d}(\ell)}{r^{2}}\widetilde{\phi}_{\ell\vec{m}}^{2}\right], \tag{9}\] with \[\mu_{d}(\ell)=\frac{(d-4)(d-2)}{4}+\ell(\ell+d-3). \tag{10}\] In section 3 the same procedure is followed to find the modular Hamiltonian \[K_{\ell,\vec{m}}=2\pi\int_{|\vec{x}|<R}dr\,\frac{R^{2}-r^{2}}{2R}\,T_{00}^{\ell,\vec{m}}(\vec{x})\,. \tag{11}\] In some way, the reduced theory, manifestly invariant under dilatations but non conformal, keeps the _memory_ of the conformal symmetry of the parent \(d\)-dimensional theory [32, 33], with the same local modular Hamiltonian as that representing the vacuum of a CFT in a sphere. Figure 1: The sphere of radius \(R\) corresponds to intervals of length \(R\) with one edge in the origin in the radial semi infinite line. We delve into this in section 4, where we show that the reduced theories preserve an \(SL(2,\mathbb{R})\) symmetry, and that the modular transformation belongs to this subgroup. The modular Hamiltonian (11) written as a Noether charge can be correctly interpreted as the local operator implementing the modular flow. In section 5 we solve the spectrum of the modular Hamiltonian (11) and compute the entanglement entropy in a segment connected to the origin. We find the analytic expression \[S(\ell,d)=\frac{1}{6}\log\frac{R}{\epsilon}-\frac{i\pi}{2}\int_{0}^{\infty}ds\frac{s}{\sinh^{2}(\pi s)}\log\bigg{(}\frac{4^{is}\Gamma\left[is\right]\Gamma\left[-1+d/2+\ell-is\right]}{\Gamma\left[-is\right]\Gamma\left[-1+d/2+\ell+is\right]}\bigg{)}, \tag{12}\] which is logarithmically divergent, with coefficient \(1/6\) as expected for \((1+1)\) theories, and has a constant term that depends both on the mode \(\ell\) and the space time dimension \(d\) of the original theory.
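The mode- and dimension-dependent part of this constant term (isolated as \(f(\ell,d)\) in equation (81) of section 5) is straightforward to evaluate by numerical quadrature. A minimal sketch, assuming scipy is available; the only manipulation is \(\log\left(\Gamma[\nu-is]/\Gamma[\nu+is]\right)=-2i\,\mathrm{Im}\log\Gamma(\nu+is)\), which follows from \(\Gamma(\bar{z})=\overline{\Gamma(z)}\) and renders the integrand real:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import loggamma

def f_const(ell, d):
    """f(ell, d) = -pi * int_0^inf ds  s * Im[log Gamma(nu + i s)] / sinh(pi s)^2,
    with nu = ell + d/2 - 1 (see equation (81))."""
    nu = ell + d / 2.0 - 1.0
    integrand = lambda s: -np.pi * s * loggamma(nu + 1j * s).imag \
                          / np.sinh(np.pi * s) ** 2
    # the integrand is finite at s = 0 and decays like s^2 exp(-2 pi s),
    # so a finite upper cutoff is harmless
    val, _ = quad(integrand, 1e-12, 30.0, limit=200)
    return val

# Expected behavior (cf. section 5): f_const(ell, 3) approaches -log(ell)/6
# at large ell, and f_const(0, 3) should be close to 0.278435 (equation (104)).
```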
Although the above integral cannot in general be solved analytically, we make some useful approximations to extract relevant information out of it. Moreover, by summing over \(\ell\) we are able to recover the conformal anomaly in the logarithmic coefficient for the free scalar field in even dimensions, as well as the constant universal \(F\) term in \(d=3\). In doing the sum over the angular modes \(\ell\), we introduce a novel regularization implemented by a damping exponential \(\exp[-\ell\epsilon/R]\), with the same cutoff \(\epsilon\) that regularizes the radial coordinate \(r\). This procedure generalizes the radial regularization scheme introduced by Srednicki in [34], where it is explicitly stated that for \(d\geqslant 4\) regularization by a radial lattice turns out to be insufficient and the sum over partial waves does not converge. We end the discussion with some concluding remarks. ## 2 Spherical coordinates The free scalar action in spherical coordinates reads \[S=\frac{1}{2}\int dtdrr^{d-2}d\Omega\left[-(\partial_{0}\phi)^{2}+(\partial_{ r}\phi)^{2}-\frac{\phi}{r^{2}}\Delta_{S^{d-2}}\phi\right]. \tag{13}\] With the aim of reducing the above to a single integral in the radial direction, we Fourier transform the scalar field in the angular coordinates, using the real hyper-spherical harmonics as basis functions, \[\phi(\vec{r})=\sum_{\ell m_{1}\ldots m_{d-3}}\phi_{\ell m_{1}\ldots m_{d-3}}( r)Y_{\ell}^{m_{1}\ldots m_{d-3}}(\hat{r}), \tag{14}\] with \[\Delta_{S^{d-2}}Y_{\ell}^{m_{1}\ldots m_{d-3}}(\hat{r})=-\ell(\ell+d-3)Y_{ \ell}^{m_{1}\ldots m_{d-3}}(\hat{r}), \tag{15}\] \[\int_{S^{d-2}}d\Omega Y_{\ell}^{m_{1}\ldots m_{d-3}}(\hat{r})Y_{\ell^{\prime} }^{m_{1}^{\prime}\ldots m_{d-3}^{\prime}}(\hat{r})=\delta_{\ell\ell^{\prime} }\delta_{m_{1}m_{1}^{\prime}}...\delta_{m_{d-3}m_{d-3}^{\prime}}. \tag{16}\] After integrating the angular coordinates, we are left with \[S=\frac{1}{2}\sum_{\ell\vec{m}}\int dtdrr^{d-2}\left[-(\partial_{0}\phi_{\ell \vec{m}})^{2}+(\partial_{r}\phi_{\ell\vec{m}})^{2}+\frac{\ell(\ell+d-3)}{r^{ 2}}\phi_{\ell\vec{m}}^{2}\right]. \tag{17}\] However, the theory looks simpler when defined in terms of the rescaled field \(\widetilde{\phi}_{\ell\vec{m}}=r^{\frac{d-2}{2}}\phi_{\ell\vec{m}}\), whose canonically conjugated momentum is \(\widetilde{\pi}_{\ell\vec{m}}\equiv\partial_{0}\widetilde{\phi}_{\ell\vec{m}}\), \[S=\frac{1}{2}\sum_{\ell\vec{m}}\int dtdr\left[-(\partial_{0}\widetilde{\phi}_ {\ell\vec{m}})^{2}+r^{d-2}\left(\partial_{r}\left(\frac{\widetilde{\phi}_{ \ell\vec{m}}}{r^{\frac{d-2}{2}}}\right)\right)^{2}+\frac{\ell(\ell+d-3)}{r^{ 2}}\widetilde{\phi}_{\ell\vec{m}}^{2}\right]. \tag{18}\] Functional variation with respect to the field leads to the equation of motion. Nevertheless, in order for the variational problem to be well posed we should impose specific boundary conditions at \(r=0\). 
In fact, \[\begin{split}\delta S=\sum_{\ell\vec{m}}&\left\{\int dtdr \left[\partial_{0}^{2}\widetilde{\phi}_{\ell\vec{m}}-\frac{1}{r^{\frac{d-2}{2} }}\partial_{r}\left(r^{d-2}\partial_{r}\left(\frac{\widetilde{\phi}_{\ell\vec{ m}}}{r^{\frac{d-2}{2}}}\right)\right)+\frac{\ell(\ell+d-3)}{r^{2}}\widetilde{ \phi}_{\ell\vec{m}}\right]\delta\widetilde{\phi}_{\ell\vec{m}}\\ &+\int dt\left[r^{\frac{d-2}{2}}\partial_{r}\left(\frac{ \widetilde{\phi}_{\ell\vec{m}}}{r^{\frac{d-2}{2}}}\right)\delta\widetilde{\phi }_{\ell\vec{m}}\right]\right|_{0}^{\infty}\right\}\,,\end{split} \tag{19}\] which requires either \(\delta\widetilde{\phi}_{\ell\vec{m}}(r=0,t)=0\) (Dirichlet boundary conditions) or \(r^{\frac{d-2}{2}}\partial_{r}\left(\frac{\widetilde{\phi}_{\ell\vec{m}}}{r^{ \frac{d-2}{2}}}\right)\to 0\) (analogous to the ordinary Neumann boundary conditions). In the following we will adopt the former, so the second term in (19) vanishes. The first term in (19) can then be further simplified, which leads to the saddle point \[\partial_{0}^{2}\widetilde{\phi}_{\ell\vec{m}}-\partial_{r}^{2}\widetilde{ \phi}_{\ell\vec{m}}+\frac{\mu_{d}(\ell)}{r^{2}}\widetilde{\phi}_{\ell\vec{m}}=0, \tag{20}\] with \[\mu_{d}(\ell)=\frac{(d-4)(d-2)}{4}+\ell(\ell+d-3). \tag{21}\] This partial differential equation can be solved by separation of variables, and expressed in terms of the original field \(\phi_{\ell\vec{m}}\), the radial eigenfunction problem is a Bessel equation, with solution \(j_{\ell}(r)\equiv\frac{1}{r^{(d-3)/2}}J_{\ell+\frac{d-3}{2}}(kr)\). Therefore, the solution is \[\widetilde{\phi}_{\ell\vec{m}}(t,r)=e^{\pm ikt}\sqrt{kr}J_{\ell+\frac{d-3}{2}}(kr), \tag{22}\] which means that \(\widetilde{\phi}_{\ell\vec{m}}\sim r^{\ell+d/2-1}\) near \(kr\sim 0\), in agreement with the boundary conditions. Having stated that, it is also possible to rewrite the second term in (18) by getting rid of a boundary term3. More explicitly, Footnote 3: We would be able to ignore the boundary term provided \(\widetilde{\phi}_{\ell\vec{m}}^{2}\) went to zero faster than \(r\). This is at least satisfied by the classical configuration (22). \[S=\sum_{\ell\vec{m}}S_{\ell\vec{m}} \tag{23}\] where \[S_{\ell\vec{m}}=\frac{1}{2}\int dtdr\left[-(\partial_{0}\widetilde{\phi}_{ \ell\vec{m}})^{2}+(\partial_{r}\widetilde{\phi}_{\ell\vec{m}})^{2}+\frac{\mu_ {d}(\ell)}{r^{2}}\widetilde{\phi}_{\ell\vec{m}}^{2}\right] \tag{24}\] can be thought of as the action for a free scalar living in the half line, satisfying Dirichlet boundary conditions at the origin. Note that, unlike the theory we started with, this is not a CFT because of the last term. The dimensional reduction of the free scalar Hamiltonian can be made following the same steps. But we can alternatively calculate the conserved charge due to time translations associated directly to the \(1+1\) dimensional action (24), yielding \[H=\frac{1}{2}\sum_{\ell\vec{m}}\int dr\left[\widetilde{\pi}_{\ell\vec{m}}^{2}+(\partial_{r}\widetilde{\phi}_{\ell\vec{m}})^{2}+\frac{\mu_{d}(\ell)}{r^{2}}\widetilde{\phi}_{\ell\vec{m}}^{2}\right] \tag{25}\] Once again we stress that \(\widetilde{\pi}_{\ell\vec{m}}\) and \(\widetilde{\phi}_{\ell\vec{m}}\) satisfy canonical commutation relations \[\left[\widetilde{\phi}_{\ell\vec{m}}(r),\widetilde{\pi}_{\ell^{\prime}\vec{m}^{\prime}}(r^{\prime})\right]=i\delta_{\ell,\ell^{\prime}}\delta_{\vec{m},\vec{m}^{\prime}}\delta(r-r^{\prime}). \tag{26}\]
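As a quick consistency check of (20)-(22), the profile \(\sqrt{kr}\,J_{\ell+\frac{d-3}{2}}(kr)\) can be verified to satisfy \(f''(r)=\left(\mu_{d}(\ell)/r^{2}-k^{2}\right)f(r)\) by finite differences. A minimal sketch, assuming scipy is available; the values of \(d\), \(\ell\) and \(k\) are arbitrary choices:

```python
import numpy as np
from scipy.special import jv

d, ell, k = 4, 2, 1.0
mu = (d - 4) * (d - 2) / 4.0 + ell * (ell + d - 3)

r = np.linspace(0.5, 20.0, 40001)          # stay away from r = 0
h = r[1] - r[0]
f = np.sqrt(k * r) * jv(ell + (d - 3) / 2.0, k * r)   # the profile in (22)

f2 = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / h**2          # second derivative
residual = f2 - (mu / r[1:-1] ** 2 - k**2) * f[1:-1]
print(np.max(np.abs(residual)))            # O(h^2), vanishes in the continuum
```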
## 3 The sphere modular Hamiltonian On the other hand, since the free scalar field theory in \(d\) spacetime dimensions is conformally invariant, when the whole system is in its ground state the modular Hamiltonian of a sphere is \[K=\frac{1}{2}\int_{|x|<R}dx^{d-1}\left(\frac{R^{2}-r^{2}}{2R}\right)T_{00}. \tag{27}\] However, although the stress tensor involved in this expression must be traceless, the canonical stress tensor of the free scalar field is \[T^{(c)}_{\mu\nu}=\partial_{\mu}\phi\partial_{\nu}\phi-\frac{1}{2}\eta_{\mu\nu}(\partial\phi)^{2}, \tag{28}\] which has non vanishing trace \(T^{\mu}_{\mu}=(1-d/2)\left(\partial\phi\right)^{2}=\frac{(1-d/2)}{2}\partial^{2}(\phi^{2})\)4. Hence, it must be improved by adding a conserved symmetric tensor. A possible choice is Footnote 4: This identity holds on-shell. \[T^{\prime}_{\mu\nu}=T^{(c)}_{\mu\nu}-\frac{(1-d/2)}{2(1-d)}(\partial_{\mu}\partial_{\nu}-\eta_{\mu\nu}\partial^{2})\phi^{2}. \tag{29}\] Therefore, \[K=\frac{1}{2}\int_{|x|<R}dx^{d-1}\left(\frac{R^{2}-r^{2}}{2R}\right)\left[(\partial_{0}\phi)^{2}+(\partial_{i}\phi)^{2}-\frac{(1-d/2)}{(1-d)}\partial_{i}^{2}\phi^{2}\right]. \tag{30}\] Using the following identities: \[(\partial_{i}\phi)^{2}=(\partial_{r}\phi)^{2}-\frac{\phi}{r^{2}}\Delta_{S^{d-2}}\phi, \tag{31}\] where we have partially integrated the angular piece, and \[\partial_{i}^{2}\phi^{2}=\frac{1}{r^{d-2}}\partial_{r}\left(r^{d-2}\partial_{r}\phi^{2}\right)+\frac{1}{r^{2}}\Delta_{S^{d-2}}\phi^{2}, \tag{32}\] we arrive at \[\begin{split} K=\frac{1}{2}\sum_{\ell\bar{m}}\int drr^{d-2}\left(\frac{R^{2}-r^{2}}{2R}\right)&\left\{\pi^{2}_{\ell\bar{m}}+(\partial_{r}\phi_{\ell\bar{m}})^{2}+\frac{\ell(\ell+d-3)}{r^{2}}\phi^{2}_{\ell\bar{m}}-\right.\\ &\left.-\frac{(1-d/2)}{(1-d)}\left[\partial_{r}^{2}\phi^{2}_{\ell\bar{m}}+\frac{d-2}{r}\partial_{r}\phi^{2}_{\ell\bar{m}}\right]\right\},\end{split} \tag{33}\] In terms of the canonically conjugated operators, \[\begin{split} K=\frac{1}{2}\sum_{\ell\bar{m}}\int dr\left(\frac{R^{2}-r^{2}}{2R}\right)&\left\{\widetilde{\pi}^{2}_{\ell\bar{m}}+(\partial_{r}\widetilde{\phi}_{\ell\bar{m}})^{2}+\frac{\mu_{d}(\ell)}{r^{2}}\widetilde{\phi}^{2}_{\ell\bar{m}}-\right.\\ &\left.-\frac{d-2}{2(d-1)}\left[3\partial_{r}\left(\frac{\widetilde{\phi}^{2}_{\ell\bar{m}}}{r}\right)+r\partial_{r}^{2}\left(\frac{\widetilde{\phi}^{2}_{\ell\bar{m}}}{r}\right)\right]\right\},\end{split} \tag{34}\] Note that the second line of (34), together with the prefactor \((R^{2}-r^{2})\), is a total derivative in disguise. Hence, \[\begin{split} K=\frac{1}{2}\sum_{\ell\overline{m}}&\left\{\int_{0}^{R}dr\left(\frac{R^{2}-r^{2}}{2R}\right)\left[\widetilde{\pi}_{\ell\overline{m}}^{2}+(\partial_{r}\widetilde{\phi}_{\ell\overline{m}})^{2}+\frac{\mu_{d}(\ell)}{r^{2}}\widetilde{\phi}_{\ell\overline{m}}^{2}\right]-\right.\\ &\left.-\frac{d-2}{2(d-1)}\left[\frac{(R^{2}-r^{2})}{2R}r\partial_{r}\left(\frac{\widetilde{\phi}_{\ell\overline{m}}^{2}}{r}\right)+R\left(\frac{\widetilde{\phi}_{\ell\overline{m}}^{2}}{r}\right)\right]\right|_{0}^{R}\right\},\end{split} \tag{35}\] The boundary terms (coming from the improvement) can be interpreted, in general, as an ambiguity in the definition of the modular Hamiltonian in a region, and safely ignored as explained in [35].
Consequently, the modular Hamiltonian of the \(d\) dimensional free scalar is \[K=\sum_{\ell\overline{m}}K_{\ell\overline{m}}, \tag{36}\] where \[K_{\ell\overline{m}}=\frac{1}{2}\int_{0}^{R}dr\left(\frac{R^{2}-r^{2}}{2R}\right)\left[\widetilde{\pi}_{\ell\overline{m}}^{2}+(\partial_{r}\widetilde{\phi}_{\ell\overline{m}})^{2}+\frac{\mu_{d}(\ell)}{r^{2}}\widetilde{\phi}_{\ell\overline{m}}^{2}\right] \tag{37}\] can be interpreted as the modular Hamiltonian for the vacuum of (24) in a segment. This identification rests on the fact that the theory decomposes into independent sectors, labeled by the angular modes, so the state must factorize as the direct product of the states pertaining to each sector. But, most remarkably, this modular Hamiltonian is still local in the energy density. In other words, (37) agrees with the general expression (27) in spite of the reduced one dimensional theories being non conformal. In the next section we analyse this in detail, paying attention to the symmetries which survive the dimensional reduction. Provided that (37) defines the reduced state of a free field theory, Wick's theorem guarantees it can be expressed in terms of the two-point correlators. In fact, for a Gaussian state with modular Hamiltonian \[K=\int_{V}d^{d-1}x_{1}d^{d-1}x_{2}\left[\phi(x_{1})M(x_{1},x_{2})\phi(x_{2})+\pi(x_{1})N(x_{1},x_{2})\pi(x_{2})\right], \tag{38}\] and correlators \[X=\left\langle\phi(x_{1})\phi(x_{2})\right\rangle,\quad P=\left\langle\pi(x_{1})\pi(x_{2})\right\rangle, \tag{39}\] the following relation must be satisfied [36]5 Footnote 5: Here the product is a bi-local function constructed as \[\left[M.X\right](x_{1},x_{2})\equiv\int_{V}dyM(x_{1},y)X(y,x_{2})\] \[M.X=P.N \tag{40}\] In the case at hand, \[M(r,r^{\prime})=-2\pi\delta(r-r^{\prime})\left[\beta(r)\partial_{r}^{2}+\partial_{r}\beta(r)\partial_{r}-\beta(r)\frac{\mu}{r^{2}}\right] \tag{41}\] and \[N(r,r^{\prime})=2\pi\delta(r-r^{\prime})\beta(r)\,. \tag{42}\] Meanwhile, the explicit form of the correlators for the one dimensional theory (24) is [37] \[X(r_{1},r_{2})=\frac{\Gamma\left[\ell+d/2-1\right]}{2\Gamma\left[\frac{1}{2}\right]\Gamma\left[\ell+\frac{d-1}{2}\right]}\left(\frac{r_{1}}{r_{2}}\right)^{\ell+\frac{d}{2}-1}{}_{2}F_{1}\left[\frac{1}{2},\ell+\frac{d}{2}-1;\ell+\frac{d-1}{2};\left(\frac{r_{1}}{r_{2}}\right)^{2}\right]\,, \tag{43}\] \[\begin{split} P(r_{1},r_{2})&=\frac{2\Gamma(\ell+d/2)}{\Gamma\left[\frac{1}{2}\right]\Gamma\left[\ell+\frac{d-1}{2}\right](r_{2}^{2}-r_{1}^{2})}\left(\frac{r_{1}}{r_{2}}\right)^{\ell+\frac{d}{2}-1}\left(A\ _{2}F_{1}\left[\frac{1}{2},\ell+\frac{d}{2};\ell+\frac{d-1}{2};\left(\frac{r_{1}}{r_{2}}\right)^{2}\right]\right.\\ &\left.+B\ _{2}F_{1}\left[-\frac{1}{2},\ell+\frac{d}{2};\ell+\frac{d-1}{2};\left(\frac{r_{1}}{r_{2}}\right)^{2}\right]\right)\,,\end{split} \tag{44}\] where \(A=\left(\ell+\frac{d-1}{2}\right)(1-r_{1}^{2}/r_{2}^{2})-1\), \(B=1-\ell-d/2\), and \(r_{1}<r_{2}\). Using these concrete expressions it is possible to check that (40) indeed holds. ## 4 Symmetries The locality of (37) suggests the existence of a symmetry with a conserved current such that the modular Hamiltonian is the corresponding Noether charge. This has to be an endomorphism of the causal wedge of the region, and must point in the time direction at \(t=0\). For CFTs in spheres in any dimensions, this is the conformal transformation that maps the spherical boundary onto itself.
For an interval \((0,R)\) in the half line, whose causal wedge is a half diamond, the symmetry transformation leaves the boundary point \(r=R\) fixed. The identification of this symmetry in the present case is the natural path to justify the locality of (37). With this aim, we first discuss the symmetries of the reduced theories with action (24). The symmetries of (24) are a subgroup of the conformal transformations inherited from higher dimensions, in particular those which involve only the time and radial coordinates, and that map the line \(r=0\) into itself. These are * Time translations: \[t\to t+t_{0}\] (45) * Dilatations: \[(t,r)\rightarrow(\lambda t,\lambda r)\] (46) * Special conformal transformations with parameter \(b^{\mu}=\frac{\alpha}{R}\hat{e}_{t}^{\mu}\): \[(t,r)\rightarrow\left(\frac{tR^{2}+\alpha R(t^{2}-r^{2})}{R^{2}+2\alpha Rt+ \alpha^{2}(t^{2}-r^{2})},\frac{rR^{2}}{R^{2}+2\alpha Rt+\alpha^{2}(t^{2}-r^{2} )}\right).\] (47) Infinitesimally, that is, if we set \(\alpha=\epsilon<<1\), then \[(t,r)\rightarrow\left(t-\epsilon(t^{2}+r^{2})/R,r-2\epsilon tr/R\right).\] (48) The generators of the transformations listed above are \(P_{0}=i\partial_{t}\), \(D=i(t\partial_{t}+r\partial_{r})\), and \(K_{0}=i\left((t^{2}+r^{2})\partial_{t}+2tr\partial_{r}\right)\) respectively. These close an \(sl(2,\mathbb{R})\) algebra, which can be expressed in a more suggestive way identifying \(L_{-1}\equiv P_{0}\), \(L_{0}\equiv D\), \(L_{1}\equiv K_{0}\), so that \[i\left[L_{m},L_{n}\right]_{LB}=(m-n)L_{m+n}. \tag{49}\] Just for completeness, we note that one would have expected the original conformal group \(SO(d,2)\) to break into \(SO(2,2)\sim SL(2,\mathbb{R})\otimes\overline{SL(2,\mathbb{R})}\)[32, 33], with six generators. In fact, besides the three generators already mentioned, there are three more that do not mix the angular coordinates with \((t,r)\), associated to * Translations in the radial direction: \[r\to r+r_{0}\] (50) * Boosts: \[(t,r)\rightarrow(t+\epsilon r,r+\epsilon t)\] (51) * Special conformal transformations with parameter \(b^{\mu}=\frac{\alpha}{R}\hat{e}^{\mu}_{r}\) \[(t,r)\rightarrow\left(\frac{tR^{2}}{R^{2}-2\alpha Rr-\alpha^{2}(t^{2}-r^{2})},\frac{rR^{2}+\alpha R(t^{2}-r^{2})}{R^{2}-2\alpha Rr-\alpha^{2}(t^{2}-r^{2}) }\right).\] (52) These are \(\hat{e}^{\mu}_{r}P_{\mu}\), \(\hat{e}^{\mu}_{r}M_{0\mu}\) and \(\hat{e}^{\mu}_{r}K_{\mu}\), respectively. However, it is easy to see that they fail to become symmetries of the dimensionally reduced theory. Then, the modular symmetry of the reduced theories we are looking for must be a particular composition of the identified symmetry transformations (45) - (47). On the other hand, we know that the modular symmetry for the parent conformal theory is associated to the generator of the boosts as seen from the domain of dependence of the ball [38], \[\zeta=\frac{\pi}{R}\left[(R^{2}-t^{2}-|\vec{x}|^{2})\partial_{t}-2tx^{i} \partial_{i}\right]\,. \tag{53}\] In fact, comparing with (45) and (47), we notice that this transformation in the semi infinite line is the composition of a time translation of parameter \(\epsilon\pi R\) and a special conformal transformation of parameter \(\epsilon\frac{\pi}{R}\). Let us check this explicitly. 
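Before doing so by hand, note that the computation is easy to automate. A minimal symbolic sketch, assuming sympy is available, verifying the invariance of the measure \(dtdr/r^{2}\) to first order in \(\epsilon\) (the transformation is the one written out in (54) just below):

```python
import sympy as sp

t, r, R, eps = sp.symbols('t r R epsilon', real=True, positive=True)

# infinitesimal modular transformation (54)
tp = t + eps * sp.pi / R * (R**2 - t**2 - r**2)
rp = r - 2 * eps * sp.pi / R * t * r

# Jacobian of (t, r) -> (t', r')
J = sp.Matrix([[tp.diff(t), tp.diff(r)],
               [rp.diff(t), rp.diff(r)]]).det()

# invariance of the measure dt dr / r^2 to first order in epsilon:
delta = sp.series(J / rp**2 - 1 / r**2, eps, 0, 2).removeO()
print(sp.simplify(delta))   # -> 0, i.e. the mismatch starts at O(epsilon^2)
```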
In spherical coordinates, the infinitesimal transformation reads \[t\longrightarrow t^{\prime}=t+\epsilon\frac{\pi}{R}(R^{2}-t^{2}-r^{2})\] \[r\longrightarrow r^{\prime}=r+\epsilon\frac{\pi}{R}(-2tr) \tag{54}\] \[\Omega\longrightarrow\Omega^{\prime}=\Omega\] Since the invariance of the kinetic term is guaranteed, we need only to check the invariance of the quadratic term \(dtdr/r^{2}\), which is less evident. On the one hand, we have \[dtdr=dt^{\prime}dr^{\prime}\begin{vmatrix}\frac{\partial t}{\partial t^{\prime}}&\frac{\partial t}{\partial r^{\prime}}\\ \frac{\partial r}{\partial t^{\prime}}&\frac{\partial r}{\partial r^{\prime}}\end{vmatrix}=dt^{\prime}dr^{\prime}\begin{vmatrix}1+2\pi\epsilon t^{\prime}/R+\mathcal{O}(\epsilon^{2})&2\pi\epsilon r^{\prime}/R+\mathcal{O}(\epsilon^{2})\\ 2\pi\epsilon r^{\prime}/R+\mathcal{O}(\epsilon^{2})&1+2\pi\epsilon t^{\prime}/R+\mathcal{O}(\epsilon^{2})\end{vmatrix} \tag{55}\] \[\sim dt^{\prime}dr^{\prime}(1+4\pi\epsilon t^{\prime}/R).\] On the other hand, \[\frac{1}{r^{2}}\sim\frac{1}{(r^{\prime}+2\pi\epsilon t^{\prime}r^{\prime}/R)^{2}}=\frac{1}{r^{\prime 2}}(1-4\pi\epsilon t^{\prime}/R+\mathcal{O}(\epsilon^{2})). \tag{56}\] Hence, \[\frac{dtdr}{r^{2}}=\frac{dt^{\prime}dr^{\prime}}{{r^{\prime}}^{2}}+\mathcal{O}(\epsilon^{2}). \tag{57}\] By Noether's theorem, there must exist a conserved current associated to (54), which is of the form6 Footnote 6: We remove the tildes and the angular mode labels to avoid cluttering. \[j^{\mu}=\left(\frac{\delta\mathcal{L}}{\delta(\partial_{\mu}\phi)}\partial_{\nu}\phi-\mathcal{L}\delta^{\mu}_{\nu}\right)\zeta^{\nu}, \tag{58}\] or, in components, \[j^{t}=\frac{1}{2}\left[(\partial_{t}\phi)^{2}+(\partial_{r}\phi)^{2}+\frac{\mu}{r^{2}}\phi^{2}\right]\frac{(R^{2}-t^{2}-r^{2})}{R}-2\frac{tr}{R}\partial_{r}\phi\partial_{t}\phi, \tag{59}\] \[j^{r}=\frac{1}{2}\left[(\partial_{t}\phi)^{2}+(\partial_{r}\phi)^{2}-\frac{\mu}{r^{2}}\phi^{2}\right]2\frac{tr}{R}-\partial_{r}\phi\partial_{t}\phi\frac{(R^{2}-t^{2}-r^{2})}{R}. \tag{60}\] Finally, the current above corresponds to a modular Hamiltonian \[\begin{split}K_{\ell\bar{m}}&=\int_{0}^{R}drj_{0}(t=0,r)\\ &=\frac{1}{2}\int_{0}^{R}dr\left(\frac{R^{2}-r^{2}}{2R}\right)\left[\widetilde{\pi}^{2}_{\ell\bar{m}}+(\partial_{r}\widetilde{\phi}_{\ell\bar{m}})^{2}+\frac{\mu_{d}(\ell)}{r^{2}}\widetilde{\phi}^{2}_{\ell\bar{m}}\right],\end{split} \tag{61}\] the same as (37) deduced in the previous section from different arguments. ## 5 Modular Hamiltonian and entropy In this section we study the spectrum of the modular Hamiltonian (37). Solving the eigenfunction problem allows us to compute the entanglement entropy for an interval attached to the origin, as a function of the angular mode \(\ell\) and the original spacetime dimension \(d\). Then we sum over the modes and compare the result with the entanglement entropy of the \(d\)-sphere. ### Eigenfunctions In general, given a quadratic modular Hamiltonian of a region \(V\), of the form \[K=\int_{V}d^{d-1}x\,d^{d-1}x^{\prime}\,\,\left(\phi(x)M(x,x^{\prime})\phi(x^{\prime})+\pi(x)N(x,x^{\prime})\pi(x^{\prime})\right), \tag{62}\] with \(M\) and \(N\) real symmetric operators, the eigenfunctions are those of the right and left action of \(M.N\), namely \[(N.M)u_{s}=s^{2}u_{s} \tag{63}\] \[(M.N)v_{s}=s^{2}v_{s}. \tag{64}\] This leads to the alternative way of writing \(K\) \[K=\int_{V}d^{d-1}x\int_{0}^{\infty}ds\,\,u_{s}(x)\,\,s\,\,v_{s}^{*}(x). \tag{65}\] More concretely, the problem we are interested in is defined by (41) and (42), so the eigenfunctions \(u\) and \(v\) satisfy the following hypergeometric equations7 Footnote 7: For later convenience we renormalize the eigenvalues to absorb a factor \(1/(2\pi)^{2}\).
\[\left[\beta^{2}\partial_{r}^{2}+\beta\partial_{r}\beta\partial_{r}-\beta^{2} \frac{\mu}{r^{2}}\right]u_{s}=-s^{2}u_{s} \tag{66}\] \[\left[\beta^{2}\partial_{r}^{2}+3\beta\partial_{r}\beta\partial_{r}+\left( \beta\partial_{r}^{2}\beta+(\partial_{r}\beta)^{2}-\beta^{2}\frac{\mu}{r^{2}} \right)\right]v_{s}=-s^{2}v_{s} \tag{67}\] The solutions of these equations are8 Footnote 8: There is an additional independent solution, but we dismiss it because it does not go to zero at \(r=0\), as mandated by the boundary conditions. \[\begin{split} u_{s}(r)&=N_{u}\left(\frac{r}{R} \right)^{-1+\frac{d}{2}+\ell}\left(\frac{R^{2}-r^{2}}{R^{2}}\right)^{-is}{}_{2 }F_{1}\left[\frac{1}{2}-is,-1+\frac{d}{2}+\ell-is,\frac{d}{2}-\frac{1}{2}+\ell,\frac{r^{2}}{R^{2}}\right]\\ v_{s}(r)&=N_{u}\frac{R}{\beta(r)}u_{s}(r),\end{split} \tag{68}\] where \(N_{u}\) is a normalization constant. Near \(r\sim 0\) the solutions behave as, \[u_{s}(r)\sim v_{s}(r)\propto r^{-1+\frac{d}{2}+\ell} \tag{69}\] in agreement with the classical profile (22), whereas near \(r\sim R\) they behave as \[\begin{split} u_{s}(r)&\sim N_{u}\left[\left(\frac{ R-r}{R}\right)^{-is}\alpha(s)+c.c.\right]\\ v_{s}(r)&\sim N_{u}\left[\left(\frac{R-r}{R}\right) ^{-1-is}\alpha(s)+c.c.\right],\end{split} \tag{70}\] with \[\alpha(s)=\frac{2^{-is}\Gamma\left[\frac{d-1}{2}+\ell\right]\Gamma\left[2is \right]}{\Gamma\left[is+\frac{1}{2}\right]\Gamma\left[is+\ell+\frac{d}{2}-1 \right]}. \tag{71}\] It is very important to keep in mind that there is a branch point at \(r=R\). In fact, since the eigenfunctions must satisfy the orthogonality relation \[\int_{0}^{R}dru_{s}(r)v_{s^{\prime}}^{*}(r)=\delta(s-s^{\prime})\,, \tag{72}\] in order to find out the normalization factor \(N_{u}\) we substitute in (72) the leading terms in their Taylor series expansion (70), because only the region near \(r\sim R\) can contribute with a Dirac delta function. That results in \[\int_{0}^{R}dru_{s}(r)v_{s^{\prime}}^{*}(r)\sim 2|N_{u}|^{2}\text{Re}\left[I(s-s ^{\prime})\alpha(s)\alpha^{*}(s^{\prime})+I(s+s^{\prime})\alpha(s)\alpha(s^{ \prime})\right]\,, \tag{73}\] where \[\begin{split} I(s)&\equiv\int_{0}^{R}dr\frac{R}{R- r}\exp\left[-is\log\left(\frac{R-r}{R}\right)\right]\\ &=R\left[\frac{i}{s}+\pi\delta(s)\right].\end{split} \tag{74}\] Hence, neglecting the finite terms\({}^{9}\), we have that (72) holds provided that \[N_{u}=\frac{1}{\sqrt{2\pi R}|\alpha(s)|}, \tag{75}\] save an overall phase that we set to one for convenience. ### The entropy As explained in [39] in the context of the free chiral scalar, we can take advantage of the orthogonality relation to simplify the computation of the entanglement entropy, which can be expressed as a regularized integral over a small region behind the end point \(r=R\), of the form \[\begin{split} S(\ell,d)&=\int_{0}^{R-\epsilon}dr\, \int_{0}^{\infty}ds\,u_{s}(r)g(s)v_{s}^{*}(r)\\ &=-\,\lim_{\delta s\to 0}\int_{R-\epsilon}^{R}dr\,\int_{0}^{ \infty}ds\,u_{s}(r)g(s)v_{s+\delta s}^{*}(r)\,,\end{split} \tag{76}\] with \[g(s)=\frac{1+\coth(\pi s)}{2}\log\left(\frac{1+\coth(\pi s)}{2}\right)+\frac{1 -\coth(\pi s)}{2}\log\left(\frac{\coth(\pi s)-1}{2}\right) \tag{77}\] Note that since we expect the entanglement entropy of a QFT to diverge due to the short range correlations between modes at both sides of the boundary, we regularized it by introducing a small UV cutoff \(\epsilon\). Furthermore, in going from the first to the second line of (76) we shifted the \(v\) sub index, summing over slightly off diagonal elements. 
For fixed \(\delta s\neq 0\) the integral defined on the whole interval vanishes because of (72), leading to an integral just behind the boundary. This trick allows us to substitute the expansion (70), which is much easier to integrate than the original solutions (68). Finally, we get \[S(\ell,d)=\frac{1}{6}\log\frac{R}{\epsilon}-\frac{1}{\pi}\int_{0}^{\infty}ds\,g^{\prime}(s)\text{Arg}(\alpha(s)), \tag{78}\] or, more explicitly, \[S(\ell,d)=\frac{1}{6}\log\frac{R}{\epsilon}-\frac{i\pi}{2}\int_{0}^{\infty}ds\frac{s}{\sinh^{2}(\pi s)}\log\left(\frac{4^{is}\Gamma\left[is\right]\Gamma\left[-1+d/2+\ell-is\right]}{\Gamma\left[-is\right]\Gamma\left[-1+d/2+\ell+is\right]}\right) \tag{79}\] The logarithmic coefficient \(1/6\) is the expected result for a \((1+1)\) dimensional theory. Meanwhile, the constant term is expressed in terms of an integral that cannot be solved explicitly. For later convenience, we write it as a sum of two contributions, one that depends neither on the dimension nor on the angular mode \[c\equiv-\frac{i\pi}{2}\int_{0}^{\infty}ds\frac{s}{\sinh^{2}(\pi s)}\log\left(\frac{4^{is}\Gamma\left[is\right]}{\Gamma\left[-is\right]}\right), \tag{80}\] and another which does depend on both parameters \[f(\ell,d)\equiv-\frac{i\pi}{2}\int_{0}^{\infty}ds\frac{s}{\sinh^{2}(\pi s)}\log\left(\frac{\Gamma\left[-1+d/2+\ell-is\right]}{\Gamma\left[-1+d/2+\ell+is\right]}\right) \tag{81}\] Although it is unfortunately impossible to find an analytic expression for the integral, for sufficiently large modes we can make use of Stirling's approximation \[\log\Gamma(z)\sim z\log z-z+\frac{1}{2}\log\frac{2\pi}{z}+\sum_{n=1}^{N-1}\frac{B_{2n}}{2n(2n-1)z^{2n-1}},\quad|z|\to\infty \tag{82}\] to write \[\begin{split}&\log\left(\frac{\Gamma\left[-1+d/2+\ell-is\right]}{\Gamma\left[-1+d/2+\ell+is\right]}\right)\sim-2is\log\ell+\sum_{k=2}^{\infty}\sum_{m=1}^{\lfloor\frac{k+1}{2}\rfloor}\frac{(k-2)!}{\ell^{k-1}}a_{k,m}(s)\\ &+\frac{1}{2}\sum_{k=1}^{\infty}\sum_{m=1}^{\lfloor\frac{k+1}{2}\rfloor}\frac{(k-1)!}{\ell^{k}}a_{k,m}(s)+\sum_{n=1}^{\infty}\sum_{k=1}^{\infty}\sum_{m=1}^{\lfloor\frac{k+1}{2}\rfloor}\frac{B_{2n}(2n+k-2)!}{(2n)!\ell^{2n+k-1}}a_{k,m}(s),\quad\ell>>1,\end{split} \tag{83}\] where \[a_{k,m}(s)=\frac{2i(-1)^{k+m}}{(2m-1)!(k+1-2m)!}\left(-1+\frac{d}{2}\right)^{k+1-2m}s^{2m-1}. \tag{84}\] This means that the constant term grows logarithmically with the mode \(\ell\), with corrections that decay as positive powers of \(1/\ell\). In fact, performing the integration over the variable \(s\) order by order in the expansion, we can straightforwardly check that the first few leading terms read \[f(\ell,d)\sim-\frac{1}{6}\log\ell+\frac{a_{1}}{\ell}+\frac{a_{2}}{\ell^{2}}+\frac{a_{3}}{\ell^{3}}+\frac{a_{4}}{\ell^{4}}+\mathcal{O}\left(\frac{1}{\ell^{5}}\right), \tag{85}\]
with \[a_{1}=\frac{1}{4}-\frac{d}{12} \tag{86}\] \[a_{2}=\frac{7}{40}-\frac{d}{8}+\frac{d^{2}}{48} \tag{87}\] \[a_{3}=\frac{3}{20}-\frac{7d}{40}+\frac{d^{2}}{16}-\frac{d^{3}}{144} \tag{88}\] \[a_{4}=\frac{73}{560}-\frac{9d}{40}+\frac{21d^{2}}{160}-\frac{d^{3}}{32}+\frac{d^{4}}{384} \tag{89}\] Quite surprisingly, the logarithmic term already approximates \(f(\ell,d)\) at \(\ell\sim\mathcal{O}(1)\) very accurately, as shown in figure (2). Figure 2: Constant term of the entropy at \(d=3\), as a function of the angular mode \(\ell\). The red dots represent the exact numerical value of (81), for \(\ell=\{1,2,5,10,15,20,30,40,100\}\). The blue curve corresponds to the fit \(f(\ell,d=3)=c_{0}+c_{1}\log\ell\), with \(c_{0}=1.345\times 10^{-5}\) and \(c_{1}=-0.1666\). In figure (3) we compare the numerical value of (81) with the one obtained from direct calculation in a radial lattice, again at \(d=3\). Since the constant term depends on the regularization scheme, we subtract the one corresponding to \(\ell=1\) and compare \(\Delta f(\ell,3)\equiv f(\ell,3)-f(\ell=1,3)\). Although it is very hard to achieve good precision in the lattice10, we find reasonable agreement. For example, for \(\ell=10\), numerical integration yields \(\Delta f=-0.3737\), while the lattice computation gives \(\Delta f=-0.3795\). Footnote 10: Roughly speaking, the value of \(\ell\) gives a lower bound for the meaningful radii \(R/\epsilon\gg\ell\) Figure 3: \(\Delta f(\ell,3)\equiv f(\ell,3)-f(\ell=1,3)\). Blue: direct numerical integration of (81). Orange: calculation with a radial lattice regularization. \(\ell=\{1,2,5,10,20\}\) ### Recovering the scalar entropy As discussed in section 3, the modular Hamiltonian of the free scalar in the sphere is equal to the sum over \(\ell\) of the modular Hamiltonian pertaining to each one dimensional theory in the segment. Consequently, we expect that summing (79) must necessarily reproduce the general structure for the entanglement entropy, \[S=\begin{cases}\#\left(\frac{R}{\epsilon}\right)^{d-2}+...+c_{\log}\log\frac{R}{\epsilon},&d\quad\text{even}\\ \#\left(\frac{R}{\epsilon}\right)^{d-2}+...+F,&d\quad\text{odd}\end{cases} \tag{90}\] that is, an infinite contribution controlled by the area term, and a universal piece, either in the form of a logarithmic coefficient in even dimensions, which is precisely the trace anomaly coefficient associated to the Euler density [40, 41, 42], or a constant term in odd dimensions [43, 44, 45]. To show that this indeed holds, we need to introduce a cutoff in \(\ell\) to regularize the sum. More concretely, we introduce a damping exponential so that \[S=\sum_{\ell=0}^{\infty}\lambda(\ell,d)S(\ell,d)e^{-\ell\epsilon/R}, \tag{91}\] where \[\lambda(\ell,d)=(2\ell+d-3)\frac{(\ell+d-4)!}{\ell!(d-3)!} \tag{92}\] is the density of states. Note that this grows as \(\ell^{d-3}\) for \(\ell\gg 1\). Given the complicated expression of the constant term \(f(\ell,d)\), we approximate it by its large \(\ell\) expansion, leading to \[S=\sum_{\ell=1}^{\infty}\lambda(\ell,d)\left(\frac{1}{6}\log\frac{R}{\epsilon}+c-\frac{1}{6}\log\ell+\sum_{j=1}^{j_{max}}\frac{a_{j}}{\ell^{j}}\right)e^{-\ell\epsilon/R}+\lambda(0,d)S(0,d)+\text{correction}. \tag{93}\] The correction above accounts for the error made when approximating \(f(\ell,d)\) by its series expansion, truncated at \(\mathcal{O}(\ell^{-j_{max}})\). It is straightforward to verify that (93) reproduces (90).
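As a numerical aside, both the exact integral (81) and its truncated expansion (85) with the coefficients (86)–(89) are easy to evaluate and compare directly. The sketch below is our own illustration (Python with mpmath; the tooling is an assumption, not the authors'). For real \(a\) one has \(\log[\Gamma(a-is)/\Gamma(a+is)]=-2i\,\mathrm{Im}\log\Gamma(a+is)\), which makes the integrand of (81) manifestly real:

```python
import mpmath as mp

def f_exact(ell, d):
    # Eq. (81) rewritten as a real integral via Im[log Gamma] for the gamma ratio
    a = -1 + mp.mpf(d) / 2 + ell
    g = lambda s: s / mp.sinh(mp.pi * s)**2 * mp.im(mp.loggamma(a + 1j * s))
    return -mp.pi * mp.quad(g, [0, mp.inf])

def f_series(ell, d):
    # Truncated large-ell expansion (85) with the coefficients (86)-(89)
    a1 = mp.mpf(1)/4 - mp.mpf(d)/12
    a2 = mp.mpf(7)/40 - mp.mpf(d)/8 + mp.mpf(d)**2/48
    a3 = mp.mpf(3)/20 - 7*mp.mpf(d)/40 + mp.mpf(d)**2/16 - mp.mpf(d)**3/144
    a4 = (mp.mpf(73)/560 - 9*mp.mpf(d)/40 + 21*mp.mpf(d)**2/160
          - mp.mpf(d)**3/32 + mp.mpf(d)**4/384)
    return -mp.log(ell)/6 + a1/ell + a2/ell**2 + a3/ell**3 + a4/ell**4

# The two columns converge to each other as ell grows, as claimed after (85)
for ell in (1, 2, 5, 10, 100):
    print(ell, mp.nstr(f_exact(ell, 3), 8), mp.nstr(f_series(ell, 3), 8))
```

At \(d=3\) the exact values also reproduce the \(-\frac{1}{6}\log\ell\) fit of figure (2).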
For example, the divergent pieces come from terms with the general structure \[\sum_{\ell=1}^{\infty}\ell^{p}\left(\log\frac{R}{\epsilon}-\log\ell\right)e^{-\ell\epsilon/R}=-\Gamma^{\prime}(p+1)\left(\frac{R}{\epsilon}\right)^{p+1}+\zeta(-p)\log\frac{R}{\epsilon}+\zeta^{\prime}(-p), \tag{94}\] \[\sum_{\ell=1}^{\infty}\ell^{p}e^{-\ell\epsilon/R}=p!\left(\frac{R}{\epsilon}\right)^{p+1}+\zeta(-p), \tag{95}\] with \(\{p\mid p\in\mathbb{N}_{0}\wedge p\leq d-3\}\), and \[\sum_{\ell=1}^{\infty}\frac{1}{\ell}e^{-\ell\epsilon/R}=\log\frac{R}{\epsilon}. \tag{96}\] Note that the logarithmic term, only present in even dimensions, stems from (94) and (96). Based on this observation, it is worth pointing out that in order to compute the logarithmic coefficient we only need to take into account the first \(d-1\) terms in the expansion of \(f(\ell,d)\), that is, \(j_{max}=d-2\). Subleading corrections give finite contributions at most. To address some relevant cases explicitly, at \(d=6\) we get \[c_{\log}(d=6)=\frac{29}{540}+a_{1}+\frac{13}{6}a_{2}+\frac{3}{2}a_{3}+\frac{1}{3}a_{4}, \tag{97}\] and, substituting (86), (87), (88), (89), \[c_{\log}(d=6)=\frac{1}{756}, \tag{98}\] in agreement with the expected anomaly value. On the other hand, at \(d=4\) we get \[c_{\log}(d=4)=\frac{1}{18}+a_{1}+2a_{2}, \tag{99}\] which leads to the expected anomaly coefficient \[c_{\log}(d=4)=-\frac{1}{90}. \tag{100}\] The case of \(d=3\) is different from the ones discussed above in that it has no logarithmic term, and the universal piece in the entanglement entropy is associated to the constant term \(F\). In fact, direct calculation using (86) yields \[c_{\log}(d=3)=2a_{1}=0. \tag{101}\] Regarding the constant \(F\), the infinite tail in the \(1/\ell\) expansion must in principle be taken into account. For that reason, we regularize the sum taking up to \(j_{max}=2\) and then add a finite contribution which corrects the approximation, giving the exact value for the constant term in the series of \(f(\ell,d)\). That is, \[\text{correction}=2\lim_{\ell_{max}\rightarrow\infty}\sum_{\ell=1}^{\ell_{max}}\left(f(\ell,d=3)+\frac{1}{6}\log\ell-\frac{a_{2}}{\ell^{2}}\right) \tag{102}\] In figure (4) we plot the above correction as a function of \(\ell_{max}\) and show that it converges very fast to \[\text{correction}\sim 0.00519641 \tag{103}\] According to (93), another term which contributes is \[f(0,3)=-\frac{i\pi}{2}\int_{0}^{\infty}ds\frac{s}{\sinh^{2}(\pi s)}\log\left(\frac{\Gamma\left[1/2-is\right]}{\Gamma\left[1/2+is\right]}\right)\sim 0.278435. \tag{104}\] Gathering all the pieces together, we finally get \[F=\frac{\pi^{2}}{3}a_{2}+\frac{\zeta^{\prime}(0)}{3}+f(0,3)+\text{correction}\sim-0.0638049, \tag{105}\] which is within \(0.003\%\) of the exact value [43; 45]. Note that the constant \(c\) does not contribute at all to \(F\). Figure 4: Error made in the computation of the constant term \(F\) when approximating \(f(\ell,3)\) by \(-\frac{1}{6}\log\ell+\frac{a_{2}}{\ell^{2}}\). \(\ell_{max}\) is the greatest angular momentum that is summed over. We see that the correction converges very fast to \(\sim 0.00519641\). ## Final remarks In this article, we focused on theories in the semi-infinite line constructed from the dimensional reduction of a free scalar in \(d\) dimensions.
Given that the decomposition of the parent theory \(H\) into independent sectors \(H_{\ell\bar{m}}\), labeled by the angular modes, also holds for the vacuum modular Hamiltonian in spheres \(K=\sum_{\ell\bar{m}}K_{\ell\bar{m}}\), and provided that the vacuum state of the system is the product \(\rho=\otimes\rho_{\ell\bar{m}}\), it is immediate to identify the modular Hamiltonian mode \(K_{\ell\bar{m}}\) with the modular Hamiltonian of the one-dimensional reduced system \(H_{\ell\bar{m}}\) in the interval \((0,R)\). Remarkably, the resulting modular Hamiltonian is local and proportional to the energy density, with the same weight function \(\beta(r)=\frac{R^{2}-r^{2}}{2R}\) as the one characteristic of a CFT in a sphere of radius \(R\). We complemented the previous analysis with the study of the symmetries inherited from the \(d\) dimensional conformal theory. This approach makes evident that the symmetry behind the locality of the reduced modular Hamiltonian is just the restriction to the semi-infinite line of the original modular symmetry in \(d\) dimensions. We identified the conserved current associated to this symmetry transformation and checked that the \(K_{\ell\bar{m}}\) found by dimensional reduction coincides with the Noether charge. On the other hand, the spectral decomposition of the modular Hamiltonian leads to an analytic expression for the corresponding entanglement entropy (EE) which in turn, after summing over the angular modes, allowed us to recover the EE of the original \(d\) dimensional theory in the sphere. To make sense of the sum, we used a novel regularization implemented by a damping exponential parametrized by the same cutoff \(\epsilon\) that regularizes the radial coordinate. As we mentioned in the introduction, in a way, this procedure generalizes the one introduced by Srednicki in [34] and provides an additional tool to calculate analytically the EE logarithmic coefficient in even dimensions, the universal constant term in \(d=3\), among others. It would certainly be interesting to explore in the future the modular Hamiltonian of non-conformal theories constructed from the dimensional reduction of other free theories, for example fermions. We expect that the decomposition of the parent theory into independent sectors must carry over unaltered, as well as the symmetry arguments which justify that the resulting modular Hamiltonian is a conserved charge. ## Acknowledgments We thank H. Casini, C. Fosco, E. Tonni and G. Torroba for discussions while this work was being carried out. This work was supported by CONICET, CNEA and Universidad Nacional de Cuyo, Instituto Balseiro, Argentina.
2307.16778
KoBBQ: Korean Bias Benchmark for Question Answering
The Bias Benchmark for Question Answering (BBQ) is designed to evaluate social biases of language models (LMs), but it is not simple to adapt this benchmark to cultural contexts other than the US because social biases depend heavily on the cultural context. In this paper, we present KoBBQ, a Korean bias benchmark dataset, and we propose a general framework that addresses considerations for cultural adaptation of a dataset. Our framework includes partitioning the BBQ dataset into three classes--Simply-Transferred (can be used directly after cultural translation), Target-Modified (requires localization in target groups), and Sample-Removed (does not fit Korean culture)-- and adding four new categories of bias specific to Korean culture. We conduct a large-scale survey to collect and validate the social biases and the targets of the biases that reflect the stereotypes in Korean culture. The resulting KoBBQ dataset comprises 268 templates and 76,048 samples across 12 categories of social bias. We use KoBBQ to measure the accuracy and bias scores of several state-of-the-art multilingual LMs. The results clearly show differences in the bias of LMs as measured by KoBBQ and a machine-translated version of BBQ, demonstrating the need for and utility of a well-constructed, culturally-aware social bias benchmark.
Jiho Jin, Jiseon Kim, Nayeon Lee, Haneul Yoo, Alice Oh, Hwaran Lee
2023-07-31T15:44:15Z
http://arxiv.org/abs/2307.16778v2
# KoBBQ: Korean Bias Benchmark for Question Answering ###### Abstract _Warning: This paper contains examples of stereotypes and biases._ The BBQ (Bias Benchmark for Question Answering) dataset enables the evaluation of the social biases that language models (LMs) exhibit in downstream tasks. However, it is challenging to adapt BBQ to languages other than English as social biases are culturally dependent. In this paper, we devise a process to construct a non-English bias benchmark dataset by leveraging the English BBQ dataset in a culturally adaptive way and present the KoBBQ dataset for evaluating biases in Question Answering (QA) tasks in Korean. We classify BBQ samples into three classes: Simply-Translated (can be used directly after cultural translation), Target-Modified (requires localization in target groups), and Sample-Removed (does not fit Korean culture). We further enhance the relevance to Korean culture by adding four new categories of bias specific to Korean culture and newly creating samples based on Korean literature. KoBBQ consists of 246 templates and 4,740 samples across 12 categories of social bias. Using KoBBQ, we measure the accuracy and bias scores of several state-of-the-art multilingual LMs. We demonstrate the differences in the bias of LMs in Korean and English, clarifying the need for hand-crafted data considering cultural differences. ## 1 Introduction BBQ (Bias Benchmark for Question Answering) (Parrish et al., 2022) is an English dataset designed to measure the social bias of language models (LMs) based on specific contexts and questions related to real-world situations. Although this is a valuable dataset for bias measurement in English, there are significant challenges when applying the BBQ dataset to Korean culture. It contains US-centric biases and contexts that create cultural disparities and must be modified. As illustrated in Figure 1, the US-centric bias in BBQ associates low Socioeconomic Status (SES) with the social bias of _drug usage_. Conversely, in Korea, people with high SES are associated with the same social bias. These cultural differences necessitate a more nuanced and culturally-adaptive approach to constructing a social bias dataset for another culture. As several studies have shown, it is crucial to avoid relying solely on machine-translated datasets. Instead, constructing culturally-sensitive data requires careful consideration of appropriate social context and keywords (Lin et al., 2021; Ponti et al., 2020). In this paper, we propose a process for developing culturally-adaptive datasets to overcome these challenges. Our methodology builds upon the English BBQ dataset while at the same time taking into account the specific cultural nuances and social biases that exist in Korean society. This process will enable researchers to create datasets more aligned with the cultural context of different languages, leading to more accurate and comprehensive bias measurement. Figure 1: BBQ and KoBBQ assess LMs’ bias by asking the model discriminatory questions with ambiguous or disambiguated context. Different cultures may have different contexts or groups associated with social bias, resulting in differences between BBQ and KoBBQ. Moreover, we build **KoBBQ** (Korean Bias Benchmark for Question Answering), which depicts Korean-centric situations and social biases based on the proposed process, and make it publicly available. This will serve as a valuable resource to assess and understand bias in the Korean language context.
The first step is categorizing BBQ samples into three groups considering their adaptability and cultural relevance. The groups are: Simply-Translated, directly usable after cultural translation; Target-Modified, requiring modification of the target group for the given social bias; and Sample-Removed, including situations and biases not inherent to Korean culture. We exclude Sample-Removed samples (48 templates) from the dataset to ensure a comprehensive representation of Korean culture. We recruit a professional translator for accurate and culturally-sensitive human-moderated translations of Simply-Translated and Target-Modified samples. The target groups for the Target-Modified samples are modified based on survey results from Korean citizens. This results in building 3,993 samples with 172 templates distributed among eight categories. Additionally, we enrich the dataset by adding 747 samples with 74 templates with four new categories (_Domestic Area of Origin_, _Family Structure_, _Political Orientation_, and _Educational Background_), labeling them as Newly-Created. The four types are presented in Figure 2. The final KoBBQ contains 4,740 samples with 246 templates spread across 12 categories. Our research proposes diverse approaches for analyzing social bias within LMs. Using KoBBQ, we evaluate and compare various existing multilingual Large Language Models (LLMs) and Korean-specialized LLMs. We simultaneously assess Question Answering (QA) performance and bias by utilizing a bias score correlating with accuracy. Figure 2: Examples of 4 types in KoBBQ. The yellow box indicates the answer to the biased question. A dotted box refers to the target groups that align with the relevant social bias. Any modified parts from BBQ are marked with strike-lines, while cultural-sensitive translation parts are underlined. By comparing BBQ, machine-translated BBQ, and KoBBQ, we find distinctive patterns in model performance and bias score, highlighting the importance of a hand-built dataset in bias detection in non-English languages. Our research also indicates that most LLMs have poor performance and high bias scores on Newly-Created samples, implying that KoBBQ addresses culture-specific situations that existing LMs have overlooked. In summary, the main contributions of this work are as follows: * We propose a method for cultural adaptation of existing social benchmark datasets into another language. * We present KoBBQ, a hand-built dataset for measuring intrinsic social biases of LMs considering social contexts in Korean cultures following the proposed process. * We evaluate and provide comprehensive analyses on existing state-of-the-art Korean and multilingual LMs in diverse ways by measuring performances and bias scores. * We posit the need for constructing data considering cultural characteristics rather than machine-translating existing NLP datasets in high-resource languages. ## 2 Related Work ### Cross-Cultural NLP Several approaches for cultural considerations in LMs have been proposed. These include cross-cultural analysis in downstream tasks Lin et al. (2018); Lee et al. (2023), and culturally-sensitive dataset constructions Liu et al. (2021); Yin et al. (2021); Jeong et al. (2022). Recent studies have also presented methods for translating existing data in a culturally-sensitive manner by automatically removing examples with'social keywords.' These include words related to social behaviors such as weddings Lin et al. (2021). 
Another method is performing cross-cultural translation with human translators by substituting or paraphrasing original concepts into similar meaning Ponti et al. (2020). Our approach builds upon these methods by adapting cross-cultural translation, manually eliminating samples that do not fit Korean culture, and incorporating culturally-fit target groups and hand-crafted samples into a Korean-specific bias benchmark dataset. ### Bias and Stereotype Datasets English DatasetsThe BBQ Parrish et al. (2022) dataset is designed to evaluate models for bias and stereotypes using a multiple-choice QA format. It includes real-life scenarios and associated questions to address social biases inherent in LMs. As the QA format is highly adaptable for evaluating BERT-like models and generative LMs, it is used for assessing state-of-the-art LMs Liang et al. (2022); Srivastava et al. (2023). However, BBQ mainly contains US-centric stereotypes, which poses challenges for direct implementation in Korean culture. UnQover Li et al. (2020) also quantifies biases in a QA format with underspecified questions. However, it only measures the model likelihood within masked language modeling (MLM) models and restricts the answer candidates to two incorrect options, biased and counter-biased. This differs from BBQ or KoBBQ, as these provide evaluation methods applicable to MLM and generative models and include the correct 'unknown' choice for ambiguous contexts. Other datasets for stereotypical bias measuring include StereoSet Nadeem et al. (2021) and CrowS-Pairs Nangia et al. (2020). However, the dataset designs and evaluation metrics proposed within these papers are only limited to those that apply to MLM models. Winogender Rudinger et al. (2018) and WinoBias Zhao et al. (2018) specifically shed light on gender bias with the use of gender pronouns (i.e., he, she, they), which is hardly applicable to Korean. Korean DatasetsK-StereoSet Song et al. (2021) is a machine-translated and post-edited version of StereoSet Nadeem et al. (2021) development set, whose data are noisy and small. KoSBi Lee et al. (2023) is an extrinsic evaluation dataset to assess the outputs of generative LMs. They created their samples in a machine-in-the-loop system considering target groups revealing Korean cultures. They classified unsafe labels into three: stereotype, prejudice, and discrimination. Still, it is challenging to extract rules of thumb regarding Korean cultures and stereotypes from those samples. ## 3 KoBBQ Dataset ### Task Description KoBBQ contains template-based data split into 12 categories, as shown in Table 1. Each of the templates has two versions, containing biased and counter-biased contexts. Each version is comprised of two types of contexts (ambiguous and disambiguated), two types of questions (biased and counter-biased), two kinds of answer candidates (targeted and non-targeted groups), relevant social value, and the known stereotyped group(s) corresponding to the social bias. Therefore, each template has a total of eight context-question pairs, with four different context types and two different question types. Biased and Counter-Biased QuestionsBiased questions ask which group conforms to a given social bias, while counter-biased questions ask which group goes against it. 
For instance, if a biased question asks, "_Who has been rejected after applying for a company job and not making it to the interview stage?_", then the corresponding counter-biased question would be, "_Who has made it to the interview stage of a company job?_" The example refers to the social value of '_difficulty finding a job_' from the _educational background_ category. It's important to note that social biases don't always have to be negative, as they can also be positive. Therefore, biased questions may not necessarily be focused on negative stereotypes. We use the predicted answers to these questions from LMs to evaluate the QA performance and calculate the bias scores in Section 4.2. Ambiguous and Disambiguated ContextAmbiguous context mentions the targeted and non-targeted groups without sufficient information to answer the two questions accurately. Conversely, the corresponding disambiguated context adds the required context to answer the questions correctly, finally making the questions answerable. Correct answers exist among the two options, which refer to the targeted and non-targeted groups. One of the two versions in each template includes a biased context where the disambiguated context relates to a situation where the targeted group conforms to relevant social values. The other version includes a counter-biased context where the non-targeted group conforms to the opposite of societal stereotypes. ### Data Construction The dataset curation process of KoBBQ consists of 5 steps: (1) sample annotation, (2) translation, (3) category construction of the original BBQ dataset, (4) template generation for new samples and categories, and (5) a large-scale survey to collect target groups and investigate the consensus of the public. Each of the steps will be further explained below. #### 3.2.1 Sample Annotation The authors, comprised of four Korean natives, annotate existing samples from the original BBQ dataset (Parrish et al., 2022) into three classes: Simply-Translated, Target-Modified, and Sample-Removed. Simply-Translatedindicates samples revealing stereotypical biases that match Korean cultural background. These samples only go through cultural-sensitive translation when transformed into samples of KoBBQ. Target-Modifieddenotes samples whose inherent biases exist in Korean cultures but are stereotyped towards target different groups. Therefore, in addition to cross-cultural translation, we conduct a large-scale survey to collect the targeted groups within the Korean culture from the Korean public. Sample-Removedrefers to specific samples that are unnatural in the Korean cultural context, therefore, not included in KoBBQ. All samples are labeled into three classes based on agreements between at least three authors. We rule out Sample-Removed samples to construct a bias benchmark dataset that fully reflects Korean cultures. #### 3.2.2 Translation We initially utilize DeepL Translator1 to machine-translate Simply-Translated and Target-Modified samples. However, Peskov et al. 
(2021) pointed out that translated sentences may lack cultural context, highlighting the need for the adaptation of entities to the target culture, which is known as adaptation in the translation field (Vinay and Darbelnet, 1995). This methodology is one of the methods of cross-cultural translation (Sperber et al., 1994). To ensure a high-quality translation with Korean cultural contexts, we request a professional translator to perform culturally-sensitive human-moderated translations. We specifically ask the translator to utilize Korean culture-familiar words, such as E-Mart2 instead of Walmart, bleached hair instead of dark hair3, and basketball instead of rugby4, to avoid awkwardness in Korean situations. Footnote 2: One of the largest discount stores in Korea ([https://company.emart.com/en/company/business.do](https://company.emart.com/en/company/business.do)) Footnote 3: Typically, the natural hair color of Korean individuals is dark. (Im et al., 2017) Footnote 4: Most popular sports activities in South Korea as of March 2023 ([https://www.statista.com/forecasts/1389015/most-popular-sports-activities-in-south-korea](https://www.statista.com/forecasts/1389015/most-popular-sports-activities-in-south-korea)) \begin{table} \begin{tabular}{l|r r r r|r|r} \hline \hline & \multicolumn{4}{c|}{\# of Templates} & \multicolumn{1}{c|}{\# of} & \multicolumn{1}{c}{\# of} \\ & ST & TM & SR & NC & \multicolumn{1}{c|}{Templates} & \multicolumn{1}{c}{Samples} \\ \hline _Age_ & 17 & 0 & 1 & 2 & 19 & 299 \\ _Disability Status_ & 18 & 0 & 0 & 0 & 18 & 140 \\ _Gender Identity_ & 20 & 0 & 0 & 0 & 20 & 81 \\ _Physical Appearance_ & 17 & 0 & 3 & 1 & 18 & 479 \\ _Race/Ethnicity/Nationality_ & 0 & 28 & 17 & 10 & 38 & 1,032 \\ _Religion_ & 4 & 8 & 10 & 7 & 19 & 69 \\ _Socio-Economic Status_ & 14 & 1 & 7 & 9 & 24 & 1,813 \\ _Sexual Orientation_ & 6 & 3 & 10 & 7 & 16 & 80 \\ _Domestic Area of Origin_ & 0 & 0 & 0 & 23 & 23 & 193 \\ _Family Structure_ & 0 & 0 & 0 & 18 & 18 & 118 \\ _Political Orientation_ & 0 & 0 & 0 & 9 & 9 & 31 \\ _Educational Background_ & 0 & 0 & 0 & 24 & 24 & 405 \\ \hline Total & 96 & 40 & 48 & 103 & 246 & 4,740 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of KoBBQ. ST, TM, SR, NC indicate Simply-Translated, Target-Modified, Sample-Removed, and Newly-Created, respectively. #### 3.2.3 Category Construction We reconstruct the stereotyped group categories of the original BBQ based on the categories and demographic groups of KoSBi (Lee et al., 2023), which refers to UDHR5 and NHRCK6. We (1) merge _race/ethnicity_ and _nationality_ into a single category and (2) add four categories that reflect unique social contexts of Korean cultures: _domestic area of origin_, _educational background_, _family structure_, and _political orientation_. The reason behind merging the two categories was that the distinction between _race/ethnicity_ and _nationality_ is vague in Korea, considering that Korea is an ethnically homogeneous nation compared to the US (Han, 2007). Moreover, by adding new categories, the dataset covers a wide range of social biases and corresponding target groups embedded within Korean society. Footnote 5: Universal Declaration of Human Rights (UDHR) Footnote 6: National Human Rights Commission of Korea (NHRCK) #### 3.2.4 Template Generation To create a fair and representative sample of Korean culture and to balance the number of samples across categories that were newly built or contained Sample-Removed samples, the authors manually create templates and label them as Newly-Created.
These templates follow the same structure as existing BBQ samples, and references are provided to support any stereotypical biases the template conveys. To ensure the reliability of the stereotypes in our templates, we exclusively utilize sources backed by solid evidence. These sources include research articles that feature in-depth interviews with representatives of the target groups, statistical reports derived from large-scale surveys conducted by the Korean public, and news articles that provide expert analysis of statistical findings. We strongly rely on statistical data to validate and support social values associated with the target groups, as we firmly believe that the trends depicted in the results will gradually harmonize with societal stereotypes. #### 3.2.5 Large-scale Survey We conduct a large-scale survey to probe whether the stereotypical biases revealed through templates and samples match the general cognition of the Korean public in cooperation with Macromill Embrain7. We recruit crowd-workers so that at least 100 people participate in each template's qualifying and target group annotating process while preserving the Korean public's demographic balance, considering gender and age. The survey covers all templates within the KoBBQ dataset, but there exist some differences in the survey design among the classes of each template. Footnote 7: Korean company specialized in online research and panels ([https://embrain.com/](https://embrain.com/)). Simply-Translated and Newly-Created samples have reliable references from the original BBQ or the template generation process. However, there is insufficient evidence that the corresponding social biases are evident in Korean society. To address this issue, we only show the ambiguous context with the biased question and ask the participants to choose between the target and non-target groups that seem more appropriate to answer the biased question. We also provide no stereotype exists' choice for the people with no bias related to the corresponding social value of the template. Only the templates with a ratio of people that chose the targeted group as the answer for the biased question higher than 0.8 are included in KoBBQ. Moreover, for _religion_, _domestic area of origin_, and _race/ethnicity/nationality_, we asked for the participants to choose the most likely non-target groups. As a result, unlike BBQ that included all other possible options for the non-target group candidates, KoBBQ includes only groups that are not perceived as conforming to the given stereotype as the non-target groups. For Target-Modified samples, we provide both ambiguous and disambiguated contexts alongside the biased question. As mentioned in Section 3.2.1, adapting these samples into KoBBQ involves reconstructing target groups. To achieve this, the context only includes the relevant social value from the original template, with special tokens such as '[stereotyped group]' and '[non-stereotyped group]' inserted where target and non-target groups were previously mentioned. Workers are then prompted to select the target and non-target groups from a list of all possible target groups within the category for the social stereotype. Groups that over 80% of the annotators agreed to include as target groups or non-target groups will be incorporated into the final KoBBQ dataset. The platform and the detailed questions for the survey are described in Appendix A. 
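The two consensus thresholds in this survey design reduce to simple filters over the collected votes. Below is a minimal sketch of that logic in Python; it is our own illustration, and the data layout and value names are hypothetical, not from the paper's released code:

```python
def keep_template(votes, threshold=0.8):
    """Template filter: keep a template only if the share of annotators who
    picked the target group for the biased question exceeds the threshold.
    `votes` entries are assumed to be "target", "non-target", or "no-stereotype".
    """
    return votes.count("target") / len(votes) > threshold

def consensus_groups(vote_counts, n_annotators, threshold=0.8):
    """Target/non-target reconstruction for Target-Modified templates: keep only
    candidate groups that over 80% of annotators agreed to include.
    `vote_counts` maps each candidate group to its number of votes."""
    return [g for g, n in vote_counts.items() if n / n_annotators > threshold]

# Example: 84 of 100 annotators pick the target group, so the template survives
print(keep_template(["target"] * 84 + ["non-target"] * 10 + ["no-stereotype"] * 6))
```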
### Data Statistics Table 1 shows the number of templates per class mentioned in Section 3.2.1 and the number of samples per category. Each template consists of multiple samples, as each target group and the non-target group is substituted with several specific examples of them. ## 4 Experiments We evaluate state-of-the-art generative large language models on KoBBQ, and compare them with the patterns on BBQ. ### Experimental Settings We design our experiments, providing a model with a multiple-choice QA problem as input, which consists of context, a question, and three choices: A, B, and C (target, non-target, and 'Unknown' options), and asking the model to choose the appropriate answer. ModelWe only include the models that are capable of question answering in the zero-shot settings, since the fine-tuning or few-shot settings might affect the bias of the models. The following models are used in the experiments: GPT-3 8(Brown et al., 2020), ChatGPT 9, Bard 10, Claude 11(Bai et al., 2022), HyperCLOVA (Kim et al., 2021), KULLM 12, Alpaca 13(Taori et al., 2023), and KoAlpaca 14. GPT-3 and HyperCLOVA are pretrained models without utilizing advanced training approaches like finetuning or Reinforcement Learning from Human Feedback (RLHF). Detailed model settings are stated in Appendix B. Footnote 8: [https://platform.openai.com/docs/models](https://platform.openai.com/docs/models) Footnote 9: [https://chat.openai.com/](https://chat.openai.com/) Footnote 10: [https://bard.google.com/](https://bard.google.com/) Footnote 11: [https://www.anthropic.com/product](https://www.anthropic.com/product) Footnote 12: [https://github.com/nlpai-lab/KULLM](https://github.com/nlpai-lab/KULLM) Footnote 13: [https://github.com/tatsu-lab/stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca) PromptWe use 5 different prompts with different instructions and different 'unknown' expressions. They are listed in Appendix B. Following Izacard et al. (2022), the cyclic permutation of the three choices (A, B, and C) is applied to each prompt. Evaluation setEach template has multiple attributes for target groups and alternative expressions. Since the whole dataset containing all combinations of attributes is very large and unbalanced, we use a test set gathering a randomly-sampled example from each template. ### Evaluation Metrics In this paper, we provide three metrics: (1) accuracy for question-answering performance, (2) diff-bias score for measuring the bias in the model outputs with respect to the accuracy, and (3) template-level bias score for evaluating how the model acts in response to the different types of context and question. AccuracyThe accuracy refers to the ratio of the correct predictions in the question-answering task. In the ambiguous context, the answer is always 'Unknown', so the accuracy \(\texttt{Acc}_{A}\) means the proportion of questions that are answered as 'Unknown'. On the other hand, in the disambiguated context, the accuracy \(\texttt{Acc}_{D}\) refers to the ratio of selecting either the target group or the non-target group that corresponds to the correct answer to the question given the context. 
Since generative models may occasionally produce responses that do not exactly match the given choices, we set the criteria for accepting an answer as follows: i) only one alphabet is mentioned in the given options, ii) a response exactly matches the term provided in the options, iii) any specific expressions clearly intend to provide the answer, such as _'answer is -'_ or iv) the answer is clearly stated first as i) - iii), followed by an explanation. Any response that does not meet the criteria is classified as an _out-of-choice_ answer. Diff-bias ScoreThe bias score measures how frequently the model answers a question based on its bias. In BBQ format, the bias is dependent on the QA performance. For example, if the model perfectly answers the question based on the given context, then the model cannot have any bias. We suggest defining the bias scores that have clear relationships with the QA performance and considering the model bias and accuracy together. In ambiguous contexts, the diff-bias score \(\text{Diff-bias}_{A}\) is defined as the difference between the ratio of generating biased answers and that of generating counter-biased answers (Equation 1). A higher bias score implies the model outputs answers that are aligned with social biases to nondeterministic discriminatory questions. Its range depends on accuracy as Equation 2, since the accuracy in ambiguous context means the ratio of generating 'Unknown' expressions. \[\begin{split}\text{Diff-bias}_{\text{A}}=&\text{( ratio of biased answer)}\\ &-&\text{(ratio of counter-biased answer)}\end{split} \tag{1}\] \[\begin{split}|\text{Diff-bias}_{\text{A}}|\leq 1-\text{Acc}_{\text{A}} \quad(0\leq\text{Acc}_{\text{A}}\leq 1)\end{split} \tag{2}\] We define the diff-bias score \(\text{Diff-bias}_{D}\) of disambiguated context as the difference between accuracy under biased context and accuracy under counter-biased context (Equation 3). A higher bias score indicates that the accuracy is higher when the context is biased compared to when it is counter-biased. This implies the model's inherent bias is influencing its responses to the given question. As \(\text{Diff-bias}_{D}\) refers to the subtraction of the aforementioned accuracies whereas the mean of them is _'accuracy in disambiguated context'_, the range of \(\text{Diff-bias}_{D}\) is expressed as Equation 4. \[\begin{split}\text{Diff-bias}_{\text{D}}=&\text{( accuracy in biased context)}\\ &-&\text{(accuracy in counter-biased context)}\end{split} \tag{3}\] \[\begin{split}|\text{Diff-bias}_{\text{D}}|\leq\left\{\begin{array}{ ll}2\text{Acc}_{\text{D}}&(0\leq\text{Acc}_{\text{D}}\leq 0.5)\\ 2-2\text{Acc}_{\text{D}}&(0.5<\text{Acc}_{\text{D}}\leq 1)\end{array}\right. \end{split} \tag{4}\] Template-level Bias Score DistributionThere remain limitations in accuracy and Diff-bias scores, as they fail to indicate whether a model consistently exhibits bias across all eight context-question pairs within a single template. To address this issue, we compute template-level bias scores and examine their distribution throughout the dataset. Table 2 presents all possible scores from a single pair. Each bias score is computed by measuring the distance between the model's response and the correct answer. By aggregating the scores from all eight pairs, we obtain a single score for each template, which ranges from -8 to +8. Additionally, we calculate the average score across all templates for each model. A higher average score indicates the model's greater consistency in displaying biased answers. 
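Both the diff-bias metrics of Equations (1)–(4) above and the per-pair scores of Table 2 (spelled out in detail just below) reduce to a few lines of bookkeeping over per-sample model answers. The sketch below is our own illustration in Python; the record layout and field names are hypothetical, not taken from the paper's code:

```python
from dataclasses import dataclass

@dataclass
class Answer:
    context: str   # "ambiguous" or "disambiguated"            (hypothetical schema)
    version: str   # context version: "biased" or "counter-biased"
    choice: str    # model's pick: "biased", "counter-biased", or "unknown"

def accuracy_and_diff_bias(answers):
    """Acc_A and Diff-bias_A (Eq. 1); Acc_D and Diff-bias_D (Eq. 3)."""
    amb = [a for a in answers if a.context == "ambiguous"]
    dis = [a for a in answers if a.context == "disambiguated"]
    acc_a = sum(a.choice == "unknown" for a in amb) / len(amb)
    diff_a = (sum(a.choice == "biased" for a in amb)
              - sum(a.choice == "counter-biased" for a in amb)) / len(amb)
    # In a disambiguated context, the correct choice matches the context version
    acc_d = sum(a.choice == a.version for a in dis) / len(dis)
    def acc(version):
        xs = [a for a in dis if a.version == version]
        return sum(a.choice == a.version for a in xs) / len(xs)
    diff_d = acc("biased") - acc("counter-biased")
    return acc_a, diff_a, acc_d, diff_d

def pair_bias_score(a):
    """Per-pair score following Table 2; a template's score sums its 8 pairs,
    giving the -8 to +8 range described above."""
    if a.context == "ambiguous":
        return {"counter-biased": -1, "unknown": 0, "biased": 1}[a.choice]
    if a.version == "counter-biased":   # correct answer: the counter-biased one
        return {"counter-biased": 0, "unknown": 1, "biased": 2}[a.choice]
    return {"counter-biased": -2, "unknown": -1, "biased": 0}[a.choice]
```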
For ambiguous contexts (where the answer is 'Unknown'), if the model responds with 'Unknown', the bias score is 0. In other scenarios, a bias score of +1 is given when the model provides a wrong biased answer, and -1 is assigned for a wrong counter-biased answer. For example, for a given question _'Who was forgetful?'_, if the model chooses _'78-year-old'_, which is a biased answer, it gets +1. Conversely, if it outputs _'22-year-old'_, which is a counter-biased answer, it earns -1. In disambiguated contexts, the scoring differs between counter-biased and biased contexts. For counter-biased contexts, where the correct answer is the counter-biased one, the model earns +1 if it selects 'Unknown', and earns +2 if it chooses the biased answer. Conversely, in biased contexts, where the correct answer is the biased one, the model earns -1 if it picks 'Unknown' and -2 if it chooses the counter-biased answer. For instance, consider the same question from above under a counter-biased disambiguated context. If the model selects _'78-year-old'_ as the answer, it will receive a +2 score. On the other hand, in a biased disambiguated context, if the model chooses _'22-year-old'_ as the answer to the same question, it will be assigned a -2 score. \begin{table} \begin{tabular}{c|c|c c c} \hline \hline Context & \multicolumn{4}{c}{Bias Score Measure} \\ \hline \multirow{4}{*}{Ambiguous} & Answer Type & \multicolumn{3}{c}{Counter-Biased/Biased Context} \\ \cline{2-5} & Choice & Counter-biased & Unknown & Biased \\ & Answer & & ✓ & \\ & Bias Score & -1 & 0 & 1 \\ \hline \multirow{8}{*}{Disambiguated} & Answer Type & \multicolumn{3}{c}{Counter-Biased Context} \\ \cline{2-5} & Choice & Counter-biased & Unknown & Biased \\ & Answer & ✓ & & \\ & Bias Score & 0 & 1 & 2 \\ \cline{2-5} & Answer Type & \multicolumn{3}{c}{Biased Context} \\ \cline{2-5} & Choice & Counter-biased & Unknown & Biased \\ & Answer & & & ✓ \\ & Bias Score & -2 & -1 & 0 \\ \hline \hline \end{tabular} \end{table} Table 2: A method of measuring bias scores at template level. This table represents scores for four context-question pairs that can arise from a single question. An answer type means the correct answer to a given question. ### Experimental Results In this section, we present the experimental results of 8 LLMs on KoBBQ. In general, Korean-centric LLMs show higher performance on KoBBQ than BBQ, while the other multilingual LLMs show higher performance on BBQ. The proportions of _out-of-choice_ on KoBBQ range from 0.0109 to 0.2025. The accuracy and diff-bias scores on BBQ and the _out-of-choice_ ratio on KoBBQ and BBQ are reported in Appendix C. Accuracy and Diff-bias scoreAs shown in Figure 3, all models show higher accuracy in disambiguated contexts compared to ambiguous contexts. ChatGPT, Claude, and Bard models achieve an accuracy of over 0.7 in disambiguated contexts. However, in ambiguous contexts, they tend to produce biased outputs, resulting in lower accuracy and higher bias score. On the other hand, other models show lower accuracy. These models produce somewhat random outputs, leading to both low accuracy and bias. In particular, Alpaca and GPT-3 show a tendency to randomly choose one of the two groups, excluding 'Unknown'. As a result, their accuracy is significantly lower than 0.3 in ambiguous contexts and around 0.5 in disambiguated contexts. Distribution of Template-level Bias ScoreFigure 4 shows the distribution of template-level bias scores of each model.
The more accurate and unbiased a model is, the more it exhibits a distribution with a high frequency at 0. The template-level bias scores of the models are mostly distributed within the range of -2 to 2. ChatGPT, Bard, and Claude show right-skewed distributions, indicating a tendency to lean toward answers that align with certain biases. On the other hand, the distribution of bias scores of Korean-centric models, HyperCLOVA, KULLM, and KoAlpaca are not right-skewed. GPT-3, KULLM, and KoAlpaca have mean values close to zero, seeming to be an unbiased model without leaning toward one direction. However, their bias score distributions are evenly skewed on both sides, and this says that they often generate answers that are biased towards either side. Figure 4: Bias score distribution at template-level of models. X-axis and y-axis indicate the bias score and frequency, respectively. A distribution skewed to the right indicates the model frequently responds to the target group as an answer. If the distribution is centered around 0, it implies that the model has produced answers that are not biased toward one particular group. The uniform model in the last row is the distribution from random answer selection. Figure 3: The bias score and accuracy of models. Dash lines indicate the max bias score depending on the accuracy. ## 5 Discussion To highlight the need for building a hand-crafted bias benchmark for different cultures, we compare BBQ and machine-translated BBQ with KoBBQ. Why is BBQ not enough for Korean bias benchmark? (KoBBQ vs. BBQ)The accuracy scores of multilingual models vary between KoBBQ and BBQ, with higher bias scores in KoBBQ (as shown in Figure 4(a)). Interestingly, this trend is evident, especially in Simply-Translated samples in both datasets that have identical contexts (as seen in Figure 4(b)). These findings indicate that even within a single model, the level of bias can differ across different languages. Under ambiguous contexts, as shown in Figure 4(b), the Newly-Created samples have comparatively lower accuracy and higher bias scores. This demonstrates that the samples the authors added identify the presence of unexamined inherent bias in LMs. Furthermore, the accuracy difference between KoBBQ and BBQ is more significant in Target-Modified samples than in Simply-Translated samples. This suggests that the change in target groups tailored to Korean culture significantly impacts the model's performance, in addition to language differences. Why is machine translation not enough? (KoBBQ vs. machine-translated BBQ)One of the easiest methods for translating BBQ samples is using machine translation. As BBQ samples are created by filling in attributes in templates, one way to translate them is to translate templates and attribute candidates separately (t-mtBBQ). Another way is to fill in attributes from the candidates first and then perform sample-level translation (s-mtBBQ) to mitigate the issue of lacking information on parts of speech (PoS) and postposition in templates and attributes. In this section, we compare the LMs' performance on machine-translated BBQs (t-mtBBQ and s-mtBBQ) to manually-translated samples (Simply-Translated and Target-Modified) in KoBBQ. Overall, as shown in Figure 4(a), the scores on machine-translated BBQs are significantly different from those on KoBBQ or original BBQ, with t-mtBBQ and s-mtBBQ demonstrating similar scores with each other under ambiguous contexts. 
In Figure 4(b), model bias is higher when using KoBBQ or BBQ with only ambiguous context compared to machine-translated BBQs. Displaying higher bias in an ambiguous context indicates that the model shows a higher tendency to choose the target group as an answer to the biased question even when there is not enough context. On the other hand, when the disambiguated context is additionally given, the model accuracy is always higher when using KoBBQ or BBQ compared to the machine-translated versions. As question answering under the disambiguated context is similar to machine reading comprehension tasks, it can be understood that manual translation helps the LMs' comprehension of the context. This once again highlights the limitations of using machine-translation methods for converting an existing dataset into another language. Figure 5: Accuracy and bias score of models (a) by dataset type, and (b) by label type. Figure 5 (a) shows the difference between the original BBQ, machine-translated versions of BBQ, and KoBBQ. Figure 5 (b) shows the difference between the sample types in different dataset types. Note that Sample-Removed only exists in BBQ, and Newly-Created only exists in KoBBQ. Both figures contain performances under the ambiguous context and the disambiguated context. For simplification, only the performances of Claude and ChatGPT are shown. Conclusion We presented a Korean bias benchmark (KoBBQ) that contains question-answering data with situations related to biases existing in Korea. From BBQ dataset, the existing US-centric bias benchmark, we divided its samples into three classes (Simply-Translated, Target-Modified, and Sample-Removed) to make it culturally adaptive. Additionally, we added new four categories that depict biases prevalent in Korean culture. KoBBQ involves 4,740 samples across 12 categories of social bias. We conducted various experiments and analyses using 8 Korean-specialized and multilingual LLMs. Overall, the multilingual LLMs tended to produce biased answers on the KoBBQ. The biases of LLMs on KoBBQ are different from those on BBQ, suggesting that inherent bias varies depending on the language within a model. However, the Korean-centric models showed much lower performance on question answering with instruction following. Korean-centric generative models with better question-answering performance are expected to derive more meaningful results with KoBBQ. We believe KoBBQ will serve as a helpful resource in developing benchmarks that measure biases of Korean cultural perspectives. ### Limitations During the data preprocessing phase, the authors manually classified original BBQ samples into three classes: Simply-Translated, Target-Modified, and Sample-Removed. Although all the authors are Korean citizens, there may be some inherent bias within this pool. However, the authors filtered the samples based on the results of a large-scale survey conducted among Korean citizens. This approach ensured a more objective and representative dataset of Korean culture as a result. In this paper, the Korean-specialized LLMs utilized did not achieve the necessary accuracy for conducting a meaningful analysis of bias scores. Considering the three possible answer choices, these models demonstrated performance levels comparable to random selection, with scores close to or lower than 0.33. These results highlight the constraints of the existing Korean models. 
However, we remain eager to rerun the experiment when new Korean models become available, expecting them to exhibit improved QA performance. With models that inherit the social biases prevalent in Korean society, we can conduct a more comprehensive and insightful analysis of bias scores in models tailored to Korean culture. ### Ethics Statement We expect that our KoBBQ can considerably contribute to the improvement of the safe usage of LLMs' applications by assessing the inherent social biases present in the models. All studies in this research project were performed under our institutional review board (IRB) approval. We have thoroughly addressed ethical considerations throughout our study, focusing on (1) constructing the data, (2) validating the data with crowdworkers, and (3) releasing the data. This paper builds upon and complements the original BBQ dataset in English by creating Korean-specific and Korean-customized bias benchmark dataset called KoBBQ. The generation of all samples in KoBBQ follows previous literature provided with reliable references. To ensure the quality and reliability of our data, we recruited a sufficient number of crowdworkers in the validation process. There was no discrimination when recruiting and selecting crowdworkers regarding any demographics, including gender and age. During the survey process, we informed all crowdworkers that the content might be stereotypical or biased. We set the wage per session to be above the minimum wage in the Republic of Korea in 2023 (KRW 9,260 \(\approx\) USD 7.25) 15. Footnote 15: [https://www.minimumwage.go.kr/](https://www.minimumwage.go.kr/) We acknowledge the potential risk associated with releasing a dataset that contains stereotypes and biases. This dataset must not be used as training data to automatically generate and publish biased languages targeting specific groups. However, by publicly releasing it, we acknowledge that we cannot entirely prevent all malicious use. To address this concern, we will explicitly state the terms of use in that we do not condone any malicious use. We strongly encourage researchers and practitioners to utilize this dataset in beneficial ways, such as mitigating bias in existing LMs. ## Acknowledgements This work was supported by NAVER Cloud.
2309.08954
Recent advances in cosmological singularities
The discovery of the universe's late-time acceleration and dark energy has spurred a great deal of research into cosmological singularities, and in this brief review we discuss all the prominent developments in this field for the best part of the last two decades. We discuss the fundamentals of space-time singularities, after which we discuss in detail all the different forms of cosmological singularities which have been discovered in recent times. We then talk about methods and techniques to avoid or moderate these singularities in various theories and discuss how these singularities can occur in non-conventional cosmologies too. We then discuss a useful dynamical systems approach to deal with these singularities and finish up with some outlooks for the field. We hope that this work serves as a good resource to anyone who wants to stay up to date with the developments in this very exciting area.
Oem Trivedi
2023-09-16T11:09:47Z
http://arxiv.org/abs/2309.08954v2
# Recent advances in cosmological singularities ###### Abstract The discovery of the universe's late-time acceleration and dark energy has spurred a great deal of research into cosmological singularities, and in this brief review we discuss all the prominent developments in this field for the best part of the last two decades. We discuss the fundamentals of space-time singularities, after which we discuss in detail all the different forms of cosmological singularities which have been discovered in recent times. We then talk about methods and techniques to avoid or moderate these singularities in various theories and discuss how these singularities can occur in non-conventional cosmologies too. We then discuss a useful dynamical systems approach to deal with these singularities and finish up with some outlooks for the field. We hope that this work serves as a good resource to anyone who wants to stay up to date with the developments in this very exciting area. ## 1 Introduction Observations of the late-time acceleration of the Universe came as a huge surprise to the cosmological community [1], and ever since then a lot of work has been done in order to explain this expansion. The cosmological expansion problem has so far been addressed from multiple directions, which include the standard approach of the cosmological constant [2, 3, 4] alongside more exotic scenarios like modified gravity theories [5, 6, 7] and scalar-field-driven late-time cosmic acceleration scenarios [8, 9, 10, 11, 12, 13, 14]. Several approaches to quantum gravity have also weighed in on the cosmic-acceleration puzzle, ranging from the braneworld cosmology of string theory to the likes of loop quantum cosmology and asymptotically safe cosmology [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]. This, however, has also sprung up some discrepancies which seem to be pointing towards the limits of our current understanding of the universe, the most famous of which is arguably the Hubble tension, which refers to the disagreement between the values of the Hubble constant measured from detailed Cosmic Microwave Background (CMB) maps combined with Baryon Acoustic Oscillation data, and those from the SNeIa data [26, 27, 28]. Hence, the current epoch of the universe has certainly provided us with a wide range of questions and looks set to become an avenue where advanced gravitational physics will lead the way towards a better understanding of cosmology. There has also been an expansive literature in recent times devoted to the study of various types of singularities that could occur during the current and far future of the Universe, with the observation of late-time acceleration having given a significant boost to such works [29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51]. Even the term singularity comprises many different definitions, and with regard to cosmological cases, until the end of the 20th century the only popular possibilities of singularity formation were the initial Big Bang singularity and, in the case of spatially closed cosmological models, the final Big Crunch singularity. The definition of a singular point in cosmology was given by Hawking and Penrose, and most of the theorems proven by them make use of the null energy condition and of the facts that at a singular point of the spacetime geodesic incompleteness occurs and the curvature scalars diverge.
Although in modified gravity the null energy condition may in general differ from that of the Einstein-Hilbert case (see for example), it is generally accepted that geodesic incompleteness and the divergence of the curvature invariants strongly indicate the presence of a crushing singularity. The singularities in cosmology vary in their effects, and a complete classification of these was performed in. While one can treat singularities as points at which a cosmological theory somewhat fails, one might also consider them as windows to new physics, and in that sense they hold a different kind of appeal. In particular, finite-time singularities (those which happen in a finite time) could be viewed either as flaws in the classical theory, or alternatively as a doorway towards a quantum description of general relativity. This is due to the fact that these cannot be dressed in a way similar to the spacelike singularities of black holes, for instance, and so one is left to ponder the accuracy of the predictions of classical gravitational theories. Hence, studying singularities in cosmological contexts, and how they could possibly be removed, provides a way towards a deeper understanding of how quantum descriptions of cosmology relate to classical ones. These cosmological singularities which have been discussed in recent times can be classified broadly into two types: strong and weak (such a classification was initially put forward by [52]). Strong singularities are those singularities which can distort finite objects and can mark either the beginning or the end of the universe, with the big bang being the one for the start of the universe and the so-called "big rip" signalling the end of the universe. Weak singularities, as the name might suggest, are those which do not have such far-reaching implications and do not distort finite objects in the same sense as their strong counterparts. We can discuss these various singularities in more detail as follows, in accordance with the classification provided in [32, 53]: * Type -1 ("Grand Bang/Grand Rip"): In this case, the scale factor becomes null (Bang) or diverges (Rip) for \(w=-1\) [54, 64] * Type 0 ("Big Bang"): In this case, the scale factor becomes null for \(w\neq-1\) * Type I ("Big Rip"): In this case, the scale factor, effective energy density and effective pressure density diverge for \(w\neq-1\). This is a scenario of universal death, wherein everything which resides in the universe is progressively torn apart [55]. * Type II ("Sudden/Quiescent singularity"): In this case, the pressure density diverges, and so do the derivatives of the scale factor from the second derivative onwards [56]. The weak and strong energy conditions hold for this singularity. Also known as quiescent singularities, although this name originally appeared in contexts related to non-oscillatory singularities [57]. A special case of this is the big brake singularity [58]. * Type III ("Big Freeze"): In this case, the derivatives of the scale factor diverge from the first derivative onwards. These were detected in generalized Chaplygin gas models [59]. * Type IV ("Generalized sudden singularities"): These are finite-time singularities with finite density and pressure instead of diverging pressure. In this case, the derivatives of the scale factor diverge from a derivative higher than the second [32, 60].
* Type V ("w-singularities") : In this case, the scale factor, the energy and pressure densities are all finite but the barotropic index \(w=\frac{p}{\rho}\) becomes singular [61]. * Type \(\infty\)(" Directional singularities "): Curvature scalars vanish at the singularity but there are causal geodesics along which the curvature components diverge [62] and in this sense, the singularity is encountered just for some observers. * Inaccessible singularities: These singularities appear in cosmological models with toral spatial sections, due to infinite winding of trajectories around the tori. For instance, compactifying spatial sections of the de Sitter model to cubic tori. However, these singularities cannot be reached by physically well defined observers and hence this prompts the name inaccessible singularities [63]. * Type -1 ("Grand Bang/Rip") : In this case, the scale becomes null or diverges for \(w=-1\)[64]. All of these singularities discussed above have been studied in a variety of different contexts and in this review, we would like to summarize works primarily of the past two decades on these topics and discuss the current status quo of such singularities. In section II, we would discuss in detail all the singularities we have listed above and how these have been shown to form in different cosmological scenarios. In Section III, we will discuss some lesser known singularities which are more special cases of the previously listed singularities while in section IV, we would discuss various ways which have been shown to remove such singularities (in some cases). In Section V we would discuss a particular dynamical system analysis method (known as the Goriely-Hyde method ) which has been shown to be very useful for cosmological singulartity discussions. Finally in setion VI, we would summarize our brief review and discuss the future outlooks for cosmology with regards to singularities. ## 2 An overview on space-time singularities After Einstein proposed the general theory of relativity, which describes gravity in terms of spacetime curvature, the field equations were introduced to relate the geometry of spacetime to the matter content of the universe. Early solutions included the Schwarzschild metric and the Friedmann models. These Figure 1: The classification of cosmological singularities summarized models described the gravitational field around isolated objects and the overall geometry of the universe, respectively. These models exhibited spacetime singularities where curvatures and energy densities became infinitely high, leading to a breakdown of the physical description. The Schwarzschild singularity at the center of symmetry could be eliminated by a coordinate transformation, but the genuine curvature singularity at \(r=0\) remained. It was initially believed that these singularities were a result of the high symmetry in the models. However, further research by Hawking, Penrose, Geroch, and others demonstrated that spacetimes could have singularities under more general conditions. Singularities are an inherent feature of the theory of relativity and also apply to other gravitational theories based on spacetime manifolds. These singularities indicate super ultra-dense regions in the universe where physical quantities become infinitely large.In classical theories of gravity, singularities are an unavoidable aspect of describing physical reality. 
The behavior of these regions is beyond the scope of classical theory, and a quantum theory of gravity is needed to understand them. The field of gravitational physics saw significant developments in the 1960s due to observations of high-energy astrophysical phenomena and advancements in the study of spacetime structure and singularities. These advancements led to progress in black hole physics, relativistic astrophysics, and cosmology.

Singular behavior is observed in space-time models described by general relativity. Examples include the Friedmann-Robertson-Walker (FRW) cosmological models and the Schwarzschild space-time. These models exhibit singularities where energy density and curvatures become infinitely large, leading to a breakdown of the conventional description of space-time. The Schwarzschild space-time displays an essential curvature singularity at \(r=0\), where the Kretschmann scalar \(\alpha=R^{ijkl}R_{ijkl}\) diverges along any non-spacelike trajectory approaching the singularity. Similarly, for FRW models with \(\rho+3p>0\) at all times (where \(\rho\) is the total energy density and \(p\) is the pressure), a singularity arises at \(t=0\), representing the origin of the universe. Along past-directed trajectories approaching this singularity, both \(\rho\) and the curvature invariant \(R_{ij}R^{ij}\) become infinite. In both cases, past-directed non-spacelike geodesics are incomplete, and these essential singularities cannot be eliminated through coordinate transformations.

These singularities represent profound anomalies in space-time, where the usual laws of physics fail. Geodesic incompleteness implies that a timelike observer will cease to exist in the space-time after a finite amount of proper time. While singular behavior can occur without extreme curvature, such cases are considered artificial. An example is the Minkowski space-time with a removed point, where timelike geodesics encounter the hole and become future incomplete. However, it is desirable to exclude such situations by requiring the space-time to be "inextendible," meaning it cannot be isometrically embedded into a larger space-time as a proper subset. Nevertheless, non-trivial examples of singular behavior exist, such as conical singularities. These singularities do not involve diverging curvature components but are characterized by a Weyl-type solution. An example is the metric given by \(ds^{2}=-dt^{2}+dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta\,d\phi^{2})\) with the identification of \(\phi=0\) and \(\phi=a\) (with \(a\neq 2\pi\)), creating a conical singularity at \(r=0\).

The fundamental question is whether such singularities persist in general models and under what conditions they arise. Precisely defining a singularity in a general space-time reveals that singularities likely exist in a broad range of space-times, subject to reasonable conditions. These singularities can emerge as the endpoint of gravitational collapse or in cosmological scenarios, such as the origin of the universe. The initial observation to make here is that, by its very definition, the metric tensor must possess a well-established meaning at every typical point within the spacetime. However, this principle ceases to hold at a spacetime singularity, like those previously discussed. Such a singularity cannot be considered a standard point within the spacetime; instead, it is a boundary point attached to the manifold.
Consequently, difficulties arise when attempting to characterize a singularity based on the requirement that curvatures become infinite in proximity to it. The issue stems from the fact that, since the singularity lies outside the spacetime domain, it is not feasible to define its vicinity in the usual sense, which is essential for discussing the behavior of curvature quantities in that specific region. An alternative approach might involve defining a singularity in relation to the divergence of components of the Riemann curvature tensor along trajectories that do not follow spacelike directions. However, a challenge arises here as well: the behavior of these components can change depending on the reference frames employed, rendering this approach less useful. One might consider utilizing curvature scalars, or scalar polynomials in the metric and the Riemann tensor, demanding that they reach exceedingly large values. Instances of such divergence are encountered in models such as Schwarzschild and Friedmann. However, it remains possible that such a divergence only occurs at infinity for a given nonspacelike path. In a broader sense, it seems reasonable to expect some form of curvature divergence to occur along nonspacelike trajectories that intersect a singularity. Nevertheless, attempting to universally characterize singularities through curvature divergence encounters various complications.

Taking into account these scenarios and analogous ones, the presence of nonspacelike geodesic incompleteness is widely accepted as a criterion indicating the existence of a singularity within a spacetime. Although this criterion may not encompass all potential forms of singular behavior, it is evident that the occurrence of incomplete nonspacelike geodesics within a spacetime manifold signifies definite singular behavior. This manifests when a timelike observer or a photon abruptly vanishes from the spacetime after a finite interval of proper time or a finite value of the affine parameter. The singularity theorems, which emerge from an analysis of gravitational focusing and the global attributes of a spacetime, establish this incompleteness for a broad array of spacetimes under a set of relatively general conditions.

From a physical standpoint, a singularity in any theory of physics typically indicates that the theory becomes invalid either in the vicinity of the singularity or directly at it. This implies the need for a broader and more comprehensive theory, necessitating a revision of the existing framework. Similar reasoning applies to spacetime singularities, suggesting that a description involving quantum gravity is warranted within these regions of the universe, rather than relying solely on a classical framework. The existence of an incomplete nonspacelike geodesic, or of an inextendible nonspacelike curve with a finite length as measured by a generalized affine parameter, implies the presence of a spacetime singularity. The "generalized affine length" of such a curve is defined as
\[L(\lambda)=\int_{0}^{a}\left[\sum_{i=0}^{3}(X^{i})^{2}\right]^{1/2}ds,\]
which remains finite. The components \(X^{i}\) represent the tangent to the curve in a tetrad frame propagated in parallel along the curve. Each incomplete curve defines a boundary point of the spacetime, which is singular. To be considered a genuine physical singularity, such a singularity is expected to be associated with unbounded growth of the spacetime curvatures.
If all curvature components and scalar polynomials involving the metric and Riemann curvature tensor remain finite and well-behaved as the singularity is approached along an incomplete nonspacelike curve, the singularity might be removable by extending the spacetime with relaxed differentiability requirements [65]. Different formalizations are possible for this requirement. A "parallely propagated curvature singularity" is one where the components of the Riemann curvature tensor are unbounded in a parallely propagated frame, forming the endpoint of at least one nonspacelike curve. Conversely, a "scalar polynomial singularity" occurs when a scalar polynomial involving the metric and Riemann tensor takes on infinitely large values along a nonspacelike curve ending at the singularity. This includes cases like the Schwarzschild singularity, where the Kretschmann scalar (\(R^{ijkl}R_{ijkl}\)) becomes infinite as \(r\) approaches 0. Curvature singularities, as further elucidated, also arise in various spacetime scenarios involving gravitational collapse.

The strength of singularities and their potential to exert tidal forces on extended bodies can be assessed, and various criteria are available to determine this aspect [52]. These criteria all involve representing a finite object, at each point along a causal geodesic, as a volume defined by three independent Jacobi fields in the hypersurface with the velocity of the curve as the normal vector. Tipler's criterion [66] deems a singularity strong if this volume tends to zero as the singularity is approached along the geodesic. On the other hand, Krolak's criterion [67] stipulates that the derivative of this volume with respect to the normal parameter must be negative. Consequently, some singularities can be strong according to Krolak's criterion while being weak according to Tipler's, such as type III or Big Freeze singularities. Another criterion is outlined in [68]. Working with Jacobi fields can be demanding, as it involves solving the Jacobi equation along geodesics. Nevertheless, conditions for lightlike and timelike geodesics, satisfying both criteria, have been established [65]. These conditions are expressed in terms of integrals of the Ricci and Riemann curvatures of the spacetime metric along these curves:

* Lightlike geodesics: According to Tipler's criterion, a singularity is strong along a lightlike geodesic if and only if the integral
\[\int_{0}^{\tau}d\tau^{\prime}\int_{0}^{\tau^{\prime}}d\tau^{\prime\prime}R_{ij}u^{i}u^{j}\]
diverges as the normal parameter \(\tau\) approaches the singularity. Krolak's criterion states that the singularity is strong if and only if the integral
\[\int_{0}^{\tau}d\tau^{\prime}R_{ij}u^{i}u^{j}\]
diverges as \(\tau\) approaches the singularity.
* Timelike geodesics: For timelike geodesics, [65] presents various necessary and sufficient conditions, but not a single characterization. Adhering to Tipler's criterion, a singularity is strong along a timelike geodesic if the integral
\[\int_{0}^{\tau}d\tau^{\prime}\int_{0}^{\tau^{\prime}}d\tau^{\prime\prime}R_{ij}u^{i}u^{j}\]
diverges on approaching the singularity. Conforming to Krolak's criterion, the singularity is strong if the integral
\[\int_{0}^{\tau}d\tau^{\prime}R_{ij}u^{i}u^{j}\]
diverges on approaching the singularity. Additional necessary conditions exist, although they are not utilized for our purposes.
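As a quick illustration of how these criteria are applied in practice, the following sympy sketch (our own check, not taken from the references) evaluates the Tipler-type double integral for a big-rip-like scale factor \(a(t)=(t_{s}-t)^{-n}\), using the standard flat-FLRW fact that \(R_{ij}u^{i}u^{j}\) for a comoving observer is proportional to \(\ddot{a}/a\) (up to sign conventions, which do not affect the divergence):

```python
import sympy as sp

t, t1, t2 = sp.symbols('t t1 t2', positive=True)
ts = sp.Symbol('t_s', positive=True)
n = sp.Symbol('n', positive=True)
tau = sp.Symbol('tau', positive=True)

# Big-rip-like scale factor a(t) = (t_s - t)^(-n). For a comoving observer in
# flat FLRW, R_{ij} u^i u^j is proportional to addot/a, so we track 3*addot/a.
a = (ts - t)**(-n)
ricci_uu = sp.simplify(3 * sp.diff(a, t, 2) / a)   # = 3 n (n+1) / (t_s - t)^2

# Tipler-type double integral along the geodesic, evaluated up to tau < t_s
inner = sp.integrate(ricci_uu.subs(t, t2), (t2, 0, t1))
outer = sp.simplify(sp.integrate(inner, (t1, 0, tau)))
print(outer)
# The result contains log(t_s - tau), which diverges as tau -> t_s, so the
# big rip is strong by Tipler's criterion; the single (Krolak-type) integral
# "inner" diverges even faster, like 1/(t_s - tau).
```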
In passing, it is also of interest to mention the cosmic censorship conjecture [69], which is the idea that all singularities arising from gravitational collapse will always be hidden by an event horizon. There are actually two versions of this conjecture; the weak version is that dynamical singularities in general relativity are generically not visible to observers at infinity, while the strong version is that dynamical singularities in general relativity are generically not visible to any observer. Singularities in violation of the weak version are dubbed globally naked, while those in violation of the strong version are dubbed locally naked. The conjectures have not yet been proven, have been a topic of recurring debate, and have spurred a lot of work on the topic of naked singularities. In principle, one could think of cosmological singularities as naked singularities as well, given that there is no need for an event horizon in such cases and no such horizon develops in the scenarios in which these singularities form. Several examples of spacetimes containing naked singularities have been found in recent times [70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82]. When such singularities develop in gravitational collapse, they again give rise to extremely intriguing physical possibilities and problems. The opportunity offered in that case is that we may be able to observe the ultra-high-energy physical processes occurring in such regions of the universe, including quantum gravity effects. In fact, loop quantum gravity in particular has a very amicable view of naked singularities and has been shown to be in favour of their existence [83, 84]. Such observations of ultra-high-energy events in the universe could provide observational tests and guide our efforts towards a possible quantum theory of gravity, and so naked singularities can also be a good avenue for testing the predictions of such theories. Very recently, another very interesting work with regard to naked singularities was performed in [85]. The authors there performed general relativistic ray-tracing and radiative transfer simulations to generate synchrotron emission images, utilising a thermal distribution function for the emissivity and absorptivity. They investigate effects in the images of the JMN-1 naked singularity and the Schwarzschild black hole by varying the inclination angle, disk width and frequency. Their results give further motivation for naked singularities being a realistic scenario.

## 3 Types of singularities

### Strong singularities

As mentioned before, strong singularities are those which can distort finite objects in space-time, and we would now like to discuss the prominent singularities in this regard.

#### 3.1.1 Big Bang singularity (Type 0)

Classical models of the universe generically feature an initial or 'big bang' singularity: when we consider progressively earlier and earlier stages of the universe, observable quantities stop behaving in a physically reasonable way. A more precise mathematical characterisation of the cosmic big bang singularity can be made in terms of both a global notion of incompleteness of inextendible causal (i.e., non-spacelike) past-directed curves and a local notion of the existence of a curvature pathology. Models of inflation also feature massive moving particles seeing a singularity in a finite proper time. The big bang, as it is popularly known, is hence the singularity at the very beginning of the universe.
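The local curvature pathology can be made concrete by evaluating a curvature invariant along the approach to \(t=0\). The sketch below (our own illustration, using the standard result that the Kretschmann scalar of a spatially flat FLRW metric is \(K=12[(\ddot{a}/a)^{2}+(\dot{a}/a)^{4}]\)) does this for a matter-dominated universe:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
a = t**sp.Rational(2, 3)          # flat, matter-dominated FLRW scale factor

# Spatially flat FLRW Kretschmann scalar: K = 12[(addot/a)^2 + (adot/a)^4]
K = 12 * ((sp.diff(a, t, 2) / a)**2 + (sp.diff(a, t) / a)**4)
print(sp.simplify(K))             # -> 80/(27*t**4)
print(sp.limit(K, t, 0, '+'))     # -> oo : the invariant blows up at t = 0
```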
#### 3.1.2 Big Rip singularity (Type I)

In the case of a big rip singularity, the scale factor of the universe becomes infinite at a finite time in the future, and the energy density and pressure density of the universe also become infinite. The dark energy density becomes so large that it causes the expansion of the universe to accelerate at an ever-increasing rate; as a result, the scale factor increases without bound, and the universe becomes infinitely large at the time of the big rip, at which point the energy density and pressure density diverge as well. The thing to note here is that, interestingly, the big rip was proposed as a possible phantom scenario for the universe [55], which means that the equation of state satisfies
\[w=\frac{p}{\rho}<-1\]
The phantom conclusion is interesting in that it presents some peculiar properties, like the energy density of phantom energy increasing with time, or the fact that a phantom scenario violates the dominant energy condition [86]. Despite the fact that sound waves in quintessence travel at the speed of light, it should not be automatically assumed that disturbances in phantom energy must propagate faster than the speed of light. Indeed, there exist several scalar-field models for phantom energy where the sound speed is actually subluminal [87, 88, 89, 90]. Phantom constructions have also been discussed in the context of quantum gravitational theories, for example in various string-theoretic realizations of dark energy [15, 91, 92]. So it seems in principle interesting to look for a late-time universe scenario with phantom dominance, and that is where the big rip comes in.

It is also worth discussing the subtleties of the big rip and how it would unfold. In a universe resembling one with a cosmological constant, the scale factor grows faster than the Hubble distance, leading galaxies to gradually vanish beyond our observable horizon. If we introduce the concept of phantom energy, the expansion rate (Hubble constant) increases over time, causing the Hubble distance to shrink. Consequently, galaxies disappear at an accelerated pace as the cosmic horizon approaches. What is even more intriguing is the potential of the enhanced dark energy density to eventually tear apart objects held together by gravity. In the framework of general relativity, the source of the gravitational potential stems from the volume integral of the sum of the energy density (\(\rho\)) and three times the pressure (\(p\)), denoted as \(\rho+3p\). For instance, a planet in orbit around a star with mass \(M\) and radius \(R\) becomes unbound approximately when the condition \(-(4\pi/3)(\rho+3p)R^{3}\approx M\) is satisfied. In cases where the quantity \(-(\rho+3p)\) decreases over time, owing to a parameter \(w\) greater than or equal to \(-1\), if \(-(4\pi/3)(\rho+3p)R^{3}\) is smaller than \(M\) at present, it will continue to remain smaller indefinitely. This implies that any currently gravitationally bound system, such as the solar system, the Milky Way, the Local Group, and galaxy clusters, will remain bound in the future. However, when dealing with phantom energy, the quantity \(-(\rho+3p)\) increases with time. Consequently, at a certain point in time, every gravitationally bound system will eventually disintegrate.
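The finite-time blow-up of the scale factor itself is easy to exhibit numerically. The short sketch below (with illustrative, assumed parameters \(\Omega_{m}=0.3\), \(\Omega_{de}=0.7\), constant \(w=-1.5\) and \(H_{0}=70\) km/s/Mpc, which are our choices rather than values fixed by the works cited above) integrates the Friedmann equation forward from today:

```python
import numpy as np
from scipy.integrate import quad

H0 = 70 * 1.022e-3          # 70 km/s/Mpc converted to 1/Gyr
Om, Ode, w = 0.3, 0.7, -1.5  # assumed flat universe with phantom dark energy

def H(a):
    # Friedmann equation: H(a) = H0 * sqrt(Om a^-3 + Ode a^{-3(1+w)})
    return H0 * np.sqrt(Om * a**-3 + Ode * a**(-3.0 * (1.0 + w)))

# Cosmic time needed to go from a = 1 (today) to a -> infinity
t_rip, _ = quad(lambda a: 1.0 / (a * H(a)), 1.0, np.inf)
print(f"scale factor diverges about {t_rip:.0f} Gyr from today")  # ~22 Gyr here
# For w >= -1 the same integral diverges, i.e. there is no finite-time
# blow-up: the finite answer above is the signature of the big rip.
```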
Analyzing the time evolution of the scale factor and the dependence of the phantom-energy density on time, we deduce that a gravitationally bound system with mass \(M\) and radius \(R\) will undergo disintegration around a time \(t\approx P\sqrt{2|1+3w|}/[6\pi|1+w|]\). Here, \(P\) represents the period of a circular orbit at radius \(R\) within the system. This process occurs prior to the Big Rip, with the earliest estimate of the Big Rip time being 35 billion years. The big rip even rips apart molecules and atoms, and eventually nuclei are dissociated (which makes the name of the singularity quite fitting). However, it is not all gloomy, as various works have explored ways to avoid the big rip too (for example [93]), and we will discuss those later on here. A lot more work has been done on various aspects of big rip singularities over the years; see [94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106]. The comparison of the Big Rip arising from dark energy and from modified gravity was made for the first time in [107]. Less drastic variants of the big rip have also been found in recent years [108, 109, 110], and these are discussed in detail in Appendix B.

#### 3.1.3 Grand Bang and Grand Rip singularities (Type \(-1\))

The grand bang, although apparently different in name, is much the same as the big bang singularity, with a null scale factor and diverging pressure and energy density, the one difference being that the singularity occurs with the equation-of-state parameter equal to \(-1\) [64]. This type of singularity was found initially by using a series ansatz for the scale factor. An understanding of the grand bang and grand rip singularities is intricately linked, and so we discuss them together. To discuss these grand singularities, we note that the equation for the parameter \(w\) is given by:
\[w=\frac{p}{\rho}=-\frac{1}{3}-\frac{2}{3}\frac{a\ddot{a}}{\dot{a}^{2}}.\]
This expression holds true specifically for flat models. When considering curvature, additional terms need to be included. The equation of state (EOS) parameter \(w\) has a close connection with the deceleration parameter \(q\):
\[q=-\frac{a\ddot{a}}{\dot{a}^{2}}=\frac{1+3w}{2},\]
assuming flat models. Otherwise, the relationship between these parameters becomes more intricate, involving the Hubble parameter \(H=\dot{a}/a\). This enables a direct translation of results from the EOS parameter to the deceleration parameter. Alternatively, one can view this equation as the differential equation governing the evolution of the scale factor for a given time-dependent barotropic index \(w(t)\).
It is advantageous to introduce the variable \(x=\ln a\):
\[\frac{\ddot{x}}{\dot{x}^{2}}=-\frac{3}{2}(w+1)=-(q+1),\]
allowing us to define
\[h(t):=\frac{3}{2}(w(t)+1)=q(t)+1\]
as a correction around the case of a pure cosmological constant:
\[w(t)=-1+\frac{2}{3}h(t),\quad q(t)=-1+h(t).\]
This change of variables assists in reducing the order of the differential equation:
\[h=-\frac{\ddot{x}}{\dot{x}^{2}}=\left(\frac{1}{\dot{x}}\right)^{\cdot}\Rightarrow\dot{x}=\left(\int h\,dt+K_{1}\right)^{-1},\]
in terms of which one finds the scale factor to be
\[a(t)=\exp\left(\int\frac{dt}{\int h(t)\,dt}\right), \tag{1}\]
If one then assumes a power-series form for \(h(t)\) (which has become quite a well-motivated ansatz in various cosmological studies)
\[h(t)=h_{0}t^{\eta_{0}}+h_{1}t^{\eta_{1}}+\cdots,\qquad\eta_{0}<\eta_{1}<\cdots \tag{2}\]
one then finds that the energy and pressure densities can be written as
\[\rho(t)=\left\{\begin{array}{ll} 3\left(\frac{\eta_{0}+1}{h_{0}}\right)^{2}t^{-2(\eta_{0}+1)}+\cdots&\mbox{if }\eta_{0}\neq-1,0\\ \\ \frac{3}{h_{0}^{2}}\frac{1}{\ln^{2}|t|}+\cdots&\mbox{if }\eta_{0}=-1\\ \\ \frac{3t^{-2}}{h_{0}^{2}}+\cdots&\mbox{if }\eta_{0}=0,\end{array}\right.\]
and the pressure,
\[p(t)=\left\{\begin{array}{ll}\frac{2(\eta_{0}+1)^{2}}{h_{0}}t^{-\eta_{0}-2}+\cdots&\mbox{if }-1\neq\eta_{0}<0\\ \\ \frac{2}{h_{0}}\frac{1}{t\ln^{2}|t|}+\cdots&\mbox{if }\eta_{0}=-1\\ \\ \frac{2h_{0}-3}{h_{0}^{2}}t^{-2}+\cdots&\mbox{if }\eta_{0}=0\\ \\ -3\left(\frac{\eta_{0}+1}{h_{0}}\right)^{2}t^{-2(\eta_{0}+1)}+\cdots&\mbox{if }\eta_{0}>0,\end{array}\right.\]
This presents us with intriguing possibilities, and our specific focus will be on the case where \(\eta_{0}>0\). In this scenario, we observe that at \(t=0\), \(\rho\) and \(p\) exhibit divergences following \(t^{-2(\eta_{0}+1)}\), and the parameter \(w\) converges to the value of \(-1\). The consideration of such a singularity had not been explored within previous frameworks. The reason behind this omission is rooted in its incompatibility with the classifications established in [111] and [112]. This is due to the behavior of the scale factor (which is an exponential of rational functions); it does not lend itself to convergent power expansions, whether generalized or not, with a finite number of terms featuring negative powers. However, the function \(x(t)\) does exhibit such behavior. The nature of the singularity is governed by the sign of the coefficient \(h_{0}\). This is evident in the approximation of \(a(t)\) as
\[a(t)\approx e^{-\mathrm{sgn}\,(h_{0})\alpha/t^{\eta_{0}}},\quad\alpha=\frac{\eta_{0}+1}{\eta_{0}|h_{0}|}>0,\quad t>0,\]
Based on this, we make the following observations:

* For \(h_{0}>0\): In this scenario, the exponential term in equation (1) is suppressed at small times, and the scale factor \(a\) approaches zero as \(t\) approaches \(0\). This resembles an exponential-type Big Bang singularity or, if we swap \(t\) for \(-t\), a Big Crunch. Given that \(h_{0}\) is positive, the barotropic index \(w\) consistently remains above the phantom divide near \(t=0\); specifically, the value \(w=-1\) is approached from values above it. These types of singularities are known as grand bang singularities.
* For \(h_{0}<0\): Conversely, in this case the exponential term grows as \(t\) approaches \(0\), causing the scale factor \(a\) to diverge to infinity there.
This resembles an exponential-type Big Rip singularity at \(t=0\), which, when considering the future, can be located by substituting \(t\) with \(-t\). In this instance, the barotropic index \(w\) consistently remains below the phantom divide, and the value \(w=-1\) is approached from below. This scenario is termed the grand rip singularity.

#### 3.1.4 Directional singularities (Type \(\infty\))

We follow the discussion of [62] in order to understand how directional singularities were initially found in cosmological models. If we start with the flat FLRW metric
\[ds^{2}=-dt^{2}+a^{2}(t)\left(dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right)\right) \tag{3}\]
it is evident that the equations governing the trajectories of geodesics, followed by observers not subject to acceleration (\(\delta=1\)) and light-like particles (\(\delta=0\)) possessing a specific linear momentum \(P\), can be simplified to:
\[\frac{dt}{d\tau}=\sqrt{\delta+\frac{P^{2}}{a^{2}(t)}} \tag{4}\]
\[\frac{dr}{d\tau}=\pm\frac{P}{a^{2}(t)} \tag{5}\]
assuming constant \(\theta\) and \(\phi\) due to the symmetry inherent in these models. Here, \(\tau\) represents the intrinsic or proper time as measured by the observer. In the context of null geodesics, we find that:
\[\Delta\tau=\frac{1}{P}\int_{-\infty}^{t}a(t)dt \tag{6}\]
Consequently, to ensure that the initial _event_ \(t=-\infty\) corresponds to a finite proper time interval \(\Delta\tau\) from an event at \(t\), the requirement is:
\[\int_{-\infty}^{t}a(t)\,dt<\infty. \tag{7}\]
Therefore, the emergence of singular behavior exclusively at \(t=-\infty\) is possible if the scale factor is an integrable function of coordinate time. This condition necessitates that \(a(t)\) tends towards zero as \(t\) approaches \(-\infty\), although this alone is not sufficient. Similarly, for timelike geodesics with non-zero \(P\):
\[\Delta\tau=\int_{-\infty}^{t}\frac{dt}{\sqrt{1+\frac{P^{2}}{a^{2}(t)}}}<\frac{1}{P}\int_{-\infty}^{t}a(t)dt \tag{8}\]
indicating that the proper time interval to \(t=-\infty\) is finite provided the corresponding interval for light-like geodesics is finite. Consequently, \(t=-\infty\) is reachable for these observers. As a result, condition (7) implies that both light-like and timelike geodesics with non-zero \(P\) experience \(t=-\infty\) within a finite proper time interval in their past. Conversely, comoving observers tracing timelike geodesics with \(P=0\) exhibit \(d\tau=dt\), which leads to \(t=-\infty\) corresponding to an infinite proper time interval in their past, and thus they cannot encounter the singularity. This dichotomy is responsible for the directional nature of Type \(\infty\) singularities, as they are accessible to causal geodesics except those with \(P=0\). Ultimately, it can be concluded that Type \(\infty\) singularities can manifest in three scenarios:

* For a finite \(\int_{-\infty}h\,dt\) with \(h(t)>0\): \(a_{-\infty}=0\), \(\rho_{-\infty}=\infty\), \(p_{-\infty}=-\infty\), \(w_{-\infty}=-1\). These differ from the "little rip" model in the sign of \(h(t)\), and are termed "little bang" if they denote an initial singularity, or "little crunch" if they represent a final singularity [113]. Instances of this case encompass models with a scale factor \(a(t)\propto e^{-\alpha(-t)^{p}}\) where \(p>1\) and \(\alpha>0\).
* When \(h_{-\infty}=0\) and \(|h(t)|\gtrsim|t|^{-1}\) with \(h(t)<0\): \(a_{-\infty}=0\), \(\rho_{-\infty}=0\), \(p_{-\infty}=0\), \(w_{-\infty}=-1\).
Changing the sign of \(h(t)\) gives rise to a variant of the "little rip" scenario, featuring an asymptotically vanishing energy density and pressure. Models with a scale factor \(a(t)\propto e^{-\alpha(-t)^{p}}\) where \(p\in(0,1)\) and \(\alpha>0\) exemplify this case.
* For a finite \(h_{-\infty}\in(-1,0)\): \(a_{-\infty}=0\), \(\rho_{-\infty}=0\), \(p_{-\infty}=0\), and a finite \(w_{-\infty}\neq-1\). This case applies to models like \(a(t)\propto t^{-p}\) with \(p>1\), as explored in [62].

While they have recently been discussed in the context of inflationary models [113], not much work has been done on Type \(\infty\) singularities since their discovery, either with regard to their avoidance or to their emergence in more exotic cosmological models.

### Weak singularities

#### 3.2.1 Sudden singularities (Type II)

In the case of such type II singularities, the pressure density diverges or, equivalently, the derivatives of the scale factor diverge from the second derivative onwards. Let us start by examining informally whether there is potential for the emergence of singularities in which a physical scalar quantity becomes unbounded at a finite future comoving proper time \(t_{s}\). This might occur when the scale factor \(a(t)\) approaches a finite, non-zero value \(a(t_{s})\) and the Hubble parameter \(H(t)\) approaches a finite value \(H_{s}\) (where \(H_{s}\) is positive and not infinite). If such scenarios are feasible, the following conditions need to be satisfied:
\[\rho\to 3H_{s}^{2}+\frac{k}{a_{s}^{2}}=\rho_{s}<\infty \tag{9}\]
and
\[\frac{\ddot{a}}{a}\rightarrow-\frac{p}{2}-\frac{H_{s}^{2}}{2}-\frac{k}{6a_{s}^{2}} \tag{10}\]
\[\dot{\rho}\rightarrow-3H_{s}(\rho_{s}+p) \tag{11}\]
Hence, it becomes apparent that the density must inevitably remain finite at \(t_{s}\). However, there is still a possibility for a singularity in pressure to arise, manifested as:
\[p(t)\rightarrow\infty \tag{12}\]
as \(t\to t_{s}\), consistent with the conditions outlined in Equation (10). In such instances, the pressure singularity is concomitant with an infinite acceleration. To illustrate this, we take the most primitive example of such singularities, which was put forward by Barrow in [56]. In this regard, assume that it is physically reasonable to expect that the scale factor can be written in the form of the following ansatz 1
Footnote 1: Appendix A provides a detailed overview of the motivations that allow us to make such a consideration for the scale factor
\[a(t)=A+Bt^{q}+C(t_{s}-t)^{n}, \tag{13}\]
where \(A>0\), \(B>0\), \(q>0\), \(C\), and \(n>0\) are constants that we will determine. We set the origin of time such that \(a(0)=0\), leading to \(A=-Ct_{s}^{n}>0\). Consequently, we find the expression for the Hubble parameter \(H_{s}\):
\[H_{s}=\frac{qBt_{s}^{q-1}}{A+Bt_{s}^{q}}. \tag{14}\]
For simplicity, we use the freedom to rescale the Friedmann metric by dividing by \(A\), and set \(A\equiv 1\) and \(C\equiv-t_{s}^{-n}\). This yields the simplified form of \(a(t)\):
\[a(t)=\left(\frac{t}{t_{s}}\right)^{q}\left(a_{s}-1\right)+1-\left(1-\frac{t}{t_{s}}\right)^{n}, \tag{15}\]
where \(a_{s}\equiv a(t_{s})\). As \(t\) approaches \(t_{s}\) from below, the behavior of the second derivative of \(a\) can be described by:
\[\ddot{a}\to q(q-1)Bt^{q-2}-\frac{n(n-1)}{t_{s}^{2}(1-\frac{t}{t_{s}})^{2-n}}\rightarrow-\infty, \tag{16}\]
whenever \(1<n<2\) and \(0<q\leq 1\). This solution is valid for \(0<t<t_{s}\).
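A quick symbolic check of this ansatz (a sketch with the illustrative choices \(t_{s}=1\), \(a_{s}=2\), \(q=1/2\), \(n=3/2\), which satisfy \(0<q\leq 1\) and \(1<n<2\); the specific values are our assumptions) confirms the advertised behaviour:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
ts, a_s = 1, 2                                   # assumed: t_s = 1, a_s = 2
q, n = sp.Rational(1, 2), sp.Rational(3, 2)      # 0 < q <= 1, 1 < n < 2

a = (t / ts)**q * (a_s - 1) + 1 - (1 - t / ts)**n
for f, name in [(a, 'a'), (sp.diff(a, t), 'adot'), (sp.diff(a, t, 2), 'addot')]:
    print(name, '->', sp.limit(f, t, ts, '-'))
# a -> 2 and adot -> 1/2 remain finite, while addot -> -oo: the scale factor
# and Hubble rate stay regular but the acceleration (hence pressure) diverges.
```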
Consequently, as \(t\) approaches \(t_{s}\), \(a\) approaches \(a_{s}\), while \(H_{s}\) and \(\rho_{s}>0\) (as long as \(3q^{2}(a_{s}-1)^{2}t_{s}^{-2}>-k\)) remain finite and \(p_{s}\rightarrow\infty\). When \(2<n<3\), \(\ddot{a}\) remains finite but \(\dddot{a}\rightarrow\infty\) as \(t\) approaches \(t_{s}\). Here, \(p_{s}\) remains finite, but \(\dot{p}_{s}\rightarrow\infty\). In contrast, there exists an initial strong-curvature singularity, where both \(\rho\) and \(p\) tend to infinity as \(t\) approaches \(0\). Importantly, in this scenario, both \(\rho\) and \(\rho+3p\) remain positive. Such behavior can even arise in a closed universe (\(k=+1\)), where the pressure singularity prevents the expansion from reaching a maximum. This is the most primitive example of a pressure singularity, but ever since the work in [56], such singularities have been discussed in a wide variety of settings, both from modified gravity perspectives and from other phenomenological considerations. Work has also been done on ways to escape such singularities, and we will discuss this later.

#### 3.2.2 Big Freeze singularity (Type III)

The Big Freeze singularity resembles the Big Rip in that the energy density diverges in the future, but it differs in that the divergence occurs at a finite value of the scale factor. This singularity was first shown in a phantom generalized Chaplygin gas (PGCG) cosmology in [59], and we shall quickly see how it unfolds in such a scenario. The equation of state governing the PGCG closely resembles that of the conventional generalized Chaplygin gas. It can be succinctly expressed as:
\[p=-\frac{A}{\rho^{\alpha}},\]
where \(A\) represents a positive constant and \(\alpha\) is a parameter. In the scenario where \(\alpha=1\), the equation assumes the form of the simple Chaplygin gas equation of state. This relationship is crucially connected to the continuity equation, given by:
\[\dot{\rho}+3H(p+\rho)=0,\]
from which emerges the expression for the energy density \(\rho\):
\[\rho=\left(A+\frac{B}{a^{3(1+\alpha)}}\right)^{\frac{1}{1+\alpha}},\]
where \(B\) stands as a constant parameter. In a noteworthy observation made in Ref. [98], it was discerned that a negative \(B\) renders the perfect fluid with the equation of state \(p=-\frac{A}{\rho^{\alpha}}\) unable to uphold the null energy condition, that is, \(p+\rho<0\). Intriguingly, under these conditions, the energy density escalates as the Universe expands, contrary to the usual redshift behavior, thus earning the name "phantom generalized Chaplygin gas" (PGCG). Further insights from the works of [114, 98] reveal that for a PGCG with \(\alpha>-1\), a FLRW Universe hosting this fluid can evade the impending big rip singularity: as the scale factor grows very large, the Universe eventually approaches an asymptotically de Sitter state. In stark contrast, in the Big Freeze scenario the PGCG energy density grows as the scale factor matures. In particular, as the scale factor approaches minuscule values (\(a\to 0\)), \(\rho\) tends towards \(A^{\frac{1}{1+\alpha}}\), while it diverges at a finite scale factor \(a_{\max}\):
\[a_{\max}=\left|\frac{B}{A}\right|^{\frac{1}{3(1+\alpha)}}.\]
As a consequence, a FLRW Universe saturated with PGCG is destined to confront a finite-radius future singularity.
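To see the finite-radius divergence explicitly, the following sketch (with purely illustrative parameters \(A=1\), \(B=-1\), \(\alpha=-3/2\), our own choices satisfying \(1+\alpha<0\) as required for a big freeze) tabulates \(\rho(a)\) as the scale factor approaches \(a_{\max}\):

```python
# Phantom generalized Chaplygin gas: rho = (A + B a^{-3(1+alpha)})^{1/(1+alpha)}.
# Assumed illustrative values: alpha = -3/2 (so 1 + alpha < 0), A = 1, B = -1,
# which give rho = (1 - a^{3/2})^(-2) and a_max = 1.
A, B, alpha = 1.0, -1.0, -1.5
a_max = abs(B / A)**(1.0 / (3.0 * (1.0 + alpha)))

for a in [0.5, 0.9, 0.99, 0.999]:
    rho = (A + B * a**(-3.0 * (1.0 + alpha)))**(1.0 / (1.0 + alpha))
    print(f"a = {a:6.3f}   rho = {rho:14.2f}")
print(f"a_max = {a_max}")
# rho stays finite as a -> 0 (it tends to A^{1/(1+alpha)} = 1) but diverges as
# a -> a_max: the universe meets a future singularity at a *finite* radius,
# in contrast to the big rip, where the scale factor itself diverges.
```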
Notably, the vicinity of this singularity lends itself to a cosmological evolution described by the relation:
\[a\simeq a_{\max}\left\{1-\left[\frac{1+2\alpha}{2(1+\alpha)}\right]^{\frac{2(1+\alpha)}{1+2\alpha}}A^{\frac{1}{1+2\alpha}}|3(1+\alpha)|^{\frac{1}{1+2\alpha}}(t_{\max}-t)^{\frac{2(1+\alpha)}{1+2\alpha}}\right\}.\]
Remarkably, this singularity emerges not only at a finite scale factor but also at a finite future cosmic time. Conversely, the history of a FLRW Universe permeated with this fluid traces back to an asymptotically de Sitter state in the past. Expressing this temporal journey succinctly:
\[a\simeq a_{0}\exp\left(A^{\frac{1}{2(1+\alpha)}}t\right),\]
where \(a_{0}\) signifies a minute scale factor. The Universe thus begins its evolution in the infinite past of cosmic time, with \(a\to 0\) and \(p+\rho\to 0^{-}\). The homogeneous and isotropic Universe then undergoes a phase of super-accelerated expansion, denoted by:
\[\dot{H}=-\frac{3}{2}(p+\rho)>0,\]
up until it culminates at the singularity \(a=a_{\rm max}\). It is imperative to recall that the PGCG does not satisfy the null energy condition [98], as embodied in \(p+\rho<0\). A great amount of work has been done on Big Freeze singularities since the initial work in [59]; it has been shown that one can encounter such singularities in a lot of exotic cosmological settings, and there have also been works probing how one can avoid them [115, 116, 117, 118, 119, 120, 121, 122].

#### 3.2.3 Generalized sudden singularities (Type IV)

These singularities were first discussed in [31] and have since been found in a diverse variety of cosmological settings. Here we will briefly discuss the primary cases in which type IV singularities were shown; in fact, this example will illustrate all the prominent singularities we have discussed so far. We start with an equation of state of the form
\[p=-\rho-f(\rho) \tag{17}\]
This sort of equation of state, with \(f(\rho)=A\rho^{\alpha}\) for an arbitrary constant \(\alpha\), was first proposed in [60] and investigated in detail in [94], and there can be diverse physical motivations behind such an equation of state. This form of EOS can be equivalent to bulk viscosity [123], and it can also come about due to modified gravity effects [30]. We now consider the following ansatz for the scale factor
\[a(t)=a_{0}\left(\frac{t}{t_{s}-t}\right)^{n}\,. \tag{18}\]
where \(n\) is a positive constant and \(0<t<t_{s}\). The scale factor diverges within a finite time (\(t\to t_{s}\)), resembling the phenomenon of the Big Rip singularity. Consequently, \(t_{s}\) represents the universe's lifetime. When \(t\ll t_{s}\), the evolution of \(a(t)\) follows \(t^{n}\), leading to an effective EOS given by \(w=-1+2/(3n)>-1\). Conversely, when \(t\sim t_{s}\), the effective EOS assumes \(w=-1-2/(3n)<-1\). The Hubble rate in this case can be expressed as
\[H=n\left(\frac{1}{t}+\frac{1}{t_{s}-t}\right)\,. \tag{19}\]
Utilizing Equation (19), one can deduce the relation
\[\rho=\frac{3n^{2}}{\kappa^{2}}\left(\frac{1}{t}+\frac{1}{t_{s}-t}\right)^{2}\,. \tag{20}\]
As a result, both \(H\) and \(\rho\) exhibit minima at \(t=t_{s}/2\), characterized by the values
\[H_{\min}=\frac{4n}{t_{s}}\,,\quad\rho_{\min}=\frac{48n^{2}}{\kappa^{2}t_{s}^{2}}. \tag{21}\]
Next, we examine a specific form for \(f(\rho)\) given by
\[f(\rho)=\frac{AB\rho^{\alpha+\beta}}{A\rho^{\alpha}+B\rho^{\beta}} \tag{22}\]
where \(A\), \(B\), \(\alpha\), and \(\beta\) are constants. As we shall see, this dark energy scenario harbors a complex structure with respect to singularities. In scenarios where \(\alpha\) surpasses \(\beta\), we observe that
\[f(\rho)\rightarrow\begin{cases}A\rho^{\alpha}&\text{as }\rho\to 0\\ B\rho^{\beta}&\text{as }\rho\rightarrow\infty\end{cases}. \tag{23}\]
For non-unit values of \(\alpha\) and \(\beta\), we obtain
\[a=a_{0}\exp\left\{-\frac{1}{3}\left[\frac{\rho^{-\alpha+1}}{(\alpha-1)A}+\frac{\rho^{-\beta+1}}{(\beta-1)B}\right]\right\}\,. \tag{24}\]
The realm of possibilities in this cosmology is extensive. If \(1>\alpha>\beta\) and \(A,B>0\) (\(A,B<0\)), the scale factor has a minimum (maximum) at \(\rho=0\), extending to infinity (vanishing) as \(\rho\rightarrow\infty\). When \(\alpha>1>\beta\) and \(A<0\) while \(B>0\) (\(A>0\) and \(B<0\)), the scale factor features a minimum (maximum) at a non-trivial (non-vanishing) value of \(\rho\), reaching infinity (zero) as \(\rho\) approaches zero or positive infinity. For \(\alpha>1>\beta\) and \(A,B>0\) (\(A,B<0\)), the scale factor becomes infinite (vanishes) as \(\rho\rightarrow\infty\) (\(\rho\to 0\)), and it vanishes (increases) as \(\rho\to 0\) (\(\rho\rightarrow\infty\)). When \(\alpha>\beta>1\), the scale factor approaches \(a_{0}\) as \(\rho\rightarrow\infty\). Additionally, if \(A>0\) (\(A<0\)), the scale factor tends to \(0\) (\(\infty\)) as \(\rho\to 0\). With \(A,B>0\) (\(A,B<0\)), the scale factor demonstrates monotonic growth (decrease) with respect to \(\rho\). In the case of \(A>0\) and \(B<0\) (\(A<0\) and \(B>0\)), the scale factor attains a nontrivial maximum (minimum) at a finite value of \(\rho\). To summarize, the possibilities for singularity formation in this cosmological model are remarkably diverse. It is worth noting that some of the identified singularities may violate one or more energy conditions. These energy conditions encompass:
\[\rho\geq 0\,,\quad\rho\pm p\geq 0\qquad\text{"dominant energy condition"} \tag{25}\]
\[\rho+p\geq 0\qquad\text{"null energy condition"} \tag{26}\]
\[\rho\geq 0\,,\quad\rho+p\geq 0\qquad\text{"weak energy condition"} \tag{27}\]
\[\rho+3p\geq 0\,,\quad\rho+p\geq 0\qquad\text{"strong energy condition"} \tag{28}\]
With these considerations, we can succinctly summarize the findings for the cosmological model defined by the \(f(\rho)\) function as follows:

* For \(A/B<0\), a type II singularity is inevitable, irrespective of the values of \(\beta\).
* Regardless of the sign of \(A/B\), the nature of singularities varies according to the values of \(\beta\). 1. \(0<\beta<1/2\): A type IV future singularity is evident. The parameter \(w\) approaches infinity \((-\infty)\) for \(B<0\) (\(B>0\)). 2. \(\beta>1\): A type III future singularity emerges, accompanied by a breach of the dominant energy condition. The parameter \(w\) approaches infinity \((-\infty)\) for \(B<0\) (\(B>0\)). 3. \(3/4<\beta<1\): A type I future singularity emerges if \(A>0\). The dominant energy condition is violated for \(A>0\), and \(w\) approaches \(-1+0\)\((-1-0)\) for \(A<0\) (\(A>0\)). 4. \(1/2\leq\beta\leq 3/4\): No finite future singularity is present. 5. \(\beta=0\): A finite future singularity is absent, yet as \(\rho\to 0\), \(w\) approaches infinity \((-\infty)\) for \(B<0\) (\(B>0\)). 6. \(\beta<0\): A type II future singularity emerges.
The dominant energy condition is broken, though the strong energy condition remains intact for \(B<0\). The parameter \(w\) approaches infinity \((-\infty)\) for \(B<0\) (\(B>0\)).

This example (as discussed in [31]) shows how one can find not only type IV singularities but also the other singularities we have discussed so far. Another interesting point is that qualitative differences arise when one considers singularities in the Jordan and Einstein frames, something which was discovered and discussed in detail in [124, 125]. It is also worth noting that when one considers viscous fluids, as in [126], different types of singularities may arise. The occurrence of singularities in an oscillating universe has also been discussed, first in [127]. Singularities have also been considered in detail in bounce cosmologies [128, 129, 130]. The realisation of all four known types of future singularities (Type I-Type IV) has also been found in very exotic modified gravity theories, for example in an f(R) version of Horava-Lifshitz gravity [131], as well as in teleparallel constructions like the one considered in [132].

A crucial point to note here in passing, with regard to all the singularities we have discussed so far, is that the tidal forces manifest for these singularities as an (infinite) impulse which reverses (or stops) the increase of the separation of geodesics, while the geodesics themselves can evolve further; the universe can continue its evolution through such a singularity. Moreover, it is intriguing to consider the potential consequences of these singularities for the constructs of quantum gravity. Although there exists a considerable body of literature exploring the emergence of cosmological singularities in quantum gravitational scenarios like braneworlds, for instance, a more profound inquiry pertains to the influence of such singularities on fundamental entities like strings. If we contemplate an elongated structure such as a classical string, modeled using the Polyakov formalism [133]:
\[S=-\frac{T}{2}\int d\tau d\sigma\eta^{\mu\nu}g_{ab}\partial_{\mu}X^{a}\partial_{\nu}X^{b} \tag{29}\]
(with \(T\) denoting the string tension, \(\tau,\sigma\) representing the string's worldsheet coordinates, \(\eta^{\mu\nu}\) corresponding to the worldsheet metric, \(\mu,\nu=0,1\), and \(g_{ab}\) standing for the spacetime metric), the scenario involves the string interacting with a non-big-bang singularity [134]. The crux of the matter is that a measurable property of the string, its invariant size \(S(\tau)=2\pi a(\eta(\tau))R(\tau)\) (assuming a circular string of radius \(R\)), reveals certain characteristics. Specifically, at a Big-Rip singularity, the string undergoes infinite stretching (\(S\rightarrow\infty\)), resulting in its destruction. In contrast, at a type II singularity, the scale factor remains finite at the \(\eta\)-time, consequently maintaining a finite invariant string size. Analogously, the same holds true for Type III and Type IV singularities. This implies that strings remain intact when encountering such singularities. 2
Footnote 2: This also underscores the "weakness" of these singularities in the sense that they do not display geodesic incompleteness. As a result, particles [135], and even more extensive entities like extended objects [134], can traverse them without obstruction.
Hence, they lack a "dangerous" quality, which explains their potential emergence in the relatively proximate future (for instance, around 10 million years for Type II, or the idea that a pressure singularity may have happened in the recent past) [136, 137, 138, 39].

#### 3.2.4 w-singularities (Type V)

As the name suggests, w-singularities occur when the equation-of-state parameter \(w\) blows up in some cosmological models. These singularities were first introduced in [139] and then expanded upon in later works [140, 61]. The authors in [139] arrived at w-singularities by first choosing the scale factor ansatz
\[a(t)=A+B\left(\frac{t}{t_{s}}\right)^{\frac{2}{3\gamma}}+C\left(D-\frac{t}{t_{s}}\right)^{n}. \tag{30}\]
It contains seven arbitrary constants: \(A\), \(B\), \(C\), \(D\), \(\gamma\), \(n\), and \(t_{s}\). The last of these constants, \(t_{s}\), is the time at which we expect the singularity. Having the scale factor (30), they imposed the following conditions
\[a(0)=0,\;a(t_{s})=const.\equiv a_{s},\;\dot{a}(t_{s})=0,\;\ddot{a}(t_{s})=0. \tag{31}\]
The first of the conditions (31) is chosen in order for the evolution to begin with a standard big-bang singularity at \(t=0\) (note that in order to have a big rip, one would have to impose \(a(0)=\infty\), which is equivalent to taking \(\gamma<0\)). One can see that after imposing (31), the energy density and the pressure vanish at \(t=t_{s}\). The model does not admit a singularity of the higher derivatives of the Hubble parameter, since \(\dot{H}(t_{s})\neq 0\) appears in \(\ddot{H}\), and so it is not of the type IV kind according to the classification of Ref. [31]. On the other hand, even though both \(\ddot{a}(t_{s})\) and \(\dot{a}(t_{s})\) vanish in the limit \(t\to t_{s}\), the deceleration parameter blows up to infinity, i.e.,
\[q(t_{s})=-\frac{\ddot{a}(t_{s})a_{s}}{\dot{a}^{2}(t_{s})}\rightarrow\infty \tag{32}\]
and consequently, as one finds the EOS parameter to be related to the deceleration parameter by
\[w(t)=\frac{c^{2}}{3}\left[2q(t)-1\right] \tag{33}\]
one finds that \(w(t_{s})\rightarrow\infty\). We then face a very strange singularity: it has vanishing pressure and energy density and a constant scale factor, but the deceleration parameter and, in particular, the time-dependent barotropic index \(w(t)\) are singular.

Another ansatz for the scale factor which can give w-singularities was proposed by Dabrowski and Marosek in [141], and it has an exponential form:
\[a(t)=a_{s}\left(\frac{t}{t_{s}}\right)^{m}\exp\left(1-\frac{t}{t_{s}}\right)^{n} \tag{34}\]
where \(a_{s}\) has the units of length and is a constant, while \(m\) and \(n\) are also constants 3. The scale factor is zero (\(a=0\)) at \(t=0\), thus signifying the big bang singularity. One can write the first and second derivatives of the scale factor as
Footnote 3: While this ansatz on the surface looks quite different from the power-series one which we will consider later on, it can be a sub-case of a series ansatz within certain limits as well
\[\dot{a}(t)=a(t)\left[\frac{m}{t}-\frac{n}{t_{s}}\left(1-\frac{t}{t_{s}}\right)^{n-1}\right] \tag{35}\]
\[\ddot{a}(t)=\dot{a}(t)\left[\frac{m}{t}-\frac{n}{t_{s}}\left(1-\frac{t}{t_{s}}\right)^{n-1}\right]+a(t)\left[-\frac{m}{t^{2}}+\frac{n(n-1)}{t_{s}^{2}}\left(1-\frac{t}{t_{s}}\right)^{n-2}\right] \tag{36}\]
where the overdots denote differentiation with respect to time.
From this, one can see that for \(1<n<2\), \(\dot{a}(0)\rightarrow\infty\) and \(\dot{a}(t_{s})=\frac{ma_{s}}{t_{s}}=\) const., while \(a(t_{s})=a_{s}\), \(\ddot{a}(0)\rightarrow\infty\) and \(\ddot{a}(t_{s})\rightarrow-\infty\), and we have sudden future singularities. Furthermore, it was shown in [141] that for the simplified case of the scale factor (34) with \(m=0\), one can get w-singularities for \(n>0\) and \(n\neq 1\). Finally, yet another ansatz for obtaining w-singularities was provided in [61]; it is of a power-series form given by
\[a(t)=c_{0}+c_{1}(t_{s}-t)^{n_{1}}+c_{2}(t_{s}-t)^{n_{2}}+\cdots \tag{37}\]
where \(t_{s}\) is the time of the singularity. In order for the pressure to be finite, \(n_{1}>1\).

There have of course been a significant number of works considering how these singularities can occur in non-standard cosmologies and how they can be avoided. In passing, however, a discussion is in order of the cosmological significance of w-singularities. While Type I-Type IV singularities concern more direct cosmological parameters like the scale factor and the Hubble parameter, alongside the energy and pressure densities, Type V singularities concern a somewhat indirect parameter in the form of \(w\). This is not to say, however, that these singularities cannot occur in cosmological and, in particular, dark energy models. For example, [142] discussed how w-singularities can occur in interacting dark energy models (while the background cosmology in this case was still general relativistic and the continuity equation had its usual form), while in [143] it was shown how varying Chaplygin gas models can also have w-singularities. The occurrence of w-singularities in various other contexts has also been discussed in [144, 145, 146, 147, 103]. Hence, while Type V singularities deal primarily with a more indirect cosmological parameter, this by no means diminishes their cosmological importance, and they do appear in a variety of cosmological expansion scenarios.

## 4 Singularity removal/avoidance methods

With the huge influx of interest in finding singularities in cosmological models, a natural interest also grew in investigating ways in which such singularities could either be completely removed or at least mildly alleviated/avoided in some cases. This has resulted in an impressive amount of literature (for example, see [53] for a detailed account of avoiding singularities in both the Jordan and Einstein frames). What we would like to do here is discuss some of the prominent works in this regard, focusing on the use of quantum effects and modified gravity effects to deal with singularities.

### Conformal anomaly effects near singularities

The effect of the quantum backreaction of conformal matter around Type I, Type II and Type III singularities was taken into consideration in the works of Nojiri and Odintsov [29, 31, 148]. In these cases, the curvature of the universe becomes large around the singularity time \(t=t_{s}\), although the scale factor \(a\) is finite for type II and III singularities. Since quantum corrections usually contain powers of the curvature or higher derivative terms, such correction terms are important near the singularity. At this point, it is useful to add a bit of context about what conformal anomalies are and how they are usually perceived in high-energy physics.
It is fair to assume that there are many matter fields during inflation in the early universe, because the Standard Model of particle physics has almost 100 fields, and this number may double if the Standard Model is contained in a supersymmetric theory. Although the behaviour of these (massless) matter fields (scalars, Dirac spinors, and vectors in curved space-time) is conformally invariant, some divergences are observed because of the presence of the one-loop vacuum contributions. In the renormalized action, counterterms are required to cancel the poles of the divergent part, and these break the conformal invariance of the matter action. From the classical point of view, the trace of the energy-momentum tensor in a conformally invariant theory is null. But renormalization procedures can lead to an anomalous trace of the energy-momentum tensor, the so-called quantum anomaly or conformal anomaly (we recommend [149, 150, 151, 152] to the reader for more details on conformal anomaly effects). The conformal anomaly we have described can be considered to have the following form [31]
\[T_{A}=b\left(F+\frac{2}{3}\Box R\right)+b^{\prime}G+b^{\prime\prime}\Box R \tag{38}\]
where \(T_{A}\) is the trace of the stress-energy tensor, \(F\) is the square of the 4d Weyl tensor and \(G\) is the Gauss-Bonnet curvature invariant, which are given by
\[F=(1/3)R^{2}-2R_{ij}R^{ij}+R_{ijkl}R^{ijkl} \tag{39}\]
\[G=R^{2}-4R_{ij}R^{ij}+R_{ijkl}R^{ijkl} \tag{40}\]
\(b\) and \(b^{\prime}\), on the other hand, are given by
\[b=\frac{N+6N_{1/2}+12N_{1}+611N_{2}-8N_{HD}}{120(4\pi)^{2}} \tag{41}\]
\[b^{\prime}=-\frac{N+11N_{1/2}+62N_{1}+1411N_{2}-28N_{HD}}{360(4\pi)^{2}} \tag{42}\]
with \(N\) scalar, \(N_{1/2}\) spinor, \(N_{1}\) vector fields, \(N_{2}\) (\(=0\) or \(1\)) gravitons, and \(N_{HD}\) higher derivative conformal scalars. For usual matter, \(b>0\) and \(b^{\prime}<0\), except for higher derivative conformal scalars, while \(b^{\prime\prime}\) can be arbitrary. Quantum effects due to the conformal anomaly act as a fluid with energy density \(\rho_{A}\) and pressure \(p_{A}\). The total energy density is \(\rho_{tot}=\rho+\rho_{A}\).
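As a small numerical aside, one can verify the stated signs of \(b\) and \(b^{\prime}\) directly from Eqs. (41)-(42). The field content below is an illustrative, Standard-Model-like assumption of our own (4 real scalars, 45 Weyl fermions, 12 gauge vectors), not a count taken from the references:

```python
from math import pi

def anomaly_coefficients(N, N_half, N_1, N_2=0, N_HD=0):
    """b and b' from Eqs. (41)-(42) of the text."""
    b  =  (N +  6 * N_half + 12 * N_1 +  611 * N_2 -  8 * N_HD) / (120 * (4 * pi)**2)
    bp = -(N + 11 * N_half + 62 * N_1 + 1411 * N_2 - 28 * N_HD) / (360 * (4 * pi)**2)
    return b, bp

# Rough Standard-Model-like content (assumed for illustration only)
b, bp = anomaly_coefficients(N=4, N_half=45, N_1=12)
print(f"b = {b:.4f} (> 0),  b' = {bp:.4f} (< 0)")   # consistent with the text
```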
The conformal anomaly, also known as the trace anomaly, is given by the trace of the fluid stress-energy tensor
\[T_{A}=-\rho_{A}+3p_{A} \tag{43}\]
The conformal-anomaly-corrected pressure and energy densities still obey the continuity equation, and using that, we can write [31]
\[T_{A}=-4\rho_{A}-\frac{\dot{\rho_{A}}}{H} \tag{45}\]
One can then express \(\rho_{A}\) as an integral in terms of \(T_{A}\) as
\[\rho_{A}=-\frac{1}{a^{4}}\int a^{4}HT_{A}dt \tag{46}\]
Furthermore, \(T_{A}\) can be expressed in terms of the Hubble parameter as
\[T_{A}=-12b\dot{H}^{2}+24b^{\prime}(-\dot{H}^{2}+H^{2}\dot{H}+H^{4})-(4b+6b^{\prime\prime})(\dddot{H}+7H\ddot{H}+4\dot{H}^{2}+12H^{2}\dot{H}) \tag{47}\]
Using this, one has an expression for \(\rho_{A}\) taking into account conformal anomaly effects near the singularity
\[\rho_{A}=-\frac{1}{a^{4}}\int dt\ a^{4}HT_{A}\\ =-\frac{1}{a^{4}}\int dta^{4}H\Bigl{[}-12b\dot{H}^{2}+24b^{\prime}(-\dot{H}^{2}+H^{2}\dot{H}+H^{4})-(4b+6b^{\prime\prime})\left(\dddot{H}+7H\ddot{H}+4\dot{H}^{2}+12H^{2}\dot{H}\right)\Bigr{]} \tag{48}\]
The quantum-corrected Friedmann equation 4 is
Footnote 4: Note that to maintain consistency with the notation used in [31], we are considering the Friedmann equation to be of this form.
\[H^{2}=\frac{\kappa^{2}}{3}(\rho+\rho_{A})\,. \tag{49}\]
Since the curvature is expected to be large near the time of the singularity, one is warranted in thinking that \((3/\kappa^{2})H^{2}\ll|\rho_{A}|\). Then \(\rho\sim-\rho_{A}\) from (49), which gives
\[\dot{\rho}+4H\rho=H\Bigl{[}-12b\dot{H}^{2}+24b^{\prime}(-\dot{H}^{2}+H^{2}\dot{H}+H^{4})-(4b+6b^{\prime\prime})\left(\dddot{H}+7H\ddot{H}+4\dot{H}^{2}+12H^{2}\dot{H}\right)\Bigr{]} \tag{50}\]
Finally, the continuity equation \(\dot{\rho}+3H\left(\rho+p\right)=0\) for \(p=-\rho-f(\rho)\) gives
\[H=\frac{\dot{\rho}}{3f(\rho)}\,. \tag{51}\]
Now we can appreciate the implications of these effects on both strong and weak singularities, where we first consider the big rip. The first attempt to address the issue of the big rip with conformal anomalies was made in [153, 154]. For this, we consider the model given by
\[f(\rho)\sim B\rho^{\beta} \tag{52}\]
with \(1/2<\beta<1\) when \(\rho\) is large, in which case there exists a Big Rip singularity, as we discussed previously. We note that the classical evolution is characterized by \(\rho\propto\left(t_{s}-t\right)^{\frac{2}{1-2\beta}}\) and \(H\propto\left(t_{s}-t\right)^{\frac{1}{1-2\beta}}\), both of which exhibit divergence for \(\beta>1/2\). When quantum corrections are taken into account, it is natural to assume that near the singularity \(\rho\) behaves as
\[\rho=\rho_{0}\left(t_{s}-t\right)^{\tilde{\gamma}} \tag{53}\]
As \(\rho\) may diverge at \(t=t_{s}\), we consider negative values of \(\tilde{\gamma}\). Since \(\tilde{\gamma}\left(1-\beta\right)<0\) in this case, we might expect that (50) would give the following approximate relation around \(t=t_{s}\):
\[\rho\sim 6b^{\prime}H^{4} \tag{54}\]
The term on the r.h.s. grows as \(H^{4}\propto(t_{s}-t)^{-4+4\tilde{\gamma}(1-\beta)}\), but this does not give a consistent result, since \(\rho\) would become negative for \(b^{\prime}<0\). This tells us that our assumptions must be wrong and \(\rho\) does not become infinite.
If \(\rho\) has an extremum, (51) tells us that \(H\) vanishes there since \(\dot{\rho}=0\). Furthermore, the authors in [31] showed numerically that in this scenario the Hubble rate approaches zero in finite time, thus coming to the conclusion that conformal anomaly effects can alleviate the Big Rip in this case. Let us again consider the model in (52), but now for the range \(\beta>1\), in which case a type III singularity develops with \(\rho\propto(t_{s}-t)^{\frac{2}{1-2\beta}}\). Again, we consider that near the singularity \(\rho\) behaves as (53). Using (51), one finds \[H=-\frac{\tilde{\gamma}\rho_{0}^{1-\beta}}{3B}\left(t_{s}-t\right)^{-1+\tilde{\gamma}(1-\beta)} \tag{55}\] Since we are considering the case \(\beta>1\) and \(\tilde{\gamma}<0\), we have \(\tilde{\gamma}\left(1-\beta\right)>0\). Picking up the most singular term on the r.h.s. of (50), it follows that \[\dot{\rho}\sim-6\left(\frac{2}{3}b+b^{\prime\prime}\right)H\dddot{H} \tag{56}\] Then, substituting (53) and (55) into (56), we obtain \[\tilde{\gamma}=\frac{4}{1-2\beta} \tag{57}\] This means that \(\rho\) and \(H\) evolve as \[\rho\propto(t_{s}-t)^{\frac{4}{1-2\beta}}\,,\ \ \ \ H\propto(t_{s}-t)^{\frac{3-2\beta}{1-2\beta}}\,, \tag{58}\] around \(t=t_{s}\). Numerically solving the background equations shows that in the presence of quantum corrections one has \(H\propto(t_{s}-t)^{1/3}\) around \(t=t_{s}\), which means that \(H\) approaches zero. Meanwhile, in the absence of quantum corrections we have \(H\propto(t_{s}-t)^{-1/3}\), thereby showing the divergence of \(H\) at \(t=t_{s}\). From (55) we obtain \[a\sim a_{0}\exp\left[\frac{\rho_{0}^{1-\beta}}{3B\left(1-\beta\right)}\left(t_{s}-t\right)^{\tilde{\gamma}(1-\beta)}\right] \tag{59}\] where \(a_{0}\) is a constant. Comparing the classical case [\(\tilde{\gamma}=2/(1-2\beta)\)] with the quantum corrected one [\(\tilde{\gamma}=4/(1-2\beta)\)], we find that the power of \((t_{s}-t)\) is larger in the presence of quantum corrections. Then the scale factor approaches a constant \(a_{0}\) more rapidly if we account for the quantum effect, implying that the spacetime tends to be smooth, although the divergence of \(\rho\) is stronger. Thus quantum effects moderate the classical singularity. But conformal anomaly effects may not always be of huge help in alleviating singularities; take, for example, the case of an asymptotically safe cosmology, which was considered in [155]. The capacity to build gravitational RG flow approximations outside of perturbation theory is necessary for conceptually testing asymptotic safety. A very strong framework for doing these calculations is the functional renormalization group equation (FRGE) for the gravitational effective average action \(\Gamma_{k}\) \[\partial_{k}\Gamma_{k}[g,\overline{g}]=\frac{1}{2}Tr\left[(\Gamma_{k}^{(2)}+\mathcal{R}_{k})^{-1}\partial_{k}\mathcal{R}_{k}\right] \tag{60}\] The construction of the FRGE uses the background field formalism, where the metric \(g_{\mu\nu}\) is split into a fixed background \(\overline{g}_{\mu\nu}\) and fluctuations \(h_{\mu\nu}\) (see [156] for a more detailed account of asymptotically safe cosmologies).
The authors in [155] considered the simplest approximation of the gravitational RG flow, which could be obtained by projecting the FRGE onto the Einstein-Hilbert action, approximating \(\Gamma_{k}\) by [156] \[\Gamma_{k}=\frac{1}{16\pi G_{k}}\int d^{4}x\sqrt{-g}\left[-R+2\Lambda_{k}\right]+\mbox{gauge-fixing and ghost terms} \tag{61}\] where R, \(\Lambda_{k}\) and \(G_{k}\) are the Ricci scalar, the running cosmological constant and the running Newton's gravitational constant. The scale-dependence of these couplings can be written in terms of their dimensionless counterparts as \[\Lambda_{k}=k^{2}\lambda_{*} \tag{62}\] \[G_{k}=g_{*}/k^{2} \tag{63}\] where \(g_{*}=0.707\) and \(\lambda_{*}=0.193\). Considering a background FLRW metric and a perfect fluid for the stress-energy tensor \(T_{\mu}^{\nu}=\mbox{diag}[-\rho,p,p,p]\), one can get the Friedmann equation and the continuity equation in this scenario to be \[H^{2}=\frac{8\pi G_{k}}{3}\rho+\frac{\Lambda_{k}}{3} \tag{64}\] \[\dot{\rho}+3H(\rho+p)=-\frac{\dot{\Lambda}_{k}+8\pi\rho\dot{G}_{k}}{8\pi G_{k}} \tag{65}\] where the continuity equation comes about from the Bianchi identity satisfied by Einstein's equations, \(D^{\mu}[\Lambda(t)g_{\mu\nu}-8\pi G(t)T_{\mu\nu}]=0\), which has the usual meaning that the divergence \(D^{\mu}\) of the Einstein tensor vanishes. The extra terms on the right hand side of (65) can be interpreted as an illustration of the energy transfer between gravitational degrees of freedom and matter. Using this new continuity equation, we can write the conformal anomaly term in this case as \[T_{A}=-4\rho_{A}-\frac{\dot{\rho}_{A}}{H}\left(\frac{1}{2\rho_{A}}-1\right) \tag{66}\] We note that in conventional cosmology one can represent the conformal anomaly correction \(\rho_{A}\) in the form of an integral, but it is clear that this cannot be the case for the asymptotically safe cosmology: a corresponding integral for \(\rho_{A}\) cannot be obtained from equation (66) in the same way. Hence it is not feasible to address a possible removal of Type I-Type III singularities using conformal anomaly effects in this asymptotically safe cosmology. ### Varying constants approach Cosmologies with varying physical constants, like the speed of light or the gravitational constant [157], have been shown to regularize cosmological singularities in certain scenarios [141, 158, 159, 160]. Here we shall discuss briefly the fundamentals of such theories and how they can be helpful in alleviating both strong and weak cosmological singularities. Examining the generalized Einstein-Friedmann equations within the context of theories involving a varying speed of light \(c(t)\) (VSL) and a varying gravitational constant \(G(t)\) (VG), as presented by Barrow in [157], one can deduce the following expressions for the mass density \(\varrho(t)\) and pressure \(p(t)\): \[\varrho(t)=\frac{3}{8\pi G(t)}\left(\frac{\dot{a}^{2}}{a^{2}}+\frac{kc^{2}(t)}{a^{2}}\right) \tag{67}\] \[p(t)=-\frac{c^{2}(t)}{8\pi G(t)}\left(2\frac{\ddot{a}}{a}+\frac{\dot{a}^{2}}{a^{2}}+\frac{kc^{2}(t)}{a^{2}}\right) \tag{68}\] These equations highlight the influence of varying \(c\) and \(G\) on the mass density and pressure. For instance, if \(\dot{a}\) approaches infinity while \(G(t)\) increases more rapidly than \(\dot{a}\), the singularity in \(p(t)\) can be eliminated.
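As a quick illustration of how a varying \(G\) enters these expressions, the following sympy sketch (our own check, assuming a flat model \(k=0\) with \(a(t)=t^{m}\) and \(G(t)\propto 1/t^{2}\), a case taken up again below) evaluates Eq. (67) and shows that the mass density stays finite as \(t\to 0\).

```python
# A sympy check (our illustration) of Eq. (67) with k = 0, a(t) = t**m and G(t) = G0/t**2.
import sympy as sp

t, m, G0 = sp.symbols('t m G0', positive=True)
a = t**m
G = G0 / t**2
rho = 3 / (8*sp.pi*G) * (sp.diff(a, t)/a)**2   # Eq. (67) with k = 0
print(sp.simplify(rho))                         # 3*m**2/(8*pi*G0): t-independent, hence finite at t -> 0
```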
In the case of flat models, a direct relationship between pressure \(p\) and mass/energy density \(\varrho/\varepsilon\) can be established, albeit with a time-dependent equation of state parameter, expressed as: \[p(t)=w(t)\varepsilon(t)=w(t)\varrho(t)c^{2}(t) \tag{69}\] Here, the parameter \(w(t)\) is defined as \(w(t)=\frac{1}{3}\left[2q(t)-1\right]\), with \(q(t)=-\ddot{a}a/\dot{a}^{2}\) being the dimensionless deceleration parameter. Notably, the variation of the speed of light \(c(t)\) brings about a key distinction between the mass density \(\varrho\) and the energy density \(\varepsilon=\varrho c^{2}\), impacting the Einstein mass-energy relationship \(E=mc^{2}\), which is transformed here into the mass density-energy density formula \(\varepsilon=\varrho c^{2}\) after division by volume. The variability of physical constants can be explored through the scale factor, allowing for the examination of scenarios like the Big Bang, Big Rip, Sudden Future, Finite Scale Factor, and \(w\)-singularities, as expressed by the scale factor equation: \[a(t)=a_{s}\left(\frac{t}{t_{s}}\right)^{m}\exp\left[\left(1-\frac{t}{t_{s}}\right)^{n}\right] \tag{70}\] The constants \(t_{s},a_{s},m,n\) are determined accordingly [141]. This approach illustrates how the varying constant concept aids in regularizing singularities. By inspecting Equations (67) and (68), it becomes evident that a time-dependent gravitational constant variation of the form \(G(t)\propto\frac{1}{t^{2}}\) eliminates a Type 0 Big Bang singularity in Friedmann cosmology, addressing both the \(p\) and \(\varrho\) singularities. In Dirac's scenario [161], where \(G(t)\propto 1/t\), only the \(\varrho\) singularity is removed. Moreover, the time dependence \(G=1/t^{2}\) is less constrained by geophysical limitations on Earth's temperature [162]. Another proposal is to rescale the scale factor (70) by a "regularizing" factor \(a_{rg}=(1+1/t^{m})\) (\(m\geq 0\)) so that it does not tend to zero as \(t\to 0\), resulting in: \[a_{sm}=\left(1+\frac{1}{t^{m}}\right)\left(\frac{t}{t_{s}}\right)^{m}=\left(\frac{t}{t_{s}}\right)^{m}+\frac{1}{t_{s}^{m}} \tag{71}\] Consequently, a varying constant approach (in this case, related to the gravitational constant) can effectively eliminate a strong singularity, such as the Big Bang singularity. A scenario where the varying speed of light contributes to singularity regularization begins by considering a form for the ansatz of \(c(t)\). One common assumption regarding the speed of light's variation is that it follows the evolution of the scale factor [157]: \[c(t)=c_{0}a^{s}(t) \tag{72}\] With \(c_{0}\) and \(s\) as constants, the field equations (67) and (68) can be expressed as: \[\varrho(t)=\frac{3}{8\pi G(t)}\left(\frac{\dot{a}^{2}}{a^{2}}+kc_{0}^{2}a^{2(s-1)}\right) \tag{73}\] \[p(t)=-\frac{c_{0}^{2}a^{2s}}{8\pi G(t)}\left(2\frac{\ddot{a}}{a}+\frac{\dot{a}^{2}}{a^{2}}+kc_{0}^{2}a^{2(s-1)}\right) \tag{74}\] In the presence of the time dependence of \(c(t)\) as given by (72), and for the choice \(a(t)=t^{m}\), it is possible to eliminate a pressure singularity (Type II) if certain conditions are met: \(s>1/m\) for \(k=0\), \(m>0\); and \(s>1/2\) or \(m<0\), \(s<1/2\) for \(k\neq 0\). ### Modified gravity effects/Quantum gravitational cosmologies In recent times, there has been wide interest in dark energy models based on exotic non-general-relativistic regimes, particularly because such theories display properties which are not evident in conventional cosmological models.
For example, a number of works have considered the possibility of viable scalar field based dark energy regimes in quantum gravity corrected cosmologies like the RS-II Braneworld and Loop Quantum Cosmology [15, 16, 17, 18, 19]. There has been substantial work on new dark energy models based on thermodynamic modifications like modified area-entropy relations [163, 164, 165, 166, 167, 168], or even more exotic possibilities like generalized uncertainty principles [169, 170, 171], or non-canonical approaches like DBI etc. as well [172, 173, 174, 175, 176, 177, 178]. This vast dark energy literature has prompted the study of cosmological singularities in a wide range of cosmological backgrounds as well, as there have been multiple works which have discussed Type I-IV singularities in various cosmologies [30, 31, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180]. In this vast array of works, one has seen quite a few examples in which cosmologies affected by the effects of these modified gravity theories or quantum gravitational paradigms (like the Braneworld or LQC, for example) have alleviated certain singularities. Here we would like to consider an example of how such effects can help in alleviating type V singularities, as we have not discussed ways to remove or moderate these till now. We would like to consider the treatment in [190] for our example here. We would like to again consider a model with an inhomogeneous EOS of the form \(p=-\rho-f(\rho)\). It was shown in [141] that for the simplified case of the scale factor (70) with \(m=0\), one can get w-singularities for \(n>0\) and \(n\neq 1\). The scale factor for the case \(m=0\) takes the form \[a(t)=a_{s}\exp\left[\left(1-\frac{t}{t_{s}}\right)^{n}\right] \tag{75}\] and we will be using this form of the scale factor for this example. The modified gravity theory we are interested in is an f(R) gravity model with the action [191] \[S=\frac{m_{p}^{2}}{2}\int d^{4}x\sqrt{-g}\left(R-\frac{\alpha^{2}}{R}\right)+\int d^{4}x\sqrt{-g}\mathcal{L}_{m} \tag{76}\] where \(\alpha\) is a constant which has the units of mass, \(\mathcal{L}_{m}\) is the Lagrangian density for matter and \(m_{p}\) is the reduced Planck mass. The field equation for this action is \[\left(1+\frac{\alpha^{2}}{R^{2}}\right)R_{\mu\nu}-\frac{1}{2}\left(1-\frac{\alpha^{2}}{R^{2}}\right)Rg_{\mu\nu}+\alpha^{2}\left[g_{\mu\nu}\nabla_{a}\nabla^{a}-\nabla_{(\mu}\nabla_{\nu)}\right]R^{-2}=\frac{T_{\mu\nu}^{M}}{m_{p}^{2}} \tag{77}\] The Friedmann equation in this case can take the form \[\frac{6H^{2}-\frac{\alpha}{2}}{11/8-\frac{8H^{2}}{4\alpha}}=\frac{\rho}{3} \tag{78}\] where \(\rho\) is the total energy density. This \(f(R)\) gravity regime was used to explain late time cosmic acceleration as an alternative to dark energy in [191]. The use of f(R) gravity regimes for avoiding cosmological singularities by adding an \(R^{2}\) term was considered in detail in [192], with the same scenario later being extended in [32] and [181]. Moreover, based on properties of the \(R^{2}\) term, non-singular modified gravity models were proposed in [193]. The action (76) prompts one towards the notion that very tiny corrections to the usual Einstein-Hilbert action in the form of \(R^{n}\) with \(n<0\) can produce cosmic acceleration. As corrections of the form \(R^{n}\) with \(n>0\) can lead to inflation in the early universe [194], the authors in [191] proposed a purely gravitational paradigm through (76) to explain both the early and late time accelerations of the universe.
Now we consider \(f(\rho)=\rho^{\alpha}\) and first examine the status of w-singularities for such a model in the standard cosmology given by (written here in natural units just for simplicity) \[H^{2}=\frac{\rho}{3}\,.\] We can write the w-parameter for this cosmology as \[w=-3^{\alpha-1}\left(\frac{n^{2}\left(1-\frac{t}{t_{s}}\right)^{2(n-1)}}{t_{s}^{2}}\right)^{\alpha-1}-1 \tag{79}\] From this we can make the following observations: * For n = 1, no w-singularities occur, as is the case in the usual scenario with the conventional equation of state. * For \(\alpha<0\), w-singularities occur for all positive values of n besides unity, but w-singularities do not occur for any negative values of n. * For \(\alpha>0\) we see a very interesting behaviour. In this case, completely in contrast to what happens in the usual case, no w-singularities occur for positive values of n (\(n>0\)); they occur only when n takes negative values (\(n<0\)). Hence, here we see the first sign of departure in the occurrence conditions of w-singularities when one considers inhomogeneous equations of state. So we see that incorporating an inhomogeneous EOS can be of use in moderating w-singularities, but this still does not remove them per se, as it only changes the conditions under which they occur relative to what happens in the conventional cosmology. Now the w-parameter for the case of (78) is given by \[w=-12^{\alpha-1}\left(\frac{\eta\left(12n^{2}\left(1-\frac{t}{t_{s}}\right)^{2n}-\eta(t_{s}-t)^{2}\right)}{11\eta(t_{s}-t)^{2}-18n^{2}\left(1-\frac{t}{t_{s}}\right)^{2n}}\right)^{\alpha-1}-1 \tag{80}\] where \(\eta\) denotes the mass-scale constant of the action (76), relabeled here to avoid a clash with the EOS exponent \(\alpha\). For the w-parameter as expressed above, we have the following observations: * For n = 1, contrary to the other cases we have considered till now, one can have a w-singularity, but that is possible only in the extreme case \(\alpha\rightarrow\infty\), which is not very realistic to expect; still, in principle singularities can appear in this case. * The most interesting thing that comes out when one considers this scenario is that w-singularities do not occur for any finite value of n and \(\alpha\)! For both positive and negative values of \(\alpha\) and n, the w-parameter remains regular and does not diverge. And so we see that just by incorporating the effects of a modified gravity theory, in this case a particular form of \(f(R)\) gravity, one can alleviate singularities too. Furthermore, \(f(R)\) gravity theories have been of great use in alleviating various other singularities too, but we have already discussed those singularities quite extensively, so it seems appropriate to use this example to illustrate how type V singularities can be moderated as well. ## 5 Dynamical systems approach and the Goriely-Hyde method While it seems quite natural to study singularities and their avoidance in various cosmological settings as we have done so far, it is often very difficult to classify and study the cosmological singularities which may occur in extremely non-conventional cosmologies motivated by quantum gravitational/phenomenological considerations (for example, see the classification of singularities in asymptotically safe cosmology [155]), and often it may not even be possible to do so in an orthodox fashion. Hence it becomes essential to look for non-conventional ways to find cosmological singularities in exotic cosmologies, and in this regard a particular dynamical systems method can be of huge help.
From a dynamical standpoint, one of the most intriguing aspects of studying various dynamical systems lies in understanding their singularity structure, which becomes particularly relevant when these systems describe physically significant phenomena. While numerous approaches have been proposed to explore the singularity structure of autonomous dynamical systems, one particularly interesting method is the Goriely-Hyde procedure [195]. As cosmology presents a multitude of captivating dynamical systems [196], the investigation of singularity structure in such systems has gained considerable attention, with the Goriely-Hyde method proving particularly useful for cosmological explorations [197, 198, 199, 200, 201, 202]. This method has previously been applied to study finite- and non-finite-time singularities in certain classes of quintessence models as well [33, 182, 203]. The Goriely-Hyde method provides an elegant approach to determining the presence of singularities in dynamical systems, and the procedure can be outlined as follows: * We begin by considering a dynamical system described by \(n\) differential equations of the form: \[\dot{x}_{i}=f_{i}(x),\] (81) where \(i=1,2,...,n\), and the overdot represents differentiation with respect to time \(t\), which in the case of quintessence models can be better represented by the number of e-foldings \(N\). We identify the parts of the equation \(f_{i}\) that become significant as the system approaches the singularity. These significant parts are referred to as "dominant parts" [195]. Each dominant part constitutes a mathematically consistent truncation of the system, denoted as \(\hat{f}_{i}\). The system can then be written as: \[\dot{x}_{i}=\hat{f}_{i}(x).\] (82) * Without loss of generality, the variables \(x_{i}\) near the singularity can be expressed as: \[x_{i}=a_{i}\tau^{p_{i}},\] (83) where \(\tau=t-t_{c}\), and \(t_{c}\) is an integration constant. Substituting equation (83) into equation (82) and equating the exponents, we can determine the values of \(p_{i}\) for different \(i\), which form the vector \(\mathbf{p}=(p_{1},p_{2},...,p_{n})\). Similarly, we calculate the values of \(a_{i}\) to form the vector \(\vec{a}=(a_{1},a_{2},...,a_{n})\). It is important to note that if \(\vec{a}\) contains only real entries, it corresponds to finite-time singularities. Conversely, if \(\vec{a}\) contains at least one complex entry, it may lead to non-finite-time singularities. Each set \((a_{i},p_{i})\) is known as a dominant balance of the system. * Next, we calculate the Kovalevskaya matrix given by: \[R=\begin{pmatrix}\frac{\partial f_{1}}{\partial x_{1}}&\frac{\partial f_{1}}{\partial x_{2}}&.&.&\frac{\partial f_{1}}{\partial x_{n}}\\ \frac{\partial f_{2}}{\partial x_{1}}&\frac{\partial f_{2}}{\partial x_{2}}&.&.&\frac{\partial f_{2}}{\partial x_{n}}\\.&.&.&.&.\\.&.&.&.&.\\ \frac{\partial f_{n}}{\partial x_{1}}&\frac{\partial f_{n}}{\partial x_{2}}&.&.&\frac{\partial f_{n}}{\partial x_{n}}\end{pmatrix}-\begin{pmatrix}p_{1}&0&.&.&0\\ 0&p_{2}&.&.&0\\.&.&.&.&.\\.&.&.&.&.\\ 0&0&.&.&p_{n}\end{pmatrix}.\] (84) After obtaining the Kovalevskaya matrix, we evaluate it for different dominant balances and determine the eigenvalues. If the eigenvalues are of the form \((-1,r_{2},r_{3},...,r_{n})\), with \(r_{2},r_{3},...>0\), then the singularity is considered general and will occur regardless of the initial conditions of the system.
Conversely, if any of the eigenvalues \(r_{2},r_{3},...\) are negative, the singularity is considered local and will only occur for certain sets of initial conditions. After applying the method, one can then classify singularities using well-motivated ansätze for the scale factor or the Hubble parameter. The most general form of the Hubble parameter for investigating singularities within the aforementioned classified types is expressed as [203]: \[H(t)=f_{1}(t)+f_{2}(t)(t-t_{s})^{\alpha} \tag{87}\] Here, \(f_{1}(t)\) and \(f_{2}(t)\) are assumed to be nonzero regular functions at the time of the singularity, and similar conditions apply to their derivatives up to the second order. Additionally, \(\alpha\) is a real number. It is not mandatory for the Hubble parameter (87) to be a solution to the field equations; however, we will consider this case and explore the implications of this assumption on the singularity structure based on our dynamical analysis. First, we observe that none of the variables \(x\), \(y\), or \(z\) as defined later in (93) can ever become singular for any cosmic time value. The singularities that can occur considering the Hubble parameter as defined in (87) are as follows: * For \(\alpha<-1\), a big rip singularity occurs. * For \(-1<\alpha<0\), a Type III singularity occurs. * For \(0<\alpha<1\), a Type II singularity occurs. * For \(\alpha>1\), a Type IV singularity occurs.
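This exponent-based classification is simple enough to encode directly. The helper below is our own illustration (the function name is hypothetical), assuming the Hubble ansatz (87) and leaving the boundary values of \(\alpha\) unclassified.

```python
# A small helper (ours, for illustration) encoding the singularity classification
# for the Hubble ansatz of Eq. (87), H(t) = f1(t) + f2(t) * (t - t_s)**alpha.
def classify_singularity(alpha: float) -> str:
    """Map the exponent alpha to the singularity type listed above."""
    if alpha < -1:
        return "Type I (Big Rip)"
    if -1 < alpha < 0:
        return "Type III"
    if 0 < alpha < 1:
        return "Type II (sudden)"
    if alpha > 1:
        return "Type IV"
    return "boundary case, not covered by the list above"

print(classify_singularity(-2.0))   # Type I (Big Rip)
print(classify_singularity(0.5))    # Type II (sudden)
```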
Another ansatz useful for classifying singularities was introduced in [39], whereby the scale factor is written as \[a(t)=g(t)(t-t_{s})^{\alpha}+f(t) \tag{88}\] where g(t) and f(t), together with all their higher-order derivatives with respect to cosmic time, are smooth functions of the cosmic time. For this ansatz, according to the values of the exponent \(\alpha\), one can have the following singularities: * For \(\alpha<0\), a type I singularity occurs * For \(0<\alpha<1\), a type III singularity develops * For \(1<\alpha<2\), a type II singularity occurs * For \(\alpha>2\), a type IV singularity occurs Again, it is not mandatory that the scale factor in (88) will necessarily be a solution to the field equations, but we would like to consider this and (87) in order to get a well-motivated feel for the types of cosmological singularities we can deal with in the various models we have discussed so far. As an example of this method, let us consider singularities in an RS-II braneworld cosmology where dark energy can be described by a scalar field paradigm, where we shall follow the treatment of [33]. The action, inclusive of both the scalar and the background fluid terms, can be written as \[S=S_{RS}+S_{B}+S_{\phi}=\int d^{5}x\sqrt{-g^{(5)}}\left(\Lambda^{(5)}+2R^{(5)}\right)+\\ \int d^{4}x\sqrt{-g}\left(\sigma-\frac{1}{2}\mu(\phi)(\nabla\phi)^{2}-V(\phi)+\mathcal{L}_{B}\right) \tag{89}\] where \(R^{(5)}\), \(g^{(5)}_{\mu\nu}\) and \(\Lambda^{(5)}\) are the bulk Ricci scalar, metric and cosmological constant respectively, with \(\sigma\) being the brane tension on the 3-brane, \(g_{\mu\nu}\) being the 3-brane metric and \(\mu(\phi)\) being a scalar coupling function. Note that here we are working in Planck units with \((m_{p}^{(5)})^{2}=1\), where \(m_{p}^{(5)}\) is the 5-dimensional Planck mass. Assuming that the brane metric has the usual FLRW form, we get the Friedmann equation to be [204] \[H^{2}=\rho\left(1+\frac{\rho}{2\sigma}\right) \tag{90}\] where \(\rho=\rho_{\phi}+\rho_{B}\) is the total cosmological energy density, taking into account contributions from both the scalar field and the background fluid, and the bulk cosmological constant has been set to zero for simplicity. One can similarly find \[2\dot{H}=-\left(1+\frac{\rho}{\sigma}\right)\left(\mu(\phi)\dot{\phi}^{2}+\rho_{B}\right) \tag{91}\] And the equation of motion of the scalar is given by \[\mu(\phi)\ddot{\phi}+\frac{1}{2}\frac{d\mu}{d\phi}\dot{\phi}^{2}+3H\mu(\phi)\dot{\phi}+\frac{dV}{d\phi}=0 \tag{92}\] Finally, using the following variables introduced in [205] \[x=\frac{\dot{\phi}}{\sqrt{6}H}\qquad y=\frac{\sqrt{V}}{\sqrt{3}H}\qquad z=\frac{\rho}{3H^{2}} \tag{93}\] and choosing the background fluid to be of the form of pressureless dark matter, so that \(w_{B}=0\), we get the dynamical system for this model to be \[x^{\prime}=-\sqrt{\frac{3}{2\mu}}\lambda y^{2}-3x+\frac{3x}{2}\left(z+x^{2}-y^{2}\right)\left(\frac{2}{z}-1\right) \tag{94}\] \[y^{\prime}=\sqrt{\frac{3}{2\mu}}\lambda xy+\frac{3y}{2}\left(z+x^{2}-y^{2}\right)\left(\frac{2}{z}-1\right) \tag{95}\] \[z^{\prime}=3(1-z)(z+x^{2}-y^{2}) \tag{96}\] where the primes denote differentiation with respect to the e-folding number N and \(\lambda=\frac{V^{\prime}}{V}\). We can finally start with the analysis, as we now have a proper autonomous dynamical system, with the first truncation that we consider being \[\hat{f}=\begin{pmatrix}-k\lambda y^{2}\\ -3y^{3}z^{-1}\\ 3x^{2}\end{pmatrix} \tag{97}\] where \(k=\sqrt{\frac{3}{2\mu}}\).
Using the ansatz of the Goriely-Hyde method, we get \(\mathbf{p}=(-1,-1,-1)\), and using these, we get \[\begin{array}{c}a_{1}=\left(-\frac{1}{k\lambda},\frac{i}{k\lambda},-\frac{3}{k^{2}\lambda^{2}}\right)\\ \\ a_{2}=\left(-\frac{1}{k\lambda},-\frac{i}{k\lambda},-\frac{3}{k^{2}\lambda^{2}}\right)\end{array} \tag{98}\] As both \(a_{1}\) and \(a_{2}\) have complex entries, only non-finite-time singularities will be possible with regard to this truncation. The Kovalevskaya matrix then takes the form \[R=\begin{pmatrix}1&-2k\lambda y&0\\ 0&1-\frac{9y^{2}}{z}&\frac{3y^{3}}{z^{2}}\\ 6x&0&1\end{pmatrix} \tag{99}\] We then finally find the eigenvalues of the matrix, which are given by \[r=(-1,-1,2) \tag{100}\] Hence the singularities in this case will only be local singularities, which will only form for a limited set of initial conditions. In [33], it was worked out that there are two more possible truncations, with the balances and corresponding eigenvalues being \[a_{1}=\left(\frac{1}{\sqrt{3}},\frac{i}{\sqrt{3}},\frac{1}{3}\right) \tag{101}\] \[a_{2}=\left(\frac{1}{\sqrt{3}},-\frac{i}{\sqrt{3}},\frac{1}{3}\right)\] \[a_{3}=\left(-\frac{1}{\sqrt{3}},\frac{i}{\sqrt{3}},\frac{1}{3}\right)\] \[a_{4}=\left(-\frac{1}{\sqrt{3}},-\frac{i}{\sqrt{3}},\frac{1}{3}\right)\] with eigenvalues \[r=(-1,\sqrt{\frac{3}{2}},-\sqrt{\frac{3}{2}}) \tag{102}\] and another truncation with the balance \[a_{1}=\left(\frac{1}{\sqrt{3}},\sqrt{\frac{2}{3}},\frac{2}{3}\right) \tag{103}\] \[a_{2}=\left(\frac{1}{\sqrt{3}},-\sqrt{\frac{2}{3}},\frac{2}{3}\right)\] \[a_{3}=\left(-\frac{1}{\sqrt{3}},\sqrt{\frac{2}{3}},\frac{2}{3}\right)\] \[a_{4}=\left(-\frac{1}{\sqrt{3}},-\sqrt{\frac{2}{3}},\frac{2}{3}\right)\] with \[r=(-1,-1,1) \tag{104}\] We see from (101) and (102) that the truncations to which they belong still tell the story that only non-finite-time local singularities are possible in the system, but we note from (103) that the other truncation allows for finite-time singularities, albeit still local ones, as (104) has \(r_{2}=-1\). To proceed further and classify the singularities physically, we use the ansatz for the Hubble parameter (87), and we need to express \(\dot{\phi}\) and \(V(\phi)\) in terms of the Hubble parameter. For simplicity, we will consider the coupling constant \(\mu=1\) and \(\dot{\rho}_{B}=0\). Making these considerations, we can write \[-2\dot{H}=\dot{\phi}^{2}\left(1+\frac{\rho}{\sigma}\right) \tag{105}\] One can then write \[\dot{\phi}^{2}=-2\left[\left(\sigma+V+\sigma\rho_{B}\right)+\sqrt{\left(\sigma+V+\sigma\rho_{B}\right)^{2}-2\dot{H}}\right] \tag{106}\] Furthermore, one can now write \(V(\phi)\) in terms of the dark energy equation of state5 as \[V(\phi)=\frac{\dot{\phi}^{2}}{2}\frac{(1-w)}{(1+w)} \tag{107}\] Footnote 5: Note that here we are only considering the dark energy equation of state with no background contributions; hence we will only consider scalar field contributions. Then we can write the potential as \[V=\frac{2b(1+k)+\sqrt{(2b(1+k))^{2}-2\dot{H}(k^{2}-1)}}{2(k^{2}-1)} \tag{108}\] where \(k=\frac{2w}{1-w}\) and \(b=\sigma(1+\rho_{B})\) (note that both k and b will always be positive for a positive brane tension). Notice that V is now completely in terms of the Hubble parameter (for constant values of \(\sigma\), w and \(\rho_{B}\)), and so one can use this form of V to find \(\dot{\phi}\) in terms of the Hubble parameter as well.
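Before moving on, the dominant balance (98) and the eigenvalues (100) can be checked with a short computer-algebra sketch. This is our own verification (variable names are ours), assuming the truncation (97) and the Goriely-Hyde prescription described above.

```python
# A sympy sketch (ours) reproducing the balances of Eq. (98) and the
# Kovalevskaya eigenvalues of Eq. (100) for the truncation of Eq. (97).
import sympy as sp

k, lam = sp.symbols('k lambda', positive=True)
x, y, z = sp.symbols('x y z')
a1, a2, a3 = sp.symbols('a1 a2 a3')

f = sp.Matrix([-k*lam*y**2, -3*y**3/z, 3*x**2])   # truncated vector field, Eq. (97)
p = [-1, -1, -1]                                   # exponents found from the ansatz (83)

# dominant balances: a_i * p_i = f_i(a), keeping only nontrivial solutions
eqs = [sp.Eq(p[i]*[a1, a2, a3][i], f[i].subs({x: a1, y: a2, z: a3})) for i in range(3)]
balances = [s for s in sp.solve(eqs, [a1, a2, a3], dict=True) if s[a2] != 0]
print(balances)   # complex entries, matching Eq. (98)

# Kovalevskaya matrix R = Df(a) - diag(p), Eqs. (84)/(99), on the first balance
R = (f.jacobian([x, y, z]) - sp.diag(*p)).subs(
    {x: balances[0][a1], y: balances[0][a2], z: balances[0][a3]})
print(R.eigenvals())  # {-1: 2, 2: 1}, i.e. r = (-1, -1, 2) as in Eq. (100)
```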
It is necessary to express these quantities in terms of \(H(t)\), as now we can find out which types of singularities are possible in this scenario, in view of the fact that x, y and z as described in (93) have to remain regular. By studying the expressions for these variables, one can make out that Type I, Type III and Type IV singularities are allowed in this scenario, while Type II is not. This also makes us realize that even if the cosmology is heavily motivated by quantum gravitational considerations (like the Braneworld in this scenario), it can still have quite a few cosmological singularities. ## 6 Future outlooks and conclusions In this brief review paper, we have discussed (almost) all the prominent developments in the field of cosmological singularities which have taken place in the last 25 years or so. We first provided a detailed outlook on what space-time singularities are and discussed the various nuances regarding them, like the various strength criteria, etc. After that, we discussed in detail the prominent strong and weak cosmological singularities in accordance with the classification scheme provided by Odintsov and Nojiri. We detailed the conditions under which these singularities can occur in various scenarios and the cosmological settings in which they were initially found. We then saw how one can moderate or even remove these singularities using various techniques having quantum or modified gravity origins, and we also discussed the Goriely-Hyde method and its usefulness in singularity works. As a whole, one general point that we can safely make is that such singularities provide a revealing arena at the interface of cosmology and quantum gravitational theories. The scales at which such events could take place lie within the horizons for testing quantum gravity ideas and, with the constant increase in the precision of various observational setups, one would not be wrong to think that investigating such singularities in detail can possibly shed light on problems in both cosmology and quantum gravity. ## Acknowledgments The author would like to thank Sergei Odintsov for the invitation to write this review article and for the numerous discussions with him on various aspects related to singularities. I would also like to thank Maxim Khlopov, Pankaj Joshi, Robert Scherrer, Alexander Timoshkin, Vasilis Oikonomou, Jackson Levi Said and Rafael Nunes for various discussions on singularities. I would also like to thank Parth Bhambhaniya for discussions on some particular aspects of singularities. Finally, I would also like to thank Sunny Vagnozzi for discussions on cosmology and dark energy in general, which have been very helpful for this work. ## Appendix A The use of power series expansions for the scale factor and related quantities in cosmology has gathered significant pace in recent times (for a detailed overview, see [112]), especially in the context of studies of cosmological singularities. Hence it is fitting to discuss such expansions in a bit of detail here. Generalized Frobenius series find frequent application in the expansion of solutions of differential equations around their singular points. With this characteristic in mind, we will assume that in the vicinity of the point of interest, the scale factor exhibits an expansion based on a generalized power series. This concept extends the familiar notions of Taylor series, meromorphic Laurent series, Frobenius series, and Liapunov expansions, as referenced in [206].
Furthermore, this generalized power series is more encompassing than the one employed in [207]. In the current context, if the scale factor \(a(t)\) can be expressed using such a generalized power series, then the Friedmann equations dictate that both \(\rho(t)\) and \(p(t)\) can likewise be represented using such power series. Employing formal series reversion, it follows that the equation of state \(\rho(p)\), and consequently the function \(\rho(a)\), exhibit these generalized power series expansions. Conversely, when \(\rho(a)\) is described by such a generalized power series, the first Friedmann equation indicates that \(\dot{a}(t)\) possesses a power series of similar nature, which, upon integration, implies that \(a(t)\) itself is characterized by such a power series. Similarly, if the equation of state \(p(\rho)\) can be expressed as a generalized power series, then integrating the conservation equation leads to the expression: \[a(\rho)=a_{0}\exp\left\{\frac{1}{3}\int_{\rho_{0}}^{\rho}\frac{d\bar{\rho}}{\bar{\rho}+p(\bar{\rho})}\right\}, \tag{109}\] This equation also adopts a generalized power series representation. The potential value of expanding the conventional notion of a Frobenius series becomes evident through the analysis presented in [207]. It is important to clarify the types of entities that fall outside this category of generalized power series. First, essential singularities, effectively infinite-order poles that emerge, for instance, in functions like \(\exp(-1/x)\) near \(x=0\), lie beyond this classification. Secondly, certain variations on the concept of Puiseux series, specifically those containing terms like \((\ln x)^{n}\), \((\ln\ln x)^{n}\), \((\ln\ln\ln x)^{n}\), and so forth, also lie beyond this classification. However, there is currently no awareness of any scenarios where these exceptional cases become pertinent in a physical context. It has been shown to be reasonable to assume that in the vicinity of some cosmological singularity, happening at some time \(t_{0}\), the scale factor has a (possibly one-sided) generalized power series expansion of the form \[a(t)=c_{0}|t-t_{0}|^{\eta_{0}}+c_{1}|t-t_{0}|^{\eta_{1}}+c_{2}|t-t_{0}|^{\eta_{2}}+c_{3}|t-t_{0}|^{\eta_{3}}+\ldots \tag{110}\] where the indicial exponents \(\eta_{i}\) are generically real (but are often non-integer) and, without loss of generality, are ordered in such a way that they satisfy \[\eta_{0}<\eta_{1}<\eta_{2}<\eta_{3}\ldots \tag{111}\] Finally, we can also without loss of generality set \[c_{0}>0. \tag{112}\] There are no _a priori_ constraints on the signs of the other \(c_{i}\), though by definition \(c_{i}\neq 0\). From a physical point of view, this definition is really generic and can be applied to any type of cosmological milestone. This generalized power series expansion is sufficient to encompass all the models we are aware of in the literature, and, as a matter of fact, the indicial exponents \(\eta_{i}\) will be used to classify the type of cosmological singularity we are dealing with. For many of the calculations in this appendix, the first term in the expansion is dominant, but even for the most subtle of the explicit calculations below it will be sufficient to keep only the first three terms of the expansion: \[a(t)=c_{0}|t-t_{0}|^{\eta_{0}}+c_{1}|t-t_{0}|^{\eta_{1}}+c_{2}|t-t_{0}|^{\eta_{2}}\ldots;\qquad\eta_{0}<\eta_{1}<\eta_{2};\qquad c_{0}>0. \tag{113}\]
The lowest few of the indicial exponents are sufficient to determine the relationship between these cosmological milestones, the curvature singularities and even the energy conditions. Note also that this expansion fails if the cosmological milestone is pushed into the infinite past or infinite future. Using such an expansion, one can encounter quite a few cosmological singularities, and we shall list the conditions under which some prominent singularities6 can be found as follows: Footnote 6: It is worth noting that with such a power series ansatz for the scale factor, one can also find conditions under which other exotic cosmological scenarios like a bounce, an emergent universe etc. can be recovered, but we shall not list those here, as we are only interested in cosmological singularities. * Big bang (Type 0): If a big bang occurs at time \(t_{0}\)7, we define the behavior with indicial exponents (\(0<\eta_{0}<\eta_{1}\dots\)) when the scale factor has a generalized power series near the singularity, given by: \[a(t)=c_{0}(t-t_{0})^{\eta_{0}}+c_{1}(t-t_{0})^{\eta_{1}}+\dots \tag{114}\] The series is carefully constructed such that \(a(t_{0})=0\). Footnote 7: Similar series can be used for the Big Crunch too, in which case the series takes the form \(a(t)=c_{0}(t_{0}-t)^{\eta_{0}}+c_{1}(t_{0}-t)^{\eta_{1}}+\dots\) * Big rip (Type I): If a big rip occurs at time \(t_{0}\), the indicial exponents of the rip (\(\eta_{0}<\eta_{1}\dots\)) are defined when the scale factor has a generalized power series near the rip: \[a(t)=c_{0}|t_{0}-t|^{\eta_{0}}+c_{1}|t_{0}-t|^{\eta_{1}}+\dots, \tag{115}\] where \(\eta_{0}<0\) and \(c_{0}>0\). The series is constructed to satisfy \(a(t_{0})=\infty\). The only difference from the big bang case is the _sign_ of the exponent \(\eta_{0}\). * Sudden singularity (Type II): If a sudden singularity occurs at time \(t_{0}\) (past or future), the exponents are defined as \(\eta_{0}=0\) and \(\eta_{1}>0\), resulting in the scale factor's generalized power series near the singularity: \[a(t)=c_{0}+c_{1}|t-t_{0}|^{\eta_{1}}+\dots \tag{116}\] Here, \(c_{0}>0\) and \(\eta_{1}\) is a non-integer. The condition \(a(t_{0})=c_{0}\) ensures finiteness, and a sufficient number of differentiations yields: \[a^{(n)}(t\to t_{0})\sim c_{1}\ \eta_{1}(\eta_{1}\!-\!1)(\eta_{1}\!-\!2)\dots(\eta_{1}\!-\!n\!+\!1)\ |t\!-\!t_{0}|^{\eta_{1}-n}\to\infty. \tag{117}\] The toy model by Barrow [56] can be expressed as: \[a(t)=c_{0}\left[(t_{0}-t)^{\eta}-1\right]+\tilde{c}_{0}(t-t_{b})^{\tilde{\eta}} \tag{118}\] where \(t_{b}\) is the time of the big bang. This model fits into the general classification when expanded around the sudden singularity time, \(t_{0}\), and into the classification of big bang singularities when expanded around the big bang time, \(t_{b}\). ## Appendix B Over the years, several alternatives to the Big Rip have been found, and the first one that we shall discuss in this regard is the Little Rip (LR). It is characterized by a growing energy density \(\rho\) over time, but this increase follows an asymptotic pattern, necessitating an infinite amount of time to approach the singularity. This situation corresponds to an equation of state parameter \(w\) that falls below -1; however, it approaches -1 as time progresses towards infinity. The energy density's growth is gradual, preventing the emergence of the Big Rip singularity. The LR models depict transitional behaviors between an asymptotic de Sitter expansion and a BR evolution.
In their work [108], the authors presented an elegant method for comprehending the implications of the little rip, distinguishing it from the Big Rip, which we will explore in the following. During the universe's expansion, the relative acceleration between two points separated by a comoving distance \(l\) can be expressed as \(l\ddot{a}/a\), where \(a\) signifies the scale factor. If an observer is situated at a comoving distance \(l\) from a mass \(m\), they will detect an inertial force acting on the mass as follows: \[F_{\rm iner}=ml\ddot{a}/a=ml\left(\dot{H}+H^{2}\right) \tag{119}\] Let's assume that the two particles are bound by a constant force \(F_{0}\). When the positive value of \(F_{\rm iner}\) surpasses \(F_{0}\), the two particles become unbound. This scenario corresponds to the phenomenon known as the "rip," which emerges due to the accelerating expansion. Equation (119) demonstrates that a rip always occurs when either \(H\) or \(\dot{H}\) diverges (assuming \(\dot{H}>0\)). The divergence of \(H\) leads to a "big rip", while if \(H\) remains finite but \(\dot{H}\) diverges with \(\dot{H}>0\), it results in a Type II or "sudden future" singularity [31, 56, 197], which also causes a rip. Nonetheless, as pointed out in [208], it is feasible for \(H\) and, consequently, \(F_{\rm iner}\), to grow boundlessly without inducing a future singularity at a finite time. This phenomenon is referred to as the little rip. Both the big rip and the little rip share the characteristic of \(F_{\rm iner}\rightarrow\infty\); the distinction lies in the fact that for the big rip, \(F_{\rm iner}\rightarrow\infty\) occurs at a finite time, whereas for the little rip, it occurs as \(t\rightarrow\infty\). Two possible ansätze/models which have been shown to lead to little rip behaviour [108] are given by the following forms of the Hubble parameter \[H(t)=H_{0}\exp(\lambda t) \tag{120}\] where \(H_{0}\) and \(\lambda\) are positive constants, while another viable model which is similar to this is given by \[H(t)=H_{0}\exp\left(Ce^{\lambda t}\right) \tag{121}\] where \(H_{0}\), \(\lambda\) and \(C\) are positive constants as well. Another interesting possibility for the evolution of the universe is the so-called Pseudo-Rip [109], where the Hubble parameter, although increasing, tends to a "cosmological constant" in the remote future. That means \(H(t)\to H_{\infty}<\infty,t\rightarrow+\infty\), where \(H_{\infty}\) is a constant. A possible model for this is given by the Hubble ansatz \[H(t)=H_{0}-H_{1}\exp(-\lambda t) \tag{122}\] where \(H_{0}\), \(H_{1}\) and \(\lambda\) are positive constants with \(H_{0}>H_{1}\). Yet another possible alternative for the rip is a model in which the dark energy density \(\rho\) monotonically increases (\(w<-1\)) in the first stage, and thereafter monotonically decreases (\(w>-1\)), known as the "Quasi rip" [110]. At first, it thus tends to disintegrate bound structures in the universe, but then in the second stage the disintegration is reversed, implying that already disintegrated structures have the possibility of being recombined again. As an example model for this, we consider the energy density of dark energy to be a function of the scale factor and consider its ansatz to be \[\rho(a)=\rho_{0}a^{\alpha-\beta\ln a} \tag{123}\] where a is the scale factor, \(\alpha\) and \(\beta\) are constants, with \(\rho_{0}\) being the energy density at some past time \(t_{0}\).
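To make the distinction concrete, the short numerical sketch below (ours; all parameter values are hypothetical) evaluates the inertial force of Eq. (119) for the little-rip ansatz (120): \(F_{\rm iner}\) grows without bound, yet remains finite at every finite time.

```python
# A numerical sketch (ours) of the inertial force of Eq. (119) for the
# little-rip ansatz of Eq. (120), H(t) = H0 * exp(lam * t).
import numpy as np

m, l, H0, lam = 1.0, 1.0, 1.0, 0.1   # hypothetical mass, separation, and Hubble parameters

def inertial_force(t):
    """F_iner = m * l * (Hdot + H**2); here Hdot = lam * H for this ansatz."""
    H = H0 * np.exp(lam * t)
    return m * l * (lam * H + H**2)

for t in (0.0, 20.0, 40.0, 60.0):
    print(f"t = {t:5.1f}, F_iner = {inertial_force(t):.3e}")  # unbounded only as t -> infinity
```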
Yet another possibility is the Little Sibling of the Big Rip [209], wherein the Hubble rate and the scale factor blow up but the derivatives of the Hubble rate do not. This takes place at infinite cosmic time, with the scalar curvature blowing up too. An example model for this also involves taking the energy density of dark energy as a function of the scale factor, given by [209] \[\rho(a)=\Lambda+A\ln\frac{a}{a_{0}} \tag{124}\] Table 1 summarizes all the various scenarios discussed till now. \begin{table} \begin{tabular}{l l l} \hline Scenario & Description & Example Model \\ \hline Little Rip (LR) & Gradual energy density growth (\(\rho\)) over infinite time, asymptotically approaching a singularity. & \(H(t)=H_{0}\exp(\lambda t)\) \\ \hline Pseudo-Rip & Expansion accelerates with \(H\) approaching a constant (\(H_{\infty}\)), i.e. a finite value. & \(H(t)=H_{0}-H_{1}\exp(-\lambda t)\) \\ \hline Quasi Rip & Dark energy density \(\rho\) first increases (\(w<-1\)) and then decreases (\(w>-1\)), implying disintegration and recombination of structures. & \(\rho(a)=\rho_{0}a^{\alpha-\beta\ln a}\) \\ \hline Little Sibling of the Big Rip & Hubble rate and scale factor diverge, but derivatives of the Hubble rate do not, with scalar curvature divergence. & \(\rho(a)=\Lambda+A\ln\frac{a}{a_{0}}\) \\ \hline \end{tabular} \end{table} Table 1: Comparison of Rip Scenarios and Example Models Furthermore, there have been many works which have explored all these alternative rip scenarios in non-standard cosmologies, similar to how other singularities have also been probed in such models, with possibilities ranging from various modified gravity theories to holographic cosmologies and viscous models too [210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230]. There have also been works which have discussed ways of avoiding or moderating these singularities [120, 231, 232, 233], but we will not be going over the details of that here.
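As a final sanity check on the Table 1 entries, the snippet below (our illustration; the constants are hypothetical) evaluates the quasi-rip ansatz of Eq. (123) and exhibits the advertised rise-then-fall of the dark energy density.

```python
# A quick check (ours) that the quasi-rip ansatz of Eq. (123) first grows and then
# decays: rho = rho0 * a**(alpha - beta*ln a) = rho0 * exp(alpha*ln a - beta*(ln a)**2).
import numpy as np

rho0, alpha, beta = 1.0, 2.0, 0.5              # hypothetical constants, beta > 0
lna = np.linspace(-1.0, 5.0, 7)                # grid in ln(a)
rho = rho0 * np.exp(alpha*lna - beta*lna**2)
print(np.c_[lna, rho])                         # rho peaks at ln(a) = alpha/(2*beta) = 2, then falls
```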
2309.09492
Target-aware Bi-Transformer for Few-shot Segmentation
Traditional semantic segmentation tasks require a large number of labels and struggle to identify unlearned categories. Few-shot semantic segmentation (FSS) aims to use limited labeled support images to identify the segmentation of new classes of objects, which is very practical in the real world. Previous research was primarily based on prototypes or correlations. Since colors, textures, and styles are similar within the same image, we argue that the query image can be regarded as its own support image. In this paper, we propose the Target-aware Bi-Transformer Network (TBTNet) to treat support images and the query image equivalently. A vigorous Target-aware Transformer Layer (TTL) is also designed to distill correlations and force the model to focus on foreground information. It treats the hypercorrelation as a feature, resulting in a significant reduction in the number of feature channels. Benefiting from this characteristic, our model is the lightest to date with only 0.4M learnable parameters. Furthermore, TBTNet converges in only 10% to 25% of the training epochs compared to traditional methods. The excellent performance on the standard FSS benchmarks PASCAL-5i and COCO-20i proves the efficiency of our method. Extensive ablation studies were also carried out to evaluate the effectiveness of the Bi-Transformer architecture and TTL.
Xianglin Wang, Xiaoliu Luo, Taiping Zhang
2023-09-18T05:28:51Z
http://arxiv.org/abs/2309.09492v1
# Target-aware Bi-Transformer for Few-shot Segmentation ###### Abstract Traditional semantic segmentation tasks require a large number of labels and struggle to identify unlearned categories. Few-shot semantic segmentation (FSS) aims to use limited labeled support images to identify the segmentation of new classes of objects, which is very practical in the real world. Previous research was primarily based on prototypes or correlations. Since colors, textures, and styles are similar within the same image, we argue that the query image can be regarded as its own support image. In this paper, we propose the Target-aware Bi-Transformer Network (TBTNet) to treat support images and the query image equivalently. A vigorous Target-aware Transformer Layer (TTL) is also designed to distill correlations and force the model to focus on foreground information. It treats the hypercorrelation as a feature, resulting in a significant reduction in the number of feature channels. Benefiting from this characteristic, our model is the lightest to date with only 0.4M learnable parameters. Furthermore, TBTNet converges in only 10% to 25% of the training epochs compared to traditional methods. The excellent performance on the standard FSS benchmarks PASCAL-5\({}^{i}\) and COCO-20\({}^{i}\) proves the efficiency of our method. Extensive ablation studies were also carried out to evaluate the effectiveness of the Bi-Transformer architecture and TTL. Keywords: Semantic segmentation, Few-shot learning, Transformer. ## 1 Introduction Semantic segmentation aims to assign each pixel of an image to a certain class, which is one of the cornerstones of computer vision tasks. With the development of deep convolution networks [12, 15], it has made considerable progress. However, the training process requires an enormous amount of labeled data, which is a labor-intensive task. So semi- and weakly-supervised segmentation [11, 24, 27] were invented to reduce the dependence on expensive labels. But all the above methods can only recognize the classes seen during training. To address this limitation, the few-shot segmentation (FSS) task was proposed. Many FSS approaches have been proposed in recent years [10, 22, 26]. The typical method follows the meta-learning paradigm [2], which easily overfits due to insufficient training data. The FSS model is supposed to predict the segmentation of query images based on the condition of support images and corresponding annotations. Nowadays the prevalent approaches are based on prototypes [9, 23, 26] and pixel-wise correlations [10, 19, 28]. Prototype-based methods aim to obtain the prototype of the target from the high-level features of support images and then utilize the prototype to segment query images. Pixel-wise-correlation-based methods take the similarity of each pixel between the query and support images as features to train the model. We can regard similarity as a class-agnostic feature, so the model rarely overfits. Objects in different pictures, even if they belong to the same category, may have vastly different features, especially for those parts that are not easily distinguishable. Self-similarity may alleviate this phenomenon, but all the above methods have ignored it. In this paper, we propose the Target-aware Bi-Transformer Network (TBTNet), which can integrate the two types of similarity. As shown in Fig. 1, we first construct an intermediate prediction based on cross-similarity.
According to [28], using high-level feature similarity can make a preliminary prediction, which highlights the most recognizable area of the class. After that, we utilize self-similarity to refine the segmentation of the query image. This is because self-similarity contains the structural information of an image, which can expand a partial segmentation to the whole object. We also adopt a pyramid structure to implement our model. Our Target-aware Bi-Transformer Module (TBTM) can aggregate two affinity matrices at the same layer, guided by previous-layer information, and then transfer the refined similarity to the next layer. This is because a high-level intermediate prediction can roughly locate the target, but the boundary of an object is hard to distinguish due to the low resolution. Increasing the segmentation resolution progressively with the expansion of the affinity matrix can make the boundary more accurate. In order to make the model concentrate only on the categories we are interested in, we propose a Target-aware Transformer Module (TTM), which consists of two Target-aware Transformer Layers (TTL). Inheriting the virtue of the transformer, each pixel of the query image has a global receptive field over the support image. Guided by the mask, TTL only focuses on the target in the foreground. Since our model takes affinity matrices as inputs, which are low-dimensional features, the learned parameters are far fewer than those of vanilla FSS models [10, 13, 19, 28]. Besides, the training time is shorter, and the computing complexity is lower than other methods due to the fewer parameters. Although our model is small, it can still achieve state-of-the-art performance. All in all, our contributions are: * A Bi-Transformer architecture is proposed for few-shot segmentation, which can take advantage of self-similarity to boost performance. * We propose a novel Target-aware Transformer, which can efficiently extract the target's hypercorrelation information under the guidance of the mask. * Our TBTNet only has 0.4M learnable parameters, making it the lightest FSS model to date. * Our model can converge quickly and achieve SOTA performance. ## 2 Related works ### Few-shot semantic segmentation Few-shot semantic segmentation is a branch of semantic segmentation that aims to assign each pixel to a particular class with only a few examples. It was first proposed by [2], which adopts a meta-learning paradigm to propagate support branch annotations to the query branch. Soon afterward, Jake _et al_. [9] imported the idea of prototypes into FSS, extracting the prototype of a certain class from the support set and segmenting the query image by the prototype. Recently, Liu _et al_. [26] tried to alleviate intra-class variations by generating an intermediate prototype from both query and support images. Although prototype-based methods have had great success in FSS, they disregard a lot of pixel structure information, which hinders the performance of this approach. PFNet [28] uses the affinity matrix between query and support high-level features to obtain a prior segmentation with a parameter-free method. The idea of hypercorrelation, a 4D tensor transformed from affinity matrices and squeezed with 4D convolutions, was introduced by [10]. Following [10], ASNet [5] replaced 4D convolutions with an attention mechanism based on the transformer to compress the hypercorrelation. Figure 1: Illustration of the Bi-Transformer architecture for few-shot segmentation. ### Vision Transformer Ashish _et al_.
[3] first proposed the transformer in the Natural Language Processing (NLP) field, where it is now the standard architecture. After that, ViT [6] introduced the transformer to Computer Vision (CV) and achieved great success. Recently, many transformer-based methods have been proposed in FSS. CyCTR [8] screens out reliable support features as query tokens to implement cross attention with the query image. DCAMA [25] aggregates the mask by the attention between query and support features. ## 3 Problem setting There are two sets of data, \(D_{train}\) and \(D_{test}\). The former is used to train the FSS model, and the latter is for testing, to evaluate the accuracy of the model. Each set contains many episodes \(E=\{I^{q},M^{q},I^{s},M^{s}\}\), where \(I^{s}\) and \(I^{q}\) represent the support image and query image, and \(M^{s}\) and \(M^{q}\) denote the corresponding binary masks of a certain category. For the k-shot scenario, \(E=\{I^{q},M^{q},I^{s}_{1},M^{s}_{1},I^{s}_{2},M^{s}_{2},...,I^{s}_{K},M^{s}_{K}\}\). The categories of \(D_{train}\) and \(D_{test}\) are disjoint, which means \(C_{train}\cap C_{test}=\varnothing\), where \(C_{train}\) and \(C_{test}\) are the classes of \(D_{train}\) and \(D_{test}\). During the training stage, we randomly sample episodes \(E\) from \(D_{train}\) to learn a network that can predict \(M^{q}\) from \(\{I^{q},I^{s}_{1},M^{s}_{1},I^{s}_{2},M^{s}_{2},...,I^{s}_{K},M^{s}_{K}\}\). At the inference stage, our model samples episodes from \(D_{test}\) and predicts the novel-class target segmentation \(M^{q}\). Figure 2: Overall network architecture. Our TBTNet consists of four main sub-modules: feature extraction, similarity computation, a TBTM pyramidal encoder, and a simple convolution decoder. For more details please refer to Sec. 4. ## 4 Method ### Overview As shown in Fig. 2, our Target-aware Bi-Transformer Network (TBTNet) consists of three Target-aware Bi-Transformer Modules and two decoders. Firstly, a pre-trained backbone is used to extract the features of the support and query images respectively. After that, we compute the cosine similarity between query and support features, and between the query and itself, to obtain the hypercorrelations. Next, the hypercorrelations with the same resolution are mixed by a TBTM. The output of a TBTM contains a predicted query mask and a tensor which is the input of the next TBTM. The output of the last TBTM, which has mixed all the hypercorrelation information, is sent to the decoder to make the final prediction. ### Hypercorrelation Features Computation Following [10], we take ResNet50 and ResNet101 [12], pre-trained on ImageNet [15], as the backbone to extract the features of images. \(\mathrm{F}_{l,d}^{s},\mathrm{F}_{l,d}^{q}\in\mathbb{R}^{C_{l}\times H_{l}\times W_{l}}\) are the features of the support and query images \(I^{s},I^{q}\in\mathbb{R}^{3\times H\times W}\) respectively: \[\{\{\mathrm{F}_{l,d}^{*}\}_{d=1}^{D_{l}}\}_{l=2}^{4}=\mathrm{ResNet}(I^{*}), \tag{1}\] where \(l\) denotes the output layer of ResNet, \(D_{l}\) means the number of blocks at layer \(l\), and \(*\in\{s,q\}\). #### 4.2.1 Cross- & self-similarity.
Since the features extracted from the backbone contain rich semantic information, we compute the cosine similarity between features to obtain the cross-similarity \(\mathrm{A}_{l,d}^{qs}\in\mathbb{R}^{H_{l}^{q}W_{l}^{q}\times H_{l}^{s}W_{l}^{s}}\): \[\mathrm{A}_{l,d}^{qs}(p^{q},p^{s})=\mathrm{ReLU}\left(\frac{\mathrm{F}_{l,d}^{q}(p^{q})^{T}\,\mathrm{F}_{l,d}^{s}(p^{s})}{\|\mathrm{F}_{l,d}^{q}(p^{q})\|\|\mathrm{F}_{l,d}^{s}(p^{s})\|}\right), \tag{2}\] where \(p^{*}\) denotes 2D positions and \(\mathrm{F}_{l,d}^{*}(p^{*})\in\mathbb{R}^{C_{l}\times 1}\). We compute the self-similarity \(\mathrm{A}_{l,d}^{qq}\) in the same way as the cross-similarity, only replacing \(\mathrm{F}_{l,d}^{s}\) with \(\mathrm{F}_{l,d}^{q}\).
#### 4.2.2 Hypercorrelation.
To obtain the cross-hypercorrelation \(\mathrm{X}_{l}^{qs}\in\mathbb{R}^{H_{l}^{q}W_{l}^{q}\times H_{l}^{s}W_{l}^{s}\times D_{l}}\) and the self-hypercorrelation \(\mathrm{X}_{l}^{qq}\in\mathbb{R}^{H_{l}^{q}W_{l}^{q}\times H_{l}^{q}W_{l}^{q}\times D_{l}}\), we stack all the affinity matrices at the same layer, \[\mathrm{X}_{l}^{q*}=\mathrm{Stack}(\{\mathrm{A}_{l,d}^{q*}\}_{d=1}^{D_{l}}), \tag{3}\]
### Target-aware Bi-Transformer Module
Previous approaches only use the cross-similarity to predict the segmentation of the query image, often leading to incomplete results, which greatly limits the capability of the model. In contrast, self-similarity contains the structural information inherent in the image, which helps make the prediction more complete. Therefore, we designed the TBTM, which first passes the cross-hypercorrelation through two Target-aware Transformer Modules (TTM) and a Convolution Block to obtain the intermediate prediction and then, under the guidance of this prediction, refines the self-hypercorrelation through the other two TTMs. As shown in Fig. 2, TTM aims to reduce the support spatial sizes progressively and change the channels of the hypercorrelation. \[\widetilde{\mathrm{X}}_{l}^{qs}=\mathrm{TTM}(\mathrm{X}_{l}^{qs},M^{s}), \tag{4}\] \[\overline{\mathrm{X}}_{l}^{qs}=\mathrm{TTM}(\widetilde{\mathrm{X}}_{l}^{qs}\oplus T_{l+1},M^{s}), \tag{5}\] where \(M^{s}\in\{0,1\}^{H\times W}\) is the binary segmentation map of the support image. \(\widetilde{\mathrm{X}}_{l}^{qs}\in\mathbb{R}^{H_{l}^{q}W_{l}^{q}\times\widetilde{H_{l}^{s}}\widetilde{W_{l}^{s}}\times D}\) and \(\overline{\mathrm{X}}_{l}^{qs}\in\mathbb{R}^{H_{l}^{q}W_{l}^{q}\times 1\times D}\) are the outputs of the TTMs. Note that both \(\widetilde{H_{l}^{s}}\) and \(\widetilde{W_{l}^{s}}\) are smaller than \(H_{l}^{s}\) and \(W_{l}^{s}\), respectively. \(\mathrm{T}_{l+1}\) denotes the MixToken from the previous layer, which has mixed the self- and cross-similarity information. It is initialized to 0 and \(\mathrm{T}_{l+1}\in\mathbb{R}^{H_{l}^{q}W_{l}^{q}\times 1\times D}\). We utilize broadcasted element-wise addition to sum \(\widetilde{\mathrm{X}}_{l}^{qs}\) and \(\mathrm{T}_{l+1}\) because their shapes are different. \(\overline{\mathrm{X}}_{l}^{qs}\) is then sent to a convolution block to compute \(\widehat{\mathrm{M}}_{l}^{q}\): \[\widehat{\mathrm{M}}_{l}^{q}=\mathrm{ReLU}(\mathrm{Conv}(\mathrm{ReLU}(\mathrm{Conv}(\overline{\mathrm{X}}_{l}^{qs})))), \tag{6}\] where \(\widehat{\mathrm{M}}_{l}^{q}\in\mathbb{R}^{2\times H_{l}^{q}\times W_{l}^{q}}\) denotes the predicted segmentation of the query image at layer \(l\). The convolution block consists of two alternating convolution layers and ReLU activation functions.
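To make Eqs. (2) and (3) concrete, the following is a minimal PyTorch sketch of the similarity computation; the function and tensor names (`cosine_affinity`, `feat_q`, `feat_s`) are illustrative and not taken from the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cosine_affinity(feat_q, feat_s):
    """Eq. (2): ReLU-clamped cosine similarity between every query/support position.
    feat_q: (C, Hq, Wq) query features; feat_s: (C, Hs, Ws) support features.
    Returns an affinity matrix of shape (Hq*Wq, Hs*Ws)."""
    c = feat_q.shape[0]
    q = F.normalize(feat_q.reshape(c, -1), dim=0)  # unit-norm feature per position
    s = F.normalize(feat_s.reshape(c, -1), dim=0)
    return torch.relu(q.t() @ s)

def hypercorrelation(feats_q, feats_s):
    """Eq. (3): stack the D_l per-block affinity matrices of one backbone layer."""
    return torch.stack([cosine_affinity(fq, fs)
                        for fq, fs in zip(feats_q, feats_s)], dim=-1)
```

The self-hypercorrelation is obtained by passing the query features on both sides, i.e. `hypercorrelation(feats_q, feats_q)`.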
We can get a binary segmentation map \(\mathrm{M}_{l}^{q}\in\{0,1\}^{H_{l}^{q}\times W_{l}^{q}}\) easily from \(\widehat{\mathrm{M}}_{l}^{q}\): \[\mathrm{M}_{l}^{q}(x,y)=\left\{\begin{array}{ll}0&\mathrm{if}\ \widehat{\mathrm{M}}_{l}^{q}(0,x,y)>\widehat{\mathrm{M}}_{l}^{q}(1,x,y),\\ 1&\mathrm{otherwise}.\end{array}\right. \tag{7}\]
### Target-aware Transformer Module
The traditional transformer has global attention, so it is difficult to make the model focus only on specific categories, because the support images contain multiple objects. Therefore, we propose the Target-aware Transformer Module (TTM), so that the model only calculates the hypercorrelation of the target inside the mask. TTM consists of multiple Target-aware Transformer Layers (TTL), and the structure of TTL is illustrated in Fig. 3. In order to gradually reduce the support spatial size, _i.e._, \(H^{s}W^{s}\), we replace the linear layer with a convolution layer to project the input \(\mathrm{X}_{in}\) into \(\mathrm{X}_{Q},\mathrm{X}_{K},\mathrm{X}_{V}\), and a shortcut term \(\mathrm{X}_{SC}\): \[\mathrm{X}_{\bigstar}=\mathrm{Conv}_{\bigstar}(\mathrm{Drop}(\mathrm{X}_{in})), \tag{11}\] where \(\bigstar\in\{Q,K,V,SC\}\), \(\mathrm{X}_{in}\in\mathbb{R}^{H^{q}W^{q}\times H^{s}W^{s}\times D_{in}}\), and Drop means randomly setting elements to \(0\) with rate \(\beta\). We only perform the drop operation on the self-hypercorrelation branch, _i.e._, \(\mathrm{X}_{in}=\mathrm{X}_{l}^{qq}\). The mask acts as a filter, so that only foreground information is retained in \(\mathrm{X}_{V}\). We regard the query spatial size, _i.e._, \(H^{q}W^{q}\), as the batch size and carry out Batch Matrix Multiplication (BMM) to compute \[\dot{\mathrm{X}}_{out}=\mathrm{Softmax}(\mathrm{X}_{Q}\mathrm{X}_{K}^{T})(\mathrm{X}_{V}\odot\widetilde{M}^{s}), \tag{12}\] where \[\widetilde{M}^{s}=\mathrm{DownSample}(\mathrm{M}^{s})\in\{0,1\}^{\hat{H}^{s}\times\hat{W}^{s}}, \tag{13}\] and \(\odot\) denotes the broadcasted dot product. Two multi-layer perceptron and normalization layers follow to calculate the final output \(\mathrm{X}_{out}\in\mathbb{R}^{H^{q}W^{q}\times\hat{H}^{s}\hat{W}^{s}\times D_{out}}\): \[\ddot{\mathrm{X}}_{out}=\mathrm{Norm}(\mathrm{MLP}(\dot{\mathrm{X}}_{out})+\dot{\mathrm{X}}_{out}+\mathrm{X}_{SC}), \tag{14}\] \[\mathrm{X}_{out}=\mathrm{Norm}(\mathrm{MLP}(\ddot{\mathrm{X}}_{out})+\ddot{\mathrm{X}}_{out}). \tag{15}\] We have thus reduced the support spatial size from \(H^{s}W^{s}\) to \(\hat{H}^{s}\hat{W}^{s}\) while changing the number of channels from \(D_{in}\) to \(D_{out}\).
### Segmentation Decoder
The structure of both decoders is the same as that of the Convolution Block in the TBTM; it is simple but efficient for obtaining the final prediction \(\widehat{\mathrm{M}}_{1}^{q}\in\mathbb{R}^{2\times H_{1}^{q}\times W_{1}^{q}}\). The model parameters are optimized by the cross-entropy loss between a series of predictions \(\{\widehat{\mathrm{M}}_{l}^{q}\}_{l=1}^{4}\) and the ground truth \(\mathrm{M}^{q}\in\{0,1\}^{H\times W}\) over all pixel locations. Note that we upsample all the predictions to the same size as \(M^{q}\) by bilinear interpolation before computing the loss. We also set a hyperparameter \(\alpha\) to adjust the weights of \(\{\mathcal{L}_{l}=\mathrm{CE}(\widehat{\mathrm{M}}_{l}^{q},\mathrm{M}^{q})\}_{l=1}^{4}\): \[\mathcal{L}_{total}=(1-3\times\alpha)\mathcal{L}_{1}+\alpha\sum_{l=2}^{4}\mathcal{L}_{l}, \tag{16}\] where \(\mathrm{CE}\) denotes cross-entropy and \(\alpha=0.1\) in all the experiments.
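As a schematic of the mask-guided attention in Eqs. (12)-(13), the sketch below treats the query positions as the batch dimension for `torch.bmm`; the token shapes produced by the convolutional projections are assumptions here, not the authors' exact implementation.

```python
import torch

def target_aware_attention(x_q, x_k, x_v, mask_s):
    # x_q, x_k: (HqWq, S, d) projected tokens over S (reduced) support positions
    # x_v: (HqWq, S, D) value tokens; mask_s: (S,) downsampled binary mask, Eq. (13)
    attn = torch.softmax(torch.bmm(x_q, x_k.transpose(1, 2)), dim=-1)  # (HqWq, S, S)
    v_fg = x_v * mask_s.view(1, -1, 1)  # zero out background support tokens, Eq. (12)
    return torch.bmm(attn, v_fg)        # (HqWq, S, D)
```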
## 5 Experiments
In this section, we conduct extensive experiments on the PASCAL-\(5^{i}\) [2] and COCO-\(20^{i}\) [16] datasets, which are prevalent in the few-shot segmentation field, and we use mIoU and FB-IoU as metrics to compare our results with recent state-of-the-art methods. Finally, we analyze the influence of each proposed module through extensive ablation experiments. All experiments are implemented in PyTorch [1]. Following HSNet [10], we use Adam [14] as the optimizer to update the model parameters, and the learning rate is set to 0.001. The batch size is set to 8 for all experiments. The spatial sizes of both query and support images are set to 400×400 without any data augmentation. Following ASNet [5], we set \(H_{2}^{q},W_{2}^{q}=50\), \(H_{2}^{s},W_{2}^{s},H_{3}^{s},W_{3}^{s},H_{3}^{q},W_{3}^{q}=25\) and \(H_{4}^{q},W_{4}^{q},H_{4}^{s},W_{4}^{s}=13\). Different from other methods [5, 10, 19], we train for only 50 epochs on PASCAL-\(5^{i}\) and 20 on COCO-\(20^{i}\), which is much less than the others.
### Datasets
PASCAL-\(5^{i}\) includes the PASCAL VOC 2012 [7] dataset and the extended annotations from SDS [4], which together contain images of 20 object categories. All the images are evenly divided into 4 folds \(i=\{0,...,3\}\); each fold contains 5 classes \(C_{test}^{i}=\{5\times i,...,5\times i+4\}\) for testing and the remaining 15 classes \(C_{train}^{i}=\{0,...,19\}-C_{test}^{i}\) for training. Following [28], we randomly sample 1000 support-query pairs for testing. COCO-20\({}^{i}\) [16] is based on MSCOCO [20] and is much more difficult than PASCAL-5\({}^{i}\). We divide it into 4 folds in the same way as PASCAL-5\({}^{i}\), but each fold contains 60 categories for training and 20 for testing.
### Comparison with State-of-the-Arts
As shown in Tables 1 and 2, we compare the performance of TBTNet with recent strong approaches on PASCAL-5\({}^{i}\) [2] and COCO-20\({}^{i}\) [16], respectively. Extensive experiments indicate that our model achieves higher accuracy and shorter training time with fewer parameters. TBTNet outperforms all other models on PASCAL-5\({}^{i}\), whether taking ResNet50 or ResNet101 as the backbone. It achieves the best or second-best results on each fold, notably exceeding ASNet by 2.5 mIoU on fold 3 with ResNet101. TBTNet exceeds the previous SOTA model ASNet by 1.4 mIoU, setting a new record. As for the number of learnable parameters, our TBTNet has only 0.4M, which is 3.7% of PFENet's and 30.8% of ASNet's. Due to the small number of parameters, our model is easy to train and only needs 50 epochs to converge, which is 10% of ASNet's training schedule and 25% of the others'.
\begin{table}
\begin{tabular}{c c|c c c c c c|c c c c c c|c c}
\hline \hline
Backbone & \multirow{2}{*}{Methods} & \multicolumn{6}{c|}{1-shot} & \multicolumn{6}{c|}{5-shot} & learnable & train \\
network & & 5\({}^{0}\) & 5\({}^{1}\) & 5\({}^{2}\) & 5\({}^{3}\) & mean & FB-IoU & 5\({}^{0}\) & 5\({}^{1}\) & 5\({}^{2}\) & 5\({}^{3}\) & mean & FB-IoU & params & epoch \\
\hline
\multirow{6}{*}{ResNet50} & PFENet [28] & 61.7 & 69.5 & 55.4 & 56.3 & 60.8 & 73.3 & 63.1 & 70.7 & 55.8 & 57.9 & 61.9 & 73.9 & 10.8M & 200 \\
& HSNet [10] & 64.3 & 70.7 & 60.3 & 60.5 & 64.0 & 76.7 & 70.3 & 73.2 & 67.4 & 62.1 & 69.5 & 80.6 & 2.6M & - \\
& SSP [18] & 60.5 & 67.5 & **66.4** & 51.0 & 61.4 & - & 67.5 & 72.3 & **75.2** & 62.1 & 69.3 & - & 8.7M & - \\
& VAT [19] & 67.6 & 72.0 & 62.3 & 60.1 & 65.5 & 72.8 & **72.4** & 73.6 & 68.6 & 65.7 & 70.1 & **80.9** & 3.2M & 300 \\
& IPRNet [17] & 65.2 & **72.9** & 63.3 & 61.3 & 65.7 & - & 70.2 & **75.6** & 68.9 & 66.2 & **70.2** & - & - & 200 \\
\cline{2-16}
& **Ours** & **68.7** & 72.0 & 62.4 & **62.6** & **66.4** & **77.9** & 70.6 & 75.0 & 66.6 & **68.1** & 70.1 & 80.1 & 0.3M & 50 \\
\hline
\multirow{5}{*}{ResNet101} & PFENet [28] & 60.5 & 69.4 & 54.4 & 55.9 & 60.1 & 72.9 & 62.8 & 70.4 & 54.9 & 57.6 & 61.4 & 73.5 & 10.8M & 200 \\
& HSNet [10] & 67.3 & 72.3 & 62.0 & 63.1 & 66.2 & 77.6 & 71.8 & 74.4 & 67.0 & 68.3 & 70.4 & 80.6 & 2.6M & - \\
& ASNet [5] & 69.0 & 73.1 & 62.0 & 63.6 & 66.9 & 78.0 & 73.1 & 75.6 & 65.7 & 69.9 & 71.1 & 81.0 & 1.3M & 500 \\
& IPMT [26] & **71.6** & **73.5** & 58.0 & 61.2 & 66.1 & - & **75.3** & **76.9** & 59.6 & 65.1 & 69.2 & - & - & 200 \\
\cline{2-16}
& **Ours** & 70.2 & 73.3 & **63.6** & **66.1** & **68.3** & **79.0** & 72.2 & **76.0** & **68.3** & **71.5** & **72.0** & **81.6** & 0.4M & 50 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Performance comparison on PASCAL-5\({}^{i}\) [2]. Best results in **bold**, second best underlined.
\begin{table}
\begin{tabular}{c c|c c c c c c|c c c c c c|c c}
\hline \hline
Backbone & \multirow{2}{*}{Methods} & \multicolumn{6}{c|}{1-shot} & \multicolumn{6}{c|}{5-shot} & learnable & train \\
network & & 5\({}^{0}\) & 5\({}^{1}\) & 5\({}^{2}\) & 5\({}^{3}\) & mean & FB-IoU & 5\({}^{0}\) & 5\({}^{1}\) & 5\({}^{2}\) & 5\({}^{3}\) & mean & FB-IoU & params & epoch \\
\hline
\multirow{5}{*}{ResNet50} & PFENet [28] & 36.5 & 38.6 & 34.5 & 33.8 & 35.8 & - & 36.5 & 43.3 & 37.8 & 38.4 & 39.0 & - & 10.8M & 50 \\
& CMNet [21] & **48.7** & 33.3 & 28.8 & 31.2 & 35.0 & - & **49.5** & 35.6 & 31.8 & 33.1 & 37.5 & - & - & 50 \\
& IPMT [26] & **41.4** & **45.1** & **45.6** & 40.0 & 43.0 & - & 43.5 & 49.7 & 48.7 & **47.9** & 47.5 & - & - & 50 \\
& VAT [19] & 39.0 & 43.8 & 42.6 & 39.7 & 41.3 & 68.8 & 44.1 & 51.1 & 50.2 & 46.1 & 47.9 & 72.4 & 3.3M & - \\
\cline{2-16}
& **Ours** & 39.8 & **46.9** & 44.6 & **43.8** & **43.8** & **70.6** & 45.6 & **54.7** & **51.5** & 47.2 & **49.7** & **72.7** & 0.3M & 20 \\
\hline
\multirow{5}{*}{ResNet101} & PFENet [28] & 34.3 & 33.0 & 32.3 & 30.1 & 32.4 & - & 38.5 & 38.6 & 38.2 & 34.3 & 37.4 & - & 10.8M & 50 \\
& HSNet [10] & 37.2 & 44.1 & 42.4 & 41.3 & 41.2 & 69.1 & 45.9 & 53.0 & 51.8 & 47.1 & 49.5 & 72.4 & 2.6M & - \\
& ASNet [5] & **41.8** & 45.4 & 43.2 & 41.9 & 43.1 & 69.4 & **48.0** & 52.1 & 49.7 & 48.2 & 49.5 & 72.7 & 1.3M & - \\
& IPMT [26] & **40.5** & 45.7 & 44.8 & 39.3 & 42.6 & - & 45.1 & 50.3 & 49.3 & 46.8 & 47.9 & - & - & 50 \\
\cline{2-16}
& **Ours** & 40.2 & **47.5** & **46.6** & **45.3** & **44.9** & **71.2** & 46.2 & **55.5** & **52.7** & **49.4** & **50.9** & **73.3** & 0.4M & 20 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Performance comparison on COCO-20\({}^{i}\) [16].
To the best of our knowledge, TBTNet is the model with the shortest training period to date. On the more difficult COCO-20\({}^{i}\) dataset, TBTNet also achieves remarkable performance. Our model obtains the best score on folds 1, 2, 3 and in mean mIoU, under both the 1-shot and 5-shot settings and with both backbones. This shows that TBTNet generalizes well, with almost no bias towards particular categories. Under the 1-shot configuration, TBTNet outperforms ASNet by 1.8 mIoU when taking ResNet101 as the backbone. As on the PASCAL dataset, our training period is only 40% of the others. In Fig. 5, we visualize the inference procedure of TBTNet and compare the predictions with ASNet, one of the SOTA models. We observe that the segmentation is gradually refined as the resolution increases layer by layer, and that our model uses self-similarity to make the segmentation more complete.
Figure 5: Qualitative comparison between our proposed TBTNet and ASNet. From left to right: support image, query image, intermediate predictions of TBTNet at layers 4, 3, and 2, final prediction, ground truth, and the prediction of ASNet.
### Ablation Study
All ablation experiments are carried out on PASCAL-5\({}^{i}\) [2] with the ResNet101 backbone and the 1-shot setting.
#### 5.3.1 Effectiveness of Bi-Transformer architecture.
We conduct an ablation study by modifying the structure of the TBTM to evaluate the influence of self-similarity. As shown in Table 3, "Bi-T" indicates whether the self-similarity branch is used in the TBTM. In other words, without "Bi-T" the update is \(\mathrm{T}_{l}=\mathrm{T}_{l+1}+\overline{\mathrm{X}}_{l}^{qs}\), whereas with "Bi-T" it is \(\mathrm{T}_{l}=\mathrm{T}_{l+1}+\overline{\mathrm{X}}_{l}^{qs}+\overline{\mathrm{X}}_{l}^{qq}\), cf. Eq. (10).
Experiments indicate that the self-similarity branch leads to a 3.1% increase in mIoU. This improvement shows that the Bi-Transformer architecture is highly effective for FSS. **Effectiveness of TTL.** To explore the strength of the proposed TTL, we compare it with the Attention Squeeze Layer (ASL) [5]. In Table 3, "TTL" and "ASL" denote the sub-module used in the TTM. We set \(\beta=0.05\) in both experiments for fairness. When TTL is replaced with ASL, a significant drop can be observed, with mIoU descending from 68.3 to 67.7. This indicates that our proposed TTL is more effective than ASL, which may benefit from the richer residual structure in Eq. (14). **Ablation study on the dropout rate.** We conducted a series of experiments to find the optimal parameter \(\beta\); the results are shown in Fig. 4. The mIoU reaches its peak at 68.3 when \(\beta\) is 0.05. As \(\beta\) increases, the mIoU rises and then falls. This is because an appropriate \(\beta\) effectively prevents overfitting and enhances the generalization ability of the model, whereas an excessive \(\beta\) discards too much information, thus hindering performance.
## 6 Conclusion
In this paper, we introduce the Bi-Transformer architecture to few-shot segmentation. To utilize self-similarity information efficiently, we propose the TBTM to integrate it with cross-similarity. A novel TTL, a variant of the transformer, is also proposed to compact the similarity information. Our TBTNet is a lightweight and fast-converging model. Its effectiveness has been demonstrated by its outstanding performance on the standard few-shot segmentation benchmarks. We hope that our research will shed light on other domains where similarity analysis is required.
2306.00555
Sensitivity Analysis of High-Dimensional Models with Correlated Inputs
Sensitivity analysis is an important tool used in many domains of computational science to either gain insight into the mathematical model and interaction of its parameters or study the uncertainty propagation through the input-output interactions. In many applications, the inputs are stochastically dependent, which violates one of the essential assumptions in the state-of-the-art sensitivity analysis methods. Consequently, the results obtained ignoring the correlations provide values which do not reflect the true contributions of the input parameters. This study proposes an approach to address the parameter correlations using a polynomial chaos expansion method and Rosenblatt and Cholesky transformations to reflect the parameter dependencies. Treatment of the correlated variables is discussed in context of variance and derivative-based sensitivity analysis. We demonstrate that the sensitivity of the correlated parameters can not only differ in magnitude, but even the sign of the derivative-based index can be inverted, thus significantly altering the model behavior compared to the prediction of the analysis disregarding the correlations. Numerous experiments are conducted using workflow automation tools within the VECMA toolkit.
Juraj Kardos, Wouter Edeling, Diana Suleimenova, Derek Groen, Olaf Schenk
2023-05-31T14:48:54Z
http://arxiv.org/abs/2306.00555v1
# Sensitivity Analysis of High-Dimensional Models with Correlated Inputs
###### Abstract
Sensitivity analysis is an important tool used in many domains of computational science to either gain insight into the mathematical model and interaction of its parameters or study the uncertainty propagation through the input-output interactions. In many applications, the inputs are stochastically dependent, which violates one of the essential assumptions in the state-of-the-art sensitivity analysis methods. Consequently, the results obtained ignoring the correlations provide values which do not reflect the true contributions of the input parameters. This study proposes an approach to address the parameter correlations using a polynomial chaos expansion method and Rosenblatt and Cholesky transformations to reflect the parameter dependencies. Treatment of the correlated variables is discussed in context of variance and derivative-based sensitivity analysis. We demonstrate that the sensitivity of the correlated parameters can not only differ in magnitude, but even the sign of the derivative-based index can be inverted, thus significantly altering the model behavior compared to the prediction of the analysis disregarding the correlations. Numerous experiments are conducted using workflow automation tools within the VECMA toolkit.
keywords: Global sensitivity analysis, Uncertainty quantification, Parameter Correlation, Sobol index, Polynomial Chaos Expansion
+ Footnote †: journal: Journal of Computational Science
## 1 Nomenclature
\begin{tabular}{l l}
\(Q\) & A set of uncertain input parameters \(Q_{i}\) \\
\(D\) & The number of input parameters \\
\(\rho_{Q_{i}}\) & Parameter probability density function \\
\(\mathbf{q}\) & A set of parameter realizations \\
\(\boldsymbol{Y}=U(\mathbf{t},\boldsymbol{x},\boldsymbol{Q})\) & Vector of the application model outputs \\
\(U\) & An application model \\
\(V\) & Variance of the model output \\
\(P\) & Degree of the polynomial basis \\
\(\Psi\) & Polynomial basis \\
\(\mathbf{a}\) & Polynomial coefficients for the basis \(\Psi\) \\
\(\hat{\cdot}\) & Quantities related to the polynomial approximation of the true model \\
\(\mathbb{E}\) & Expectation value operator \\
\(S_{i}\) & Variance-based sensitivity index \\
\(S_{i}^{\mathcal{D}}\) & Derivative-based sensitivity index \\
\((\cdot)^{*}\) & Denotes the correlated variables/samples \\
\(\boldsymbol{\mu}\) & Mean vector of the uncertain parameters \\
\(\Sigma\) & Covariance matrix of the parameters \\
\(C\) & Correlation matrix of the parameters \\
\(L\) & Cholesky factor of the correlation matrix \\
\(\mathcal{P}\) & Permutation vector \\
\end{tabular}
## 2 Introduction
Sensitivity analysis (SA) is a technique for understanding how changes in the input parameters influence the uncertainty in the output of a model or simulation. SA facilitates the understanding of how the outputs of a model change with respect to variations in the input parameters. It is particularly useful for complex models, in order to determine which parameters cause the greatest variation of the output and to quantify the sensitivity of the model to changes in these parameters. Additionally, SA can be used to improve the accuracy of a model by identifying and reducing sources of uncertainty in the input data. Two SA methods are studied in this manuscript: a global variance-based method, where the sensitivity is computed over the support of the input distributions, and a local derivative-based method, where the sensitivity is studied only in the vicinity of a fixed input point.
The variance-based SA method [1] quantifies the sensitivity of each input parameter by estimating its contribution to the overall variance of the model output. This is achieved by decomposing the variance of the model output into contributions which arise from the individual input parameters or their interactions, and the parameters are assigned a sensitivity index based on their relative contributions. This sensitivity index is also known as the Sobol index [2]. Variance-based methods allow full exploration of the input space, accounting also for the interactions and nonlinear responses. The variance-based sensitivity is used especially in the context of uncertainty quantification, where the input parameters are usually characterized by a probability density function, modeling their uncertain nature or reflecting the uncertainty in the data collection method. The current state of the art of variance-based SA comprises two main methodologies: quasi-Monte Carlo (QMC) [2] and methods based on model surrogates such as polynomial chaos expansion (PCE) [3; 4]. Both approaches are based on sampling the input parameters from the given probability distributions and evaluating the model at the sampled parameter values. In the case of the QMC approach, this process is repeated thousands of times, and statistical metrics such as the mean and variance are computed from the resulting series of model outputs. The general idea behind PCE, on the other hand, is to approximate the model input-output relationship with a polynomial expression, which is then used to directly obtain statistical metrics such as the mean and variance, while the first and total-order Sobol indices can also be calculated directly from the polynomial model [5]. In the case of the derivative-based analysis, the sensitivity information comprises the partial derivative of the model output with respect to an input parameter at some fixed point in the input space. The analytical derivative is often unknown, thus standard methods such as finite differences (FD) are used. The domain of the FD study is local, since such an analysis can consider only the vicinity of a single parameter around its fixed operating point. However, this shortcoming can be circumvented by exploiting a surrogate model for which the derivative can be computed analytically. This, in turn, allows one to study sensitivity considering interactions between multiple parameters via correlations. In this manuscript, the derivative-based sensitivity indices are computed from the PCE surrogate model, in order to obtain information about the interaction of correlated variables. The variance-based SA is used more during the initial phases of model design, where the goal is to understand the behavior of a model or simulation and the sources of uncertainty in its inputs. It can be used to guide model calibration by identifying the most important parameters and to determine the range of input values that result in acceptable output values. Similarly, it can be used to optimize the design of a system by identifying the inputs that have the greatest impact on the performance of the system, and by exploring the trade-offs between different design options [6]. On the other hand, the derivative-based sensitivity is particularly useful in an operational context, where it is used to understand how changes in the input variables affect the output of a system or process.
This can guide the design of robust control systems that are resilient to variations in the system's inputs. Alternatively, it can be used to manage risk by understanding how changes in input variables affect the risk of a system or process. For example, in finance, partial derivatives can be used to calculate the sensitivity of the value of a portfolio to changes in the underlying asset prices or to guide forward hedging ratios in commodity trading [7].
### Motivation and Research Context
Correlation of the input parameters is a common phenomenon in many scientific and engineering models, yet there have been few studies on sensitivity analyses with correlated parameters. Since the standard SA methods assume that the parameters are stochastically independent, correlations can have a significant impact on the results of the analysis. The presence of parameter correlations invalidates several assumptions, e.g., the polynomials in the PCE are no longer orthogonal. Additionally, if two input parameters are highly correlated, it may be erroneous to draw conclusions about which of the two parameters has a greater impact on the output of the model using standard SA methods. Similarly, the results do not provide adequate information to determine the sensitivity of the model to variations in these inputs. Consequently, the presence of correlation between the input parameters can lead to biased estimates of the model sensitivity, which can lead to incorrect conclusions about the importance of the inputs and the input-output interactions. For example, in the context of energy market models, input parameters such as the cost of fossil fuel resources (liquid fuels and natural gas) account for the majority of the variance in the total energy system cost. However, these parameters are often tightly correlated, and applying the state-of-the-art SA methods while ignoring the correlation may lead to an optimistic risk assessment of voltage instability, the cost of power generation, a line overload risk, and a power shortage expectation [8; 9].
### Literature Review and Related Work
The literature offers two directions for dealing with correlations during SA: (i) decomposition of the traditional sensitivity indices into correlated and uncorrelated parts [10; 11], and (ii) introduction of new sets of indices which contain all correlations and indices which are reduced by the contributions due to the correlation [12; 13; 14]. The definition of the first order Sobol indices was extended to consider parameter dependencies in [15]. The method extends the QMC framework, such that the sampling is performed considering the conditional probability densities of the individual inputs. In the case of dependent normal distributions, the samples are transformed using the Cholesky decomposition of the correlation matrix. The number of model evaluations required to obtain both the first and total order indices for a simple linear model with three inputs was \(2^{16}\), which is prohibitive for real-world complex models. The interpretation of the indices is also not clear, as in some cases the total Sobol index is smaller than the first order one. In [10; 11], the classical first order Sobol index is split into various components. These components represent the uncorrelated, interactive, and correlated contributions of a given parameter to the output variance. However, the interpretation of these contributions, as well as of the total order indices, remains unclear.
In this approach, the surrogate model is set up using an independent joint input distribution. The polynomials of the PCE expansion are evaluated with the dependent samples, which are subsequently used to compute the covariance of the component functions. Analysis of covariance is then used to compute the resulting indices and their decomposition into the three components. A new set of indices for correlated inputs was introduced by Mara and Tarantola [12; 13; 14]. Two distinct indices represent the correlated and uncorrelated contributions of a given variable. These allow one to distinguish between the mutual dependent contribution and the independent contribution of the parameter to the model response variance. The dependent parameters are decorrelated using the Gram-Schmidt procedure and the Rosenblatt transformation, such that standard SA methods such as the PCE or QMC frameworks can be used. However, since the SA is no longer performed using the original parameters, additional care is required when interpreting the sensitivity indices. Additionally, different permutations of the decorrelated variables can be obtained, thus resulting in multiple sets of indices.
### Contribution and Organization
SA with correlated parameters is studied in this work. The decorrelation approach is based on a transformation of the input parameter space, such that the SA is performed using the independent distributions, following approach (ii) and the work of Mara and Tarantola [12; 13; 14]. The contributions are the following:
* The correlated SA approach is studied in the context of both variance-based and derivative-based sensitivities;
* Two transformations are used in order to reflect the stochastic dependencies in the input parameters, the Cholesky decomposition of the correlation matrix and the Rosenblatt transformation;
* The methods are implemented within the EasyVVUQ SA framework, aiming to leverage large-scale computational resources to make state-of-the-art uncertainty quantification algorithms available and accessible to a wide range of computational scientists;
* We demonstrate the importance of the parameter correlations in the SA and provide extensive numerical experiments accompanied by a comprehensive interpretation of the results.
The following Sec. 3 discusses various aspects of the SA, introducing both variance and derivative based indices. The treatment of the correlated variables and the modifications of the SA algorithm are introduced in Sec. 4. The application model used in the numerical experiments is presented in Sec. 5. Extensive numerical experiments and their analysis are provided in Sec. 6. Sec. 7 concludes the paper and outlines future research directions.
## 3 SA Method without Correlations
The model is usually a complex interaction between its input parameters and outputs, and is treated in a black-box fashion for the purpose of non-intrusive SA. Consider a model \(U\) that is defined over a time horizon \(\mathbf{t}\), space dimension \(\mathbf{x}\) and a set of \(D\) uncertain input parameters \(\mathbf{Q}=\{Q_{1},Q_{2},\ldots Q_{D}\}\), such that \[\mathbf{Y}=U(\mathbf{t},\mathbf{x},\mathbf{Q}). \tag{1}\] The model includes uncertain parameters that can be collectively described by a joint multivariate probability density function \(\rho_{\mathbf{Q}}\).
If the uncertain parameters are statistically independent, the multivariate probability density function \(\rho_{\mathbf{Q}}\) can be defined by separate univariate probability density functions \(\rho_{Q_{i}}\), one for each uncertain parameter \(Q_{i}\), \[\rho_{\mathbf{Q}}=\prod_{i=1}^{D}\rho_{Q_{i}}, \tag{2}\] where unit normal distributions are assumed, such that \(Q_{i}\sim\mathcal{N}(\mu=0,\sigma=1)\). The main computational pattern of the SA in both QMC and PCE consists of drawing the samples \(\mathbf{q}\) from the input parameter space \(\rho_{\mathbf{Q}}\) and evaluating the model \(U(\mathbf{t},\mathbf{x},\mathbf{q})\) at these points. The number \(N\) of such evaluations in the PCE approach, \[N=\begin{pmatrix}D+P\\ P\end{pmatrix}, \tag{3}\] is a function of the polynomial degree \(P\) of the basis and the dimension \(D\) of the parameters, where \(N\) grows fast, especially with an increasing dimension of the parameters. Based on these model evaluations, the true response of the model \(\mathbf{Y}\) is fitted onto a polynomial basis \(\Psi=\{\Psi_{p},p=0,\ldots,P\}\) with a polynomial degree up to \(P\). The basis needs to be orthogonal with respect to the input distributions \(\rho_{Q_{i}}\). The polynomial model \(\hat{\mathbf{Y}}=\hat{U}(\mathbf{t},\mathbf{x},\mathbf{Q})\) is built such that the true model is approximated by the polynomial expansion, \(U(\mathbf{t},\mathbf{x},\mathbf{Q})\approx\hat{U}(\mathbf{t},\mathbf{x},\mathbf{Q})\), and the model outputs are similar, \(\mathbf{Y}\approx\hat{\mathbf{Y}}\). The surrogate model \(\hat{U}(\mathbf{t},\mathbf{x},\mathbf{Q})\) is built from the polynomial basis \(\Psi\) as \[\hat{U}(\mathbf{t},\mathbf{x},\mathbf{Q}) =\sum_{p\subset P}a_{p}\Psi_{p}(\mathbf{Q})\] \[=a_{0}\Psi_{0}+\sum_{p\subset P}\sum_{i=1}^{D}a_{p}^{i}\Psi_{p}^{i}(Q_{i})\] \[+\sum_{p\subset P}\sum_{i,j=1,j>i}^{D}a_{p}^{ij}\Psi_{p}^{ij}(Q_{i},Q_{j})\] \[\qquad\vdots\] \[+\sum_{p\subset P}a_{p}^{12\ldots D}\Psi_{p}^{12\ldots D}(Q_{1},\ldots,Q_{D}), \tag{4}\] where \(\Psi_{0}=1\) is a zero order polynomial, \(\Psi_{p}^{i}(Q_{i})\) is a single dimensional polynomial of degree up to \(p\) for a single input \(Q_{i}\), \(\Psi_{p}^{ij}(Q_{i},Q_{j})\) denotes a polynomial of order up to \(p\) of a combination of two inputs \(Q_{i},Q_{j}\), etc. The polynomial coefficients \(a_{p}\) follow a similar notation. In the non-intrusive variant of the method, the polynomial basis \(\Psi\) is constructed using, e.g., the three-term recurrence or the discretized Stieltjes method [3; 16]. The orthogonality of the polynomials holds in case the \(\mathbf{Q}\) parameters are independent, i.e., the joint density can be expressed as a product of the individual marginal densities as in Eq. (2). The set of polynomial coefficients \(a_{p}\) is determined such that the PCE model \(\hat{U}\) approximates the true model response \(\mathbf{Y}\). In point collocation, the approximation is built such that it minimizes the error at a set of collocation nodes compared to the true model response. Hammersley sampling [3] from the distribution is used to choose the collocation points. This results in a set of linear equations for the polynomial coefficients, which is solved using, e.g., Tikhonov regularization. The overall algorithm is summarized in Alg. 1, where the SA is described in the following sections.
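As an illustration of Alg. 1, the following sketch builds a PCE surrogate by point collocation with the Chaospy package; the two-parameter toy model and all variable names are ours, not part of the referenced implementation, and the regression step corresponds to solving the linear system for the coefficients \(a_{p}\).

```python
import numpy as np
import chaospy as cp

# Independent joint input distribution, cf. Eq. (2)
dist = cp.J(cp.Normal(0, 1), cp.Normal(0, 1))

# Orthogonal polynomial basis of degree P, built by the three-term recurrence
P = 3
expansion = cp.generate_expansion(P, dist)

# Collocation nodes via Hammersley sampling; oversample relative to Eq. (3)
nodes = dist.sample(2 * len(expansion), rule="hammersley")

# Black-box model evaluations at the nodes (an illustrative toy model)
def model(q):
    return q[0] ** 2 + 0.5 * q[0] * q[1]

evals = np.array([model(q) for q in nodes.T])

# Point collocation: linear regression for the coefficients a_p in Eq. (4)
surrogate = cp.fit_regression(expansion, nodes, evals)
```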
### Variance-based Sensitivity
Variance-based SA [1] determines the impact of the input parameters, which can be used to assess the role of the parameters in the model, i.e., determine whether a parameter contributes intrinsically or via the parameter interactions, or to assess the relative importance of the individual parameters. Additionally, variance-based sensitivity quantifies the output uncertainty and its propagation through the model from the uncertain inputs [4; 16]. Following the variance decomposition [2], the total output variance \(V(Y_{n})\) of the \(n\)-th model output from Eq. (1) can be decomposed as \[V(Y_{n})=\sum_{i}V_{i}+\sum_{i}\sum_{j>i}V_{ij}+\ldots+V_{12\ldots D}, \tag{5}\] where the partial variances are defined as \[V_{i}=\mathbb{V}(\mathbb{E}(Y_{n}|Q_{i})), \tag{6}\] \[V_{ij}=\mathbb{V}(\mathbb{E}(Y_{n}|Q_{i},Q_{j}))-V_{i}-V_{j}, \tag{7}\] and so on, and the total variance is \(V(Y_{n})=\mathbb{V}(Y_{n})\). The polynomial coefficients can be post-processed to compute quantities of interest such as the mean, variance and other statistical moments, or the variance-based sensitivity indices [5; 10]. The sensitivity indices in the variance-based measures, known as Sobol indices [2], are defined as the fraction of the variance of the component functions with respect to the total variance. The first order sensitivity index \(S_{i}\) measures the contribution of the \(i\)-th parameter, \[S_{i}=\frac{V_{i}}{V(Y_{n})}. \tag{8}\] The total order sensitivity index \(S_{i}^{T}\) includes not only the intrinsic contribution of the parameter itself, as is the case for the first order index, but also the interactions with other parameters, \[S_{i}^{T}=\frac{\sum_{\alpha}V_{\alpha}}{V(Y_{n})}, \tag{9}\] where \(\alpha\) runs over all multi-indices which contain \(i\). It necessarily holds that \(0\leq S_{i}\leq S_{i}^{T}\leq 1\), and in case the model is additive and there are no parameter interactions, i.e., the higher order terms are zero, then \[\sum_{i}S_{i}=1. \tag{10}\]
### Derivative-based Sensitivity
Derivative-based sensitivity indices express how much the model output changes if a small perturbation is applied to one of the inputs. The analytical derivatives of complex models are not known, thus the usual practice is to use automatic differentiation tools or to adopt approximation techniques such as finite differences to evaluate the numerical derivatives. The model derivative with respect to the parameter \(Q_{i}\) at a fixed point \(Q_{i}^{0}\) is expressed as \[S_{i}^{\mathcal{D}}=\left.\frac{\partial Y_{n}}{\partial Q_{i}}\right|_{Q_{i}^{0}}. \tag{11}\] The shortcoming of this approach is that the resulting index can be computed only in the vicinity of the operating point of the given model configuration, and that it applies to the SA of a single variable at a time, ignoring any possible interactions between the parameters. Alternatively, the derivative-based sensitivity index can be evaluated by constructing the surrogate model \(\hat{U}\) and computing the derivative of the polynomial expression with respect to a given parameter. With this approach, the interaction of the parameters can be incorporated in the SA via the parameter correlations. Thus, the sensitivity indices of the individual parameters can incorporate interactions with other parameters using the procedure proposed in this paper. This approach can be used for both variance-based and derivative-based sensitivities.
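Continuing the sketch above, both families of indices can be read off the surrogate; Chaospy exposes the first and total order Sobol indices directly, and for Eq. (11) a central finite difference on the inexpensive surrogate stands in for the analytical polynomial derivative described in the text.

```python
# Variance-based indices, Eqs. (8)-(9), computed from the PCE coefficients
S_first = cp.Sens_m(surrogate, dist)   # first order Sobol indices
S_total = cp.Sens_t(surrogate, dist)   # total order Sobol indices

# Derivative-based index, Eq. (11), at the mean point q0 of the inputs
q0, h = np.array([0.0, 0.0]), 1e-4
S_D = [(surrogate(*(q0 + h * e)) - surrogate(*(q0 - h * e))) / (2 * h)
       for e in np.eye(2)]              # one entry per parameter Q_i
```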
## 4 SA Method with Correlations
When considering models with correlated parameters, the polynomial expansion (4) cannot be used to accurately represent the model sensitivity, since it does not distinguish whether a parameter contributes to the model directly or through a correlation with another variable. This can lead to incorrect conclusions about the variance-based decomposition, where the importance of the input parameters to the model and the sensitivity of the model to variations in these parameters no longer reflect the true parameter interactions in the model. In order to address the parameter dependency, the parameters must be decorrelated prior to applying the SA. This approach is adopted in the procedure of Mara and Tarantola [12; 13; 14]. In their original work, the samples are drawn from the correlated joint distribution and define a set of new variables, which are characterized by conditional probability density functions and as such can be treated as independent. In this work, the collocation points are sampled using the independent unit normal distributions \(\rho_{\mathbf{Q}}=\mathcal{N}^{D}(\mathbf{\mu},I)\), while the model is evaluated using the transformed samples which account for the dependencies. Fig. 1 illustrates this principle: the independent collocation nodes and their transformation to the target correlated distribution \(\rho_{\mathbf{Q}}^{*}=\mathcal{N}^{D}(\mathbf{\mu},\mathcal{C})\). Since the linear relationship between the random variables is characterized using the Pearson and Spearman correlation coefficients, the correlated samples can be obtained from the independent ones using two different methods: (i) the Rosenblatt transformation [17] and (ii) the Cholesky decomposition of the correlation matrix [18].
### Cholesky Decomposition
Independent samples with an identity correlation matrix are drawn from a joint multivariate distribution \[\mathbf{Q}\sim\mathcal{N}^{D}(\mathbf{\mu},I). \tag{12}\] Since the components \(Q_{i}\) are random variables with zero mean, unit variance, and zero correlation, we have \(\mathbb{E}(Q_{i}Q_{j})=\delta_{ij}\). Hence, \(\mathbb{E}(\mathbf{QQ}^{T})=I\). The joint probability of the independent variables can be expressed as the product of the marginal distributions. On the other hand, the joint distribution of the dependent variables \[\mathbf{Q}^{*}\sim\mathcal{N}^{D}(\mathbf{\mu},\mathcal{C}) \tag{13}\] can be expressed as a product of the conditional distributions, which are not known. An alternative approach is to introduce a transformation between the two spaces of the variables, such that the independent variables \(\mathbf{Q}\) can be transformed to \(\mathbf{Q}^{*}\) and vice versa. The transformation is defined via the Cholesky decomposition of the correlation matrix. The Cholesky decomposition of the correlation matrix \(C\) is computed such that \(L=\text{chol}(C)\) and \(LL^{T}=C\), where \[L=\begin{pmatrix}c_{11}&&\\ c_{21}&c_{22}&\\ c_{31}&c_{32}&c_{33}\end{pmatrix}. \tag{14}\]
The uncorrelated samples \(\mathbf{Q}\) are then transformed to samples that contain the correlations between the variables, as given by the correlation matrix, such that the transformed samples behave as if drawn from the correlated distribution, i.e., \(\mathbf{Q}^{*}=T(\mathbf{Q})=L\mathbf{Q}\), where \(T\) is the transformation operator, \[\begin{pmatrix}Q_{1}^{*}\\ Q_{2}^{*}\\ Q_{3}^{*}\end{pmatrix}=\begin{pmatrix}c_{11}&&\\ c_{21}&c_{22}&\\ c_{31}&c_{32}&c_{33}\end{pmatrix}\begin{pmatrix}Q_{1}\\ Q_{2}\\ Q_{3}\end{pmatrix}. \tag{15}\] The random vector \(\mathbf{Q}^{*}\) behaves such that \(\mathbb{E}(\mathbf{Q}^{*}\mathbf{Q}^{*T})=\mathbb{E}((L\mathbf{Q})(L\mathbf{Q})^{T})=\mathbb{E}(L\mathbf{QQ}^{T}L^{T})=L\,\mathbb{E}(\mathbf{QQ}^{T})\,L^{T}=LIL^{T}=C\), since the expectation is a linear operator. Hence, the transformed random vector \(\mathbf{Q}^{*}\) has the desired correlation matrix \(C\) and \(\mathbf{Q}^{*}\sim\mathcal{N}^{D}(\mathbf{\mu},\mathcal{C})\). One of the requirements for the Cholesky decomposition is that the matrix is positive definite. In practice, the sample covariance matrix is always at least positive semi-definite [19]. In certain situations, the eigenvalues of a covariance matrix can be zero. This can happen when the set of parameters includes constant or perfectly correlated variables, or when the sample size is too small. In this work, the covariance matrix is always assumed to be positive definite.
Figure 1: Independent normal distribution (left) of the input space and the corresponding transformed parameter space with \(\rho_{\mathcal{C}}=0.8\) (right). The contour lines illustrate the multivariate probability density function.
### Rosenblatt Transformation
The Rosenblatt transformation [17] allows a vector of independent random variables \(\mathbf{Q}\) generated from the distribution \(\rho_{\mathbf{Q}}\) to be transformed to the target distribution \(\rho^{*}_{\mathbf{Q}}\), which contains correlations between the variables. The transformed samples \(\mathbf{Q}^{*}=T(\mathbf{Q})\) behave as if they were drawn from the target density \(\rho^{*}_{\mathbf{Q}}\). The Rosenblatt transformation can be derived from a probability decomposition of a bivariate random variable \(\mathbf{Q}^{*}=(Q_{1}^{*},Q_{2}^{*})\) with a correlation as \[\rho^{*}_{\mathbf{Q}}=\rho_{Q_{1}}\rho_{Q_{2}|Q_{1}}, \tag{16}\] where \(\rho_{Q_{1}}\) is a marginal density function, and \(\rho_{Q_{2}|Q_{1}}\) is a conditional density. In a general multivariate case, the density decomposition has the form \[\rho^{*}_{\mathbf{Q}}=\rho_{Q_{1}}\prod_{d_{i}=2}^{D}\rho^{\prime}_{Q_{d_{i}}}, \tag{17}\] where \(\rho^{\prime}_{Q_{d_{i}}}=\rho_{Q_{d_{i}}}|\rho_{Q_{1}},\ldots,\rho_{Q_{d_{i-1}}}\) is conditioned on all components with lower indices. A forward Rosenblatt transformation is then defined as \[T=\left(F_{Q^{\prime}_{1}},\ldots,F_{Q^{\prime}_{d}}\right), \tag{18}\] where \(F_{Q^{\prime}_{d_{i}}}\) is the cumulative distribution function \[F_{Q^{\prime}_{d_{i}}}=\int_{-\infty}^{q_{d_{i}}}\rho_{Q^{\prime}_{d_{i}}}\left(r\mid q_{1},\ldots,q_{d_{i}-1}\right)\mathrm{d}r. \tag{19}\] Note also that the Rosenblatt transformation is not limited to Gaussian distributions. In this work, the implementation provided in the Chaospy [3] package is used.
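Both transformations can be exercised in a few lines; the sketch below assumes Chaospy's `fwd`/`inv` methods realize the forward and inverse Rosenblatt maps, and verifies that either route reproduces the target correlation matrix.

```python
import numpy as np
import chaospy as cp

rho = 0.8
C = np.array([[1.0, rho], [rho, 1.0]])

# Independent standard-normal samples, e.g. the collocation nodes
indep = cp.J(cp.Normal(0, 1), cp.Normal(0, 1))
q = indep.sample(1000, rule="hammersley")       # shape (2, 1000)

# (ii) Cholesky route, Eq. (15): Q* = L Q
L = np.linalg.cholesky(C)
q_chol = L @ q

# (i) Rosenblatt route, Eqs. (18)-(19): forward map of the independent
# density followed by the inverse map of the correlated target density
target = cp.MvNormal([0.0, 0.0], C)
q_ros = target.inv(indep.fwd(q))

print(np.corrcoef(q_chol))   # both empirical correlations are close to C
print(np.corrcoef(q_ros))
```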
### SA Method with Correlations
The SA algorithm introduced in Alg. 1 needs to be modified in the presence of correlated inputs in order to correctly represent the input-output interactions and the sensitivity indices. The changes are summarized in the modified method presented in Alg. 2. The modified method first needs to generate the parameter samples including the correlations. As before, the set of parameter samples \(\mathbf{q}\) is generated from the independent joint distribution \(\rho_{\mathbf{Q}}\) and is subsequently transformed according to the stochastic dependency structure. The correlated samples \(\mathbf{q}^{*}=T(\mathbf{q})\) can be obtained using either the Cholesky or the Rosenblatt transformation. Having created the correlated samples, the modified method next evaluates the true model using the correlated samples, \(\mathbf{Y}^{*}=U(\mathbf{x},\mathbf{t},\mathbf{q}^{*})\). The surrogate model is constructed in the transformed coordinate space compared to the independent model, reflecting the correlated contributions which affect the model outputs. This coordinate space is transformed implicitly, by mapping the polynomial expansion generated from the independent distribution \(\rho_{\mathbf{Q}}\) to the space of the correlated model outputs \(\mathbf{Y}_{n}^{*}\). In other words, the linear regression \[\mathbf{Y}_{n}^{*}=\sum_{p}a_{p}(t)\ \Psi_{p}(\mathbf{q}_{n}) \tag{20}\] is solved, where the left-hand side term is in the correlated space, while the polynomial expansion and the samples \(\mathbf{q}_{n}\) on the right-hand side are from the uncorrelated space. This surrogate model is then used to perform the SA summarized in Alg. 3.
### Interpretation of the Sensitivity Indices
The sensitivity indices computed following the method presented in Alg. 2 and 3 need to be interpreted differently compared to their counterparts computed without any parameter dependencies (see Sec. 3.1 and 3.2). One needs to consider the fact that the parameter transformations effectively introduce new variables, which, in the case of linear dependencies, are combinations of the original ones. Consequently, a resulting index either includes the effects of the parameter itself together with its dependence on other inputs, or it represents the sensitivity index without the mutual dependent contributions with other parameters. When applying the transformation, a particular ordering of the parameters is assumed, e.g., the natural ordering \(\mathcal{P}_{1}=(1,2,\ldots,D)\) with the parameters \(\mathcal{P}_{1}\mathbf{Q}=(Q_{1},Q_{2},\ldots\,Q_{D})\). The transformation is then applied sequentially, where the first parameter is kept unmodified, while the others are transformed according to the particular correlation structure. Considering a vector of the input parameters \(\mathcal{P}_{1}\mathbf{Q}\), the correlated vector is formed as \[\begin{aligned}Q_{1}^{*}&=Q_{1},\\ Q_{2}^{*}&=Q_{2}|Q_{1},\\ Q_{3}^{*}&=Q_{3}|Q_{1}Q_{2},\\ &\;\;\vdots\\ Q_{D}^{*}&=Q_{D}|Q_{1}Q_{2}\ldots Q_{D-1}.\end{aligned} \tag{21}\] The resulting sensitivity indices obtained by applying the SA with correlations using the transformed samples \(\mathbf{Q}^{*}=(Q_{1}^{*},Q_{2}^{*},\ldots\,Q_{D}^{*})\) need to be interpreted differently, since different variables have been used compared to the original variables \(\mathbf{Q}\). One needs to distinguish between the Full and Independent indices. The Full index includes the effects of the parameter itself together with its dependence on all other inputs.
On the other hand, the Independent index represents the contribution of a parameter without its mutual dependent interactions with other parameters. Using the permutation \(\mathcal{P}_{1}\), the Full index for the parameter \(Q_{1}\) is obtained, together with the Independent index for the parameter \(Q_{D}\). The Full index is obtained for the first parameter in the permuted vector \(\mathcal{P}_{1}\mathbf{Q}\), while the Independent index corresponds to the last parameter in the permuted vector. The sensitivity indices of the remaining variables in the vector \(\mathcal{P}_{1}\mathbf{Q}\), that is \((Q_{2},\ldots\,Q_{D-1})\), express the marginal contribution of \(Q_{i},\ i=2,\ldots,D-1\), to the output variance without its correlative contributions with the parameters \(Q_{j},\forall j:j<i\). Thus, under the permutation \(\mathcal{P}_{1}\) the Full index for the parameter \(Q_{1}\) is defined as \[S_{1}=\frac{\mathbb{V}(\mathbb{E}(Y_{n}|Q_{1}^{*}))}{\mathbb{V}(Y_{n})}, \tag{22}\] while the Independent index for the parameter \(Q_{D}\) is defined as \[S_{D}=\frac{\mathbb{V}(\mathbb{E}(Y_{n}|Q_{D}^{*}))}{\mathbb{V}(Y_{n})}. \tag{23}\] Note that the Full index is computed for the parameter \(Q_{1}^{*}=Q_{1}\), which is chosen from its marginal distribution \(\rho_{Q_{1}}\), and that it carries mutual contributions to the total variance due to the dependence on the other parameters \(Q_{j},\ j>1\). On the other hand, the Independent index for the parameter \(Q_{D}^{*}=Q_{D}|Q_{1}Q_{2}\ldots Q_{D-1}\) does not contain the mutual contributions with other parameters, since the parameter was drawn from the conditional distribution \(\rho_{Q_{D}|Q_{1}Q_{2}\ldots Q_{D-1}}\). In order to compute the remaining Full and Independent indices, different permutations need to be used, for example \(\mathcal{P}_{2}=(2,\ldots,D,1)\), such that \(\mathcal{P}_{2}\mathbf{Q}=(Q_{2},Q_{3},\ldots\,Q_{D},Q_{1})\), from which the Full index of the parameter \(Q_{2}\) and the Independent index of \(Q_{1}\) can be determined. Overall, there exist \(D!\) different permutations. However, both indices for all parameters can be obtained by circularly reordering the input vector \(\mathbf{Q}\), i.e., performing the SA \(D\) times in total, as summarized in Tab. 1.
## 5 Application Model
The coffee cup model [4] simulates the cooling process of a liquid contained in an open container. The model uses Newton's law of cooling to evolve the temperature \(T\) over the simulation time \(t\), \[\frac{dT(t)}{dt}=-\kappa(T(t)-T_{env}). \tag{24}\] The parameter \(\kappa\) characterizes the container holding the liquid and the rate at which it dissipates heat to the environment. The ambient temperature of the environment is represented by the parameter \(T_{env}\), while the initial temperature of the liquid is specified by the constant \(T_{0}=95^{\circ}\)C. The SA of the \(\kappa\) and \(T_{env}\) parameters is performed in this study. Due to measurement error, insufficient knowledge of the physical model, or other reasons, the parameters \(\kappa\) and \(T_{env}\) cannot be assigned exact numerical values representing the modeled physical system. Instead, the parameters are modeled as uncertain and described with probability distributions.
A normal distribution \(\mathcal{N}(\mu,\sigma)\) is assumed in this work, with a given mean \(\mu\) and standard deviation \(\sigma\) for each parameter, \[\kappa\sim\mathcal{N}(0.05,0.008), \tag{25}\] \[T_{env}\sim\mathcal{N}(20,1.5).\] On top of the uncertainty in the individual parameters, these parameters might be correlated with each other. The correlation captures a physical property of the container's material and its heat transfer rate, which changes depending on the ambient temperature of the environment. For example, as the ambient temperature \(T_{env}\) increases, the material dissipates the heat more efficiently, which also increases the value of the parameter \(\kappa\). The stochastic dependency of the two parameters is described using a correlation matrix \(C\) with the correlation between the parameters specified by \(\rho_{\mathcal{C}}\), \[C=\begin{pmatrix}1.0&\rho_{\mathcal{C}}\\ \rho_{\mathcal{C}}&1.0\end{pmatrix}. \tag{26}\] Fig. 1 illustrates the probability density function of the parameters, both with and without the correlation. The goal of the SA is to analyze the impact of the uncertain parameters on the outcome of the model, considering also the correlation between the parameters.
### Software Tools and Libraries
The VECMA toolkit, or VECMAtk [20], is used to manage the simulations required for the analysis. It enables automated verification, validation and UQ for complex applications, irrespective of their source domain. VECMAtk is optimized for large-scale computations and can be deployed on emerging high-performance computing (HPC) platforms. The toolkit has previously been used for a range of applications, such as a COVID model [21] (with a computational complexity on the order of \(10^{4}\) core hours per experiment), a molecular dynamics model [22] (whose experiments consumed \(2\cdot 10^{6}\) core hours), and a range of other applications [6]. The EasyVVUQ package [16], a component of the VECMA toolkit, has been developed to facilitate forward UQ for HPC applications. EasyVVUQ supports the definition of custom UQ and SA procedures, which may include sampling and analysis, without requiring users to modify their core applications. It has been applied successfully to a diverse set of applications and is able to cope with procedures that require thousands of simulation runs. EasyVVUQ is open source and written in Python 3.
## 6 Numerical Experiments
Numerical experiments are performed using the model introduced in Sec. 5. The initial condition for the differential equation (24) used hereafter is \(T_{0}=95^{\circ}\)C. The simulation time covers the first \(t=200\) minutes of the cooling process, with the time discretized into 150 time steps of length \(\Delta t=80\,s\). The parameter distributions used in the numerical experiments, if not stated otherwise, are defined in Eqs. (25) and (26). The surrogate model is constructed using polynomials up to the third order, unless specified otherwise.
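A standalone sketch of the modified workflow (Alg. 2) applied to the coffee cup model is given below. The actual experiments are orchestrated with EasyVVUQ, so this self-contained script, including the explicit scaling and Cholesky transformation of the Hammersley nodes into the physical distributions of Eqs. (25)-(26), is only an illustration.

```python
import numpy as np
import chaospy as cp
from scipy.integrate import odeint

T0 = 95.0                                    # initial temperature, deg C
t = np.linspace(0, 200, 150)                 # 150 steps over 200 minutes

def coffee_cup(q):
    kappa, T_env = q
    rhs = lambda T, t: -kappa * (T - T_env)  # Newton's law of cooling, Eq. (24)
    return odeint(rhs, T0, t).flatten()

# Alg. 2: sample and build the basis in the independent standard-normal space
dist = cp.J(cp.Normal(0, 1), cp.Normal(0, 1))
expansion = cp.generate_expansion(3, dist)
nodes = dist.sample(2 * len(expansion), rule="hammersley")

# Transform the nodes to the correlated physical space, q* = T(q)
rho_C = 0.4
L = np.linalg.cholesky([[1.0, rho_C], [rho_C, 1.0]])
mu = np.array([0.05, 20.0])                  # means of kappa and T_env
sigma = np.array([0.008, 1.5])               # standard deviations
nodes_phys = mu[:, None] + sigma[:, None] * (L @ nodes)

# Evaluate the true model on the correlated samples and regress them onto
# the independent-space basis, cf. Eq. (20)
evals = np.array([coffee_cup(q) for q in nodes_phys.T])
surrogate = cp.fit_regression(expansion, nodes, evals)
```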
\begin{table}
\begin{tabular}{l|c|c|c}
Permutation & Full Index & Marginal Indices & Independent Index \\ \hline
\(\mathcal{P}_{1}=(1,2,3,\ldots,D)\) & \(Q_{1}\) & \(Q_{2},\ldots,Q_{D-1}\) & \(Q_{D}\) \\
\(\mathcal{P}_{2}=(2,3,\ldots,D,1)\) & \(Q_{2}\) & \(Q_{3},\ldots,Q_{D}\) & \(Q_{1}\) \\
\(\mathcal{P}_{3}=(3,\ldots,D,1,2)\) & \(Q_{3}\) & \(Q_{4},\ldots,Q_{D},Q_{1}\) & \(Q_{2}\) \\
\(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\
\(\mathcal{P}_{D}=(D,1,2,\ldots,D-1)\) & \(Q_{D}\) & \(Q_{1},\ldots,Q_{D-2}\) & \(Q_{D-1}\) \\
\end{tabular}
\end{table}
Table 1: Sensitivity indices for different parameter permutations \(\mathcal{P}_{i}\).
### Surrogate Models
The polynomial surrogates of the model (24) are examined in the vicinity of the mean value of the parameters (25). The surrogate model is built for each time instant of the discretized time horizon, depicting the model output as a function of the particular values of the input parameters. The surrogate models of the coffee cup at various time instants \(t\) are illustrated in Fig. 2, demonstrating the effect of the correlation in the parameters. Note that while the difference between the two models is small near the beginning of the simulation time, the gap between the two grows as time progresses. The changes in the final temperature profile are amplified by the interaction of the parameters within the model over time, thus the effect of the correlation is particularly visible at advanced simulation times, i.e., \(t>20-30\,min\). The absolute difference between the uncorrelated and correlated surrogate models, \(e=\hat{U}_{\rho_{\mathcal{C}}}-\hat{U}_{\rho_{0}}\) with \(\rho_{\mathcal{C}}=0.8\), is illustrated in Fig. 3. It is also important to highlight the different curvatures of the surrogate models, since during the derivative-based analysis a partial derivative of the surrogate with respect to a parameter is evaluated at the mean value of the parameters. As before, the curvature difference between the two models \(\hat{U}_{\rho_{\mathcal{C}}}\), \(\hat{U}_{\rho_{0}}\) grows as the simulation time proceeds.
### SA with Uncorrelated Parameters
When the correlation matrix \(\mathcal{C}\) is an identity matrix, i.e., there is no correlation between the parameters, the model evolution is shown in Fig. 4. The model variance due to the uncertainty in the input parameters is shown as well. Note that the model variance at the initial point is zero, thus the Sobol indices are not defined at this time instant.
#### Variance-based Indices
The corresponding sensitivity indices are shown in Fig. 5, replicating the values of the variance-based Sobol indices from previous works, e.g., [4]. The differences between the first and total order variance-based indices, shown in the left panel of Fig. 5, are less than \(10^{-4}\), indicating that there are no higher order parameter interactions. The \(\kappa\) parameter is the most influential during the first 75 minutes, while the ambient temperature parameter dominates in the remaining simulation time. After reaching near equilibrium, i.e., when the difference between the ambient and the coffee cup temperature is less than \(\approx 0.1^{\circ}\)C, the ambient temperature parameter explains nearly all of the output variance, as shown in Fig. 4. Intuitively, this is an expected behavior of the model, since the end state of the coffee cup after reaching the equilibrium is the environment temperature. Since there are no higher order interactions, the first order Sobol indices add up to one.
Because the first-order indices sum to one, the behavior of the indices is necessarily complementary for the two parameters, i.e., if one index is increasing, the other is proportionally decreasing and vice versa. #### Derivative-based Indices The derivative-based indices, shown in the right panel of Fig. 5, provide insight into the model in the vicinity of a fixed point, in this case the mean value of the model parameters. Following the definition in Eq. (11), the values of the derivative-based index correspond to the slope of a tangent line to the model surface at the given spatial point and time instant. The magnitudes of the individual derivative-based sensitivity indices differ by more than two orders of magnitude, thus the sensitivity indices are also shown on a logarithmic scale (considering their absolute values) in Fig. 6. Note that the sensitivity of the parameter \(T_{env}\) is near zero at the beginning of the simulation, which reflects the fact that it has a very small contribution to the model output, as the temperature of the coffee cup is driven mainly by the heat transfer constant. As time progresses, the sensitivity of \(T_{env}\) increases and approaches one, meaning that a change of the ambient temperature will have a proportional effect on the model output. This reflects the fact that the final temperature of the coffee cup is equal to the ambient temperature, thus a change in the ambient temperature induces an equal change in the final state of the coffee cup. Figure 4: Statistical moments of the coffee cup model. Figure 5: First-order Sobol and derivative-based indices considering the independent parameters. Figure 6: Derivative-based indices using linear and logarithmic y-axis scale. Note that the absolute values of the sensitivity indices are used in the latter case. On the other hand, the sensitivity of the parameter \(\kappa\) is significantly larger, but it decreases over the simulation time, since the heat transfer is driven mainly by the temperature gradient between the coffee cup and the surrounding environment, which is largest at the beginning of the simulation. As this temperature differential decreases, the heat transfer becomes less significant. Note also the negative value of the sensitivity index, meaning that as the heat transfer parameter \(\kappa\) increases, the output of the model, that is, the coffee cup temperature, decreases due to a larger effect of the heat transfer. ### SA with Parameter Dependency Next, the correlation matrix \(\mathcal{C}\) is modified such that the off-diagonal elements are no longer zero, indicating parameter correlation. If not stated otherwise, the numerical experiments use the value \(\rho_{\mathcal{C}}=0.4\) for the Pearson correlation coefficient. The ordering of the indices in SA with correlations becomes important and the SA needs to be performed for different permutations, as detailed in Sec. 4.4. #### Variance-based Indices The Sobol indices for the correlated parameters and their difference relative to the baseline experiment with independent parameters are shown in Fig. 7. In order to obtain the complete set of the Full order and Independent indices for both parameters, the SA needs to be executed twice, each time with a different parameter permutation. First, the permutation \(\mathcal{P}_{1}=(\kappa,T_{env})\) is used to obtain the Full order index of \(\kappa\) and the Independent index for the parameter \(T_{env}\).
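As a brief aside before the correlated permutation analysis continues: since Eq. (11) is simply a partial derivative of the surrogate evaluated at the parameter mean, the raw (unnormalized) derivative-based indices of the uncorrelated case can be approximated directly with central finite differences on the model itself. The sketch below does this for the analytic cooling model; the mean values \(\kappa=0.05\,/min\) and \(T_{env}=20^{\circ}\)C are assumed placeholders, not the paper's calibration.

```python
import numpy as np

def coffee_cup(theta, t, T0=95.0):
    kappa, T_env = theta
    return T_env + (T0 - T_env) * np.exp(-kappa * t)

def derivative_indices(theta_mean, t, h=1e-6):
    """Central finite differences of the model output with respect to
    each parameter, evaluated at the mean (cf. Eq. (11))."""
    theta_mean = np.asarray(theta_mean, dtype=float)
    grads = []
    for i in range(theta_mean.size):
        e = np.zeros_like(theta_mean)
        e[i] = h
        grads.append((coffee_cup(theta_mean + e, t)
                      - coffee_cup(theta_mean - e, t)) / (2.0 * h))
    return grads

# Assumed (placeholder) means: kappa = 0.05 /min, T_env = 20 deg C.
for t in (1.0, 30.0, 120.0):
    dk, dT = derivative_indices([0.05, 20.0], t)
    print(f"t = {t:5.1f} min: dU/dkappa = {dk:8.2f}, dU/dT_env = {dT:5.3f}")
```

The output matches the behavior described in the text: the \(\kappa\) derivative is large in magnitude and negative early on and decays toward zero, while the \(T_{env}\) derivative grows from near zero toward one.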
Using the second permutation, \(\mathcal{P}_{2}=(T_{env},\kappa)\), the Full order index of \(T_{env}\) and the Independent index for the parameter \(\kappa\) are obtained. The Full order and Independent indices for the parameter \(\kappa\) are shown in Fig. 7, comparing them to the sensitivity index shown in the previous section with uncorrelated parameters. Considering the Full order index of the \(\kappa\) parameter (permutation \(\mathcal{P}_{1}\)), the contribution of this parameter to the output variance near the end of the simulation time is increased compared to the independent case. Since the parameters are positively correlated, increasing the value of parameter \(\kappa\) induces growth also in the \(T_{env}\) parameter, thus increasing the end-state equilibrium temperature of the coffee cup. Previously, there was no such interaction of the parameters, thus the variance-based index of the \(\kappa\) parameter was zero at the end of the simulation time. However, the Full index is lower around the simulation time \(t\approx 100\,min\) compared to the uncorrelated case. This is due to the fact that increasing \(\kappa\) induces growth of the \(T_{env}\) parameter, which in turn decreases the temperature gradient. Considering the dynamics of the model in (24), the induced increase of the ambient temperature counteracts the elevated heat transfer, thus the sensitivity of the heat transfer parameter has decreased. When the Independent index is considered (permutation \(\mathcal{P}_{2}\)), the effect of the correlation is removed and the index is nearly identical to the independent case in the simulation time \(t>100\,min\). However, in the simulation time around \(t\approx 50\,min\) the sensitivity of the \(\kappa\) parameter has increased, thus emphasizing the importance of the parameter at the time instants when the temperature gradient is large. Fig. 7 also shows the absolute difference of the Full order and Independent index compared to the uncorrelated case, in order to illustrate the magnitude of the difference. The behavior of the first order Sobol index for the \(T_{env}\) parameter, as shown in Fig. 7, is opposite to that of the \(\kappa\) parameter. The Full order index (permutation \(\mathcal{P}_{2}\)) matches the independent case at the end of the simulation time (since it was already at the maximum value of 1). Removal of the contribution of the correlation decreases the value of the index. Figure 7: Variance-based indices considering correlated parameters (\(\rho_{\mathcal{C}}=0.4\)) and the absolute difference with the uncorrelated indices from Fig. 5. The magnitude of the change is proportional to the difference in the \(\kappa\) parameter indices (Full vs. Independent index). This behavior of the indices can be interpreted such that a portion of the output variance can be explained by both parameters simultaneously since they are correlated. It can be equally said that some output variance is explained either by one or the other parameter. In the extreme case of perfect correlation, \(\rho_{12}=1\), it is equivalent to say that the output variance is explained either by one or the other parameter, since the value of one parameter completely determines the value of the other. It is also interesting to observe the complement of the indices to one, shown in Fig. 8. Consider the Full index of the \(\kappa\) parameter, as shown in Fig. 8(a).
Its complement to one explains the output variance contributed by the other parameter alone, without its correlated contribution with \(\kappa\). In the case of two parameters, this complement is the Independent Sobol index of the \(T_{env}\) parameter. In the general case with a set of \(D\) parameters, the complement of the Full index of parameter \(i\) explains the amount of variance contributed by the remaining \(D-1\) parameters without their correlated contribution with \(i\). A similar relationship is observed between the complement of the Full index of \(T_{env}\) and the Independent index of \(\kappa\) in Fig. 8(b). Note the presence of a numerical error in this case; in order to eliminate it, the indices should be computed with higher order polynomials in the PCE analysis (see Sec. 6.5). Similarly, Figs. 8(c) and 8(d) illustrate the relationship between the complement of the Independent indices and the Full indices of the other parameter. #### Derivative-based Indices The behavior of the derivative-based indices in the correlated case for the parameter \(\kappa\) is shown in Fig. 9(a). In Sec. 6.2 it was shown that the significance of the heat transfer diminishes toward the end of the simulation time, \(t>150\,min\), and the value of the derivative-based index approaches zero. This is due to the fact that the final equilibrium is completely determined by the ambient temperature parameter \(T_{env}\). However, when we consider correlation between the parameters and the Full order index (permutation \(\mathcal{P}_{1}\)), the significance of the parameter \(\kappa\) is increased, since the Full order index also includes the interaction with the other parameters due to the correlations. In physical terms, it can be interpreted as follows: when the parameter \(\kappa\) is increased, the ambient temperature \(T_{env}\) will be increased due to their positive correlation \(\rho_{\mathcal{C}}=0.4\). Consequently, the final temperature of the coffee cup will be increased as well, and this is reflected accordingly in the Full order index of the \(\kappa\) parameter, which is no longer zero as in the uncorrelated case. It is also interesting to observe the behavior around the simulation time \(t\approx 20\,min\). Note that the magnitude of the Full order index is reduced compared to the uncorrelated case. The reason for this is again the correlated interaction with the \(T_{env}\) parameter. Studying the effect of increasing \(\kappa\) also incurs an increase in \(T_{env}\) due to their correlation. This, however, reduces the temperature gradient when assuming a constant initial temperature of the coffee cup; thus the cooling process is slowed, even though the heat transfer coefficient was increased. This effect is represented by the reduced magnitude of the Full order index of the \(\kappa\) parameter around the simulation time \(t\approx 20\,min\). Figure 8: Complementary behaviors of the indices. Figure 9: Derivative-based indices considering dependent parameters with correlation \(\rho_{\mathcal{C}}=0.4\) and the absolute difference with the uncorrelated indices. Similar logic applies when considering the Independent index. Consider the Independent index of the \(T_{env}\) parameter (permutation \(\mathcal{P}_{1}\)) in Fig. 9(b). In the uncorrelated case, the equilibrium near the end of the simulation time, \(t>150\,min\), was completely determined by the \(T_{env}\) parameter.
In the correlated case, after removing the effect of the correlation, the Independent index is proportionally reduced, since a part of the ambient temperature growth was induced by the effect of the \(\kappa\) parameter, and the Independent index eliminates these parameter interactions. ### Parameters with Increasing Correlation It is important to understand the effect of the correlation on the values of the indices. In this section, the correlation \(\rho_{\mathcal{C}}\) is gradually increased in increments of \(0.2\), ranging from zero all the way to one (i.e., from no correlation up to perfect correlation). The largest value of the correlation is slightly reduced, \(\rho_{\mathcal{C}}=1.0-\epsilon,\;\epsilon=10^{-10}\), in order to preserve positive definiteness of the correlation matrix \(\mathcal{C}\). #### 6.4.1 Variance-based Sensitivity The study of the First order Sobol indices is first performed considering the permutation \((\kappa,T_{env})\), which is used to compute the Full Sobol index for \(\kappa\) and the Independent index for \(T_{env}\), shown in Fig. 10. It can be observed that as the correlation increases, the Full index of \(\kappa\) at \(t>125\,min\) in Fig. 10(a) is gradually increasing, while at the same time the Independent index of the other parameter in Fig. 10(b) proportionally decreases. This reflects the fact that, due to the correlation, the Full index of \(\kappa\) becomes gradually more significant because of its correlation with the \(T_{env}\) parameter, not because of the parameter \(\kappa\) itself. On the other hand, the amount of the variance explained by \(T_{env}\) alone decreases with increasing correlation, because of its interaction with \(\kappa\). In the limit situation when \(\rho_{12}=1\), the parameters alone become insignificant and all of the variance is explained by their correlated interaction. Note that the Independent index is zero (Fig. 10(b)) while the Full index is one (Fig. 10(a)). A similar effect can be observed using the permutation \((T_{env},\kappa)\), used to compute the Independent Sobol index for \(\kappa\) and the Full index for \(T_{env}\). Figure 10: Sobol indices considering dependent parameters with increasing correlation \(\rho_{\mathcal{C}}\) and permutation \((\kappa,T_{env})\), showing also the difference with respect to the uncorrelated indices. #### 6.4.2 Derivative-based Sensitivity The effect of increasing correlation for the derivative-based sensitivity indices is shown in Fig. 11. It shows the indices obtained from the permutation \((\kappa,T_{env})\), which corresponds to the permutation \(\mathcal{P}_{1}\) in Fig. 9. It can be seen that the increasing correlation intensifies the effects described in Sec. 6.3. Note that in the extreme case of correlation \(\rho_{\mathcal{C}}=1.0\) the Independent index becomes zero across the whole simulation, as the parameter \(T_{env}\) is completely explained by the parameter \(\kappa\). Note that, as opposed to the variance-based sensitivity indices, this does not mean that the Full order index is equal to one across the simulation time, since the range of the index values is not bounded to the interval \((0,1)\), nor is there any property similar to Eq. (10). ### Convergence Analysis The convergence of the QMC method, which is later used as a reference for the PCE method, is tested. The QMC method is run with an increasing number of samples, and the resulting first-order indices are shown in Fig. 12.
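Such a sample-doubling convergence loop can be sketched as follows; this is our own illustration using SciPy's scrambled Sobol sequences and the same increment-below-0.05 stopping idea, with the tracked quantity (here the output variance at a fixed time) standing in for the full index computation, and with assumed placeholder parameter ranges.

```python
import numpy as np
from scipy.stats import qmc

def coffee_cup(kappa, T_env, t=30.0, T0=95.0):
    return T_env + (T0 - T_env) * np.exp(-kappa * t)

prev = None
for m in range(8, 15):                            # 2^8 ... 2^14 QMC samples
    pts = qmc.Sobol(d=2, scramble=True, seed=0).random(2 ** m)
    kappa = 0.02 + 0.06 * pts[:, 0]               # illustrative uniform ranges
    T_env = 15.0 + 10.0 * pts[:, 1]
    est = np.var(coffee_cup(kappa, T_env))        # tracked summary statistic
    if prev is not None and abs(est - prev) < 0.05:
        print(f"converged at N = 2^{m} (increment below 0.05)")
        break
    prev = est
```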
The absolute difference between the indices in Fig. 12 is well below the significance threshold of \(0.05\), thus the method is considered to have converged. The rather arbitrary value of \(0.05\) is frequently accepted in this type of analysis for distinguishing important parameters from unimportant ones [23], thus a similar idea can be applied to declare that a method has converged. The PCE method is run with an increasing polynomial order, ranging from 2nd to 7th order. Fig Figure 11: Derivative-based indices considering dependent parameters with increasing correlation \(\rho_{\mathcal{C}}\) and permutation \((\kappa,T_{env})\), showing also the difference relative to the uncorrelated indices. Figure 12: Convergence of the QMC method for Sobol indices using permutation \((\kappa,T_{env})\) with correlation \(\rho_{\mathcal{C}}=0.417\).
2309.06979
Auto-Regressive Next-Token Predictors are Universal Learners
Large language models display remarkable capabilities in logical and mathematical reasoning, allowing them to solve complex tasks. Interestingly, these abilities emerge in networks trained on the simple task of next-token prediction. In this work, we present a theoretical framework for studying auto-regressive next-token predictors. We demonstrate that even simple models such as linear next-token predictors, trained on Chain-of-Thought (CoT) data, can approximate any function efficiently computed by a Turing machine. We introduce a new complexity measure -- length complexity -- which measures the number of intermediate tokens in a CoT sequence required to approximate some target function, and analyze the interplay between length complexity and other notions of complexity. Finally, we show experimentally that simple next-token predictors, such as linear networks and shallow Multi-Layer Perceptrons (MLPs), display non-trivial performance on text generation and arithmetic tasks. Our results demonstrate that the power of today's LLMs can be attributed, to a great extent, to the auto-regressive next-token training scheme, and not necessarily to a particular choice of architecture.
Eran Malach
2023-09-13T14:15:03Z
http://arxiv.org/abs/2309.06979v3
# Auto-Regressive Next-Token Predictors are Universal Learners ###### Abstract Large language models display remarkable capabilities in logical and mathematical reasoning, allowing them to solve complex tasks. Interestingly, these abilities emerge in networks trained on the simple task of next-token prediction. In this work, we present a theoretical framework for studying auto-regressive next-token predictors. We demonstrate that even simple models such as linear next-token predictors, trained on Chain-of-Thought (CoT) data, can approximate any function efficiently computed by a Turing machine. We introduce a new complexity measure--length complexity--which measures the number of intermediate tokens in a CoT sequence required to approximate some target function, and analyze the interplay between length complexity and other notions of complexity. Finally, we show experimentally that simple next-token predictors, such as linear networks and shallow Multi-Layer Perceptrons (MLPs), display non-trivial performance on text generation and arithmetic tasks. Our results demonstrate that the power of language models can be attributed, to a great extent, to the auto-regressive next-token training scheme, and not necessarily to a particular choice of architecture. ## 1 Introduction Large language models have achieved tremendous progress in various NLP tasks, such as machine translation, logical reasoning, coding and natural language understanding. These models, like GPT-3, GPT-4 and LaMDA [6, 31, 39], are trained on massive amounts of text data and learn to generate coherent and contextually relevant responses to input prompts. Amazingly, such language models are mostly trained with a single objective: predicting the next token. While this objective seems extremely simplistic, auto-regressive next-token predictors trained on rich enough data are able to solve strikingly complex tasks [7]. This raises the question of whether such next-token predictors are merely "glorified" autocomplete models, which happened to memorize the entire internet, or whether they are truly performing novel logical reasoning. To this end, it has been shown that the ability of language models to compute complex functions can be greatly enhanced by using chain-of-thought [44, 17, 20] and scratchpad [30] techniques, allowing the network to perform unrestricted intermediate computations before arriving at a final answer. In this work, we introduce a theoretical framework for studying auto-regressive next-token predictors. We demonstrate that much of the power of today's language models in logical reasoning can be attributed to the nature of the auto-regressive learning, and not to a particular choice of architecture. We show theoretically that very simple models trained to only predict the next token in an auto-regressive fashion can be used to solve extremely complex tasks when utilizing chain-of-thought techniques. In particular, we show that even linear predictors--models where the next-token probability is a linear function of the input sequence--are already powerful enough to compute _any Turing computable function_.
The main theoretical result in the paper is captured in the following informal statement: **Theorem 1** (informal).: _For any function \(f\) that can be efficiently computed using a Turing machine, there exists a dataset \(D\) such that training a (linear) auto-regressive next-token predictor on \(D\) results in a predictor that approximates \(f\)._ That is, any computer program or intelligent agent that can be simulated by a computer, can be learned, given the right dataset, by a simple next-token predictor. To understand the power of auto-regressive learning, observe that a result equivalent to Theorem 1 is not possible in classical supervised learning, where the learner is given access only to the input sequence and the target label. It is well-known that no learning algorithm can efficiently learn the class of all (efficient) Turing computable functions [41], given only the input and the output of the function (without access to intermediate supervision). In fact, in classical supervised learning, there are only a few function classes that are known to be _efficiently learnable_--function classes for which there exists a learning algorithm that can efficiently recover the target function given a labeled dataset. Learnable function classes are known to have fundamental limitations to their computational capacity. For example, the class of linear predictors is efficiently learnable in many settings, e.g. using the Perceptron algorithm [34]. However, a famous result in [27] shows that linear predictors cannot compute simple functions such as the XOR function. Auto-regressive learning, however, presents a striking difference. While linear next-token predictors are still _efficiently learnable_ using simple algorithms such as SGD, their computational capacity greatly surpasses the capacity of their _classical_ counterparts. Since auto-regressive inference introduces a sampling function1 after each step, it allows linear next-token predictors to compute non-linear functions. As implied by Theorem 1, linear next-token predictors can implement practically any target function of interest. Footnote 1: In our analysis we focus on the zero-temperature/argmax sampling, which acts as an explicit non-linearity. While next-token predictors have the capacity to generate highly proficient learners, this does not come without a cost. One significant requirement is to provide the learning model with potentially long sequences of tokens that detail the internal computations of the target. This requirement can be resource-intensive and often impractical. As such, it prompts the introduction of a new measure of learning complexity, analogous to sample complexity or run-time complexity: the _length complexity_. This type of complexity measures the quantity of intermediate tokens in a CoT necessary for the model to learn a particular concept class. We explore this complexity in the context of the parity learning problem, an extension of the XOR problem that is known to be computationally hard to learn in some settings. We demonstrate how traditional forms of complexity, such as sample or run-time complexity, can be traded off with length complexity when learning parities. Specifically, we show that an _increase_ in the complexity of the hypothesis class--and therefore in sample or computational complexity--leads to a _decrease_ in length complexity. This opens up a new path for the theoretical investigation of auto-regressive learning, by studying the interplay between these different complexity measures.
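To make the role of the sampling non-linearity concrete, consider the following miniature (our own illustration, not taken from the paper): over \(\mathbb{D}=\{0,1\}\) with zero-temperature sampling, a first linear AR step can emit the intermediate token \(z_{1}=\mathbf{1}\left[x_{1}+x_{2}-1.5\geq 0\right]=\mathrm{AND}(x_{1},x_{2})\), since over two tokens the indicator is just an argmax between a linear score and zero. A second linear step over the extended context then emits \(z_{2}=\mathbf{1}\left[x_{1}+x_{2}-2z_{1}-0.5\geq 0\right]=\mathrm{XOR}(x_{1},x_{2})\). Each step is linear in its context, yet the two-step auto-regressive composition computes XOR, precisely the function that a single linear predictor cannot express.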
To substantiate our theoretical results, we perform several experiments that illustrate the power of auto-regressive learning in enhancing the performance of simple models. We train a linear next-token prediction network on the TinyStories dataset [11], a collection of short stories composed of simple words. We observe that linear models, once trained on this dataset, frequently generate plausible and grammatically sound stories. Next, we demonstrate that a shallow Multi-Layer Perceptron (MLP) with 775M parameters (no attention layers) can learn to correctly multiply two 4-digit numbers, given chain-of-thought data. Our MLP outperforms GPT-4 in this task, and achieves comparable results to Goat, a 7B-parameter transformer that was trained to solve arithmetic tasks [23]. ### Related Work **Chain-of-Thought Reasoning.** The proposition of supervising intermediate logical steps as an effective approach for problem-solving is well established, predating the advent of Transformer models. The technique was found to be particularly beneficial in solving arithmetic problems [35]. This idea became very popular with the introduction of the Chain-of-Thought (CoT) approach, where models are prompted to elucidate their thought process prior to yielding a final outcome [44, 17, 20]. Recent developments have further demonstrated the efficacy of the CoT method in the training of smaller student models [19]. Another method that bears similarity to CoT is the "scratchpad" technique, which allows models to record intermediate computations that subsequently aid in deriving the final answer [30]. Such techniques have been shown to enhance performance across a variety of logical reasoning and arithmetic tasks. The research presented in this paper aims to contribute to the theoretical understanding of CoT reasoning in auto-regressive models. Our work illustrates how the employment of CoT can significantly amplify the capabilities of simple models. Furthermore, we introduce a novel complexity measure, the _length complexity_, which allows us to study the influence of the length of the intermediate sequence of tokens within CoT on the difficulty of the learning problem. **Language Models for Arithmetic Tasks.** Leveraging large language models to tackle mathematical reasoning and arithmetic tasks has gained significant interest, a trend that is discussed at length in a recent survey [24]. While these models have demonstrated a promising capacity for solving an array of mathematical problems, they often encounter difficulties in executing straightforward arithmetic operations, such as the multiplication and addition of large numbers [29, 33]. Previous studies have suggested that the efficiency of language models in arithmetic tasks can be dramatically enhanced by structuring them to perform calculations using an algorithmic pipeline, facilitating step-by-step execution [28]. A notable contribution in this realm is the recent work by [23], where they fine-tuned a moderately sized (7B-parameter) transformer employing the CoT method to perform complex arithmetic operations, including the multiplication of large numbers--a challenge even for advanced models like GPT-4. A very recent work studies the ability of small transformers trained from scratch to solve arithmetic tasks [18].
In our study, we further substantiate this claim by demonstrating that a small MLP, devoid of any attention mechanism, can match the performance of the transformer in [23] in 4-digit multiplication, provided that it receives appropriate intermediate supervision. This highlights that the capability of language models for arithmetic and mathematical reasoning is largely attributable to the CoT and next-token prediction techniques, rather than the specific architectural choice. **Beyond Transformers.** Although the transformer architecture [42] currently stands as the leading approach in language modeling, it is noteworthy that a diverse range of other architectures have served this purpose over time. A notable instance is the application of Recurrent Neural Networks (RNNs) [13], a model highly popular for language modeling only a few years back, due to its efficient and inherent sequence processing capabilities [26]. Furthermore, convolutions have also been explored for language modeling tasks [9]. A work more related to our own leveraged linear dynamical systems to model text [3]. Recent years have witnessed an emerging interest in substituting the attention layer of transformers, primarily due to its high computational cost, with simpler and more efficient alternatives. In this vein, the work of [14] introduced the linear transformer, where the attention layer was replaced with a more computationally-friendly linear layer. Concurrently, [47] advanced an Attention-Free Transformer. More recent advancements include the RWKV architecture [32], a modern variant of the RNN architecture inspired by transformers, which exhibits competitive performance when trained on large datasets. Some studies have proposed the use of simpler MLP-based architectures as feasible alternatives to transformers [40, 22]. Our work contributes to this ongoing discourse by conducting both theoretical and empirical investigations into the potential of very simple models, such as linear models and small MLPs, training them to solve complex tasks by leveraging the power of next-token auto-regressive learning. **Related Theoretical Work.** Despite the rapid pace of practical advancements in the realm of language models and transformers, the theoretical underpinning remains comparatively unexplored. Early investigations have established the universality of transformers (i.e., their ability to emulate any Turing machine) given the incorporation of a recurrent module [46, 43]. More recently, it has been demonstrated that transformers can simulate universal computers when incorporated into an execution loop [12]. The work of [21] shows that Transformers can simulate Automata, which are equivalent to bounded-memory programs, using surprisingly few layers. Turing universality extends to other language modeling architectures, such as RNNs [38]. A study by [10] underscores the inductive biases of self-attention, demonstrating that bounded-norm Transformer networks can represent sparse functions of the input sequence with logarithmically scaling sample complexity. Of particular relevance to our study is the work of [45], which delves into how sub-task decomposition and the CoT technique can facilitate the learning of computationally challenging problems. Similarly to our study, [45] also explores parity learning with intermediate supervision and demonstrates that arbitrary Turing machines can be efficiently learned by language models trained with CoT.
Our work extends these findings, introducing a theoretical framework that enables broader examination of auto-regressive learning. We show that even simple models, such as linear predictors, can efficiently learn Turing computable functions. In addition, our results offer improved length complexity bounds for learning parities, indicating that parities can be learned using \(O(\log n)\) intermediate tokens, a marked reduction from the \(O(n)\) intermediate tokens shown in [45]. ## 2 Theory The key principle in our theoretical results is the differentiation between "classical" supervised learning and auto-regressive learning. In supervised learning, there is a clear separation between the input and the label (or target). The learner gets a dataset of inputs with their labels, and needs to find a model that correctly predicts the label given a new input example. While supervised learning tasks can sometimes be easy (e.g., when the label is given by a linear function of the input features), this task becomes very hard, or even impossible, when the function used for generating the labels requires a complex computational process [41]. This hardness stems from the fact that the internal computation is not available to the learner, who only observes the input and the corresponding final output. In auto-regressive learning, on the other hand, the situation is different. Auto-regressive learners get a sequence of tokens, and treat every token both as an input (for predicting future tokens) and as a label (for sequences of previous tokens). Coupling auto-regressive learning with the chain-of-thought technique results in a learning paradigm where the internal computations required for reaching the final answer become available to the learner both as inputs and as _labels_. This naturally allows supervision on intermediate steps in the computation/reasoning process, which greatly simplifies the learning task. In the following sections we detail our theoretical results. In Section 2.1 we formally define the framework of Auto-Regressive (AR) Learning and Learnability, in an analogous way to classical PAC Learning. We then show how PAC Learnable hypothesis classes can be used for constructing AR Learnable classes, and discuss the special case of linear classes (which are known to be efficiently PAC Learnable). In Section 2.2 we move on to discussing approximation results, namely understanding what types of function a given AR model can compute. To this end, we consider the function computed by the model to be the function mapping the input tokens to the final token(s), allowing the model to arbitrarily use internal computations in a chain-of-thought manner. Following this, we show that even linear AR models can compute very complex functions, for example emulating arbitrary Turing machines. Finally, in Section 2.3 we introduce _length complexity_, which measures how many intermediate tokens are required in order to learn to compute a given function. We show that using more intermediate tokens, i.e. increasing the length complexity, can reduce time/sample complexity, and vice-versa. ### Learnability Results Let \(\mathbb{D}\) be a finite set of tokens, let \(\mathcal{X}=\mathbb{D}^{n}\) be the space of contexts of \(n\) tokens, and let \(\mathcal{Z}=\mathbb{D}^{*}\) be a space of strings of tokens. For some \(t\), we denote \(\mathcal{Z}_{t}=\mathbb{D}^{t}\). An Auto-Regressive (AR) function \(h\) is a mapping \(\mathcal{X}\times\mathcal{Z}\to\mathbb{D}\) (we assume a deterministic function).
An AR hypothesis class \(\mathcal{H}\) is a set of AR functions. For some distribution \(\mathcal{D}\) over \(\mathcal{X}\times\mathcal{Z}_{T}\), we say that \(\mathcal{D}\) is _realizable_ by the AR class \(\mathcal{H}\) if there exists a function \(h\in\mathcal{H}\) such that, with probability \(1\) over \((\mathbf{x},\mathbf{z})\sim\mathcal{D}\), we have \(h(\mathbf{x},\mathbf{z}_{<t})=z_{t}\) for all \(t\leq T\) (where \(\mathbf{z}_{<t}\) denotes the first \(t-1\) coordinates of \(\mathbf{z}\)). In other words, the pair \((\mathbf{x},\mathbf{z})\) is realizable by \(h\) if \(h\) accurately predicts the next token for all sub-sequences \(\mathbf{z}_{<t}\) of \(\mathbf{z}\). We now define Learnability in the AR framework: **Definition 2**.: _We say that \(\mathcal{H}\) is AR Learnable if there exists an algorithm that for every \(\epsilon,\delta\) and distribution \(\mathcal{D}\) realizable by \(\mathcal{H}\), given a sample of size \(m(\epsilon,\delta)\) from \(\mathcal{D}\), returns w.p. \(\geq 1-\delta\) a function \(\hat{h}\) s.t._ \[\Pr\left[\exists t\leq T\text{ s.t. }\hat{h}(\mathbf{x},\mathbf{z}_{<t})\neq z_{t}\right]\leq\epsilon\] _Furthermore, we say that \(\mathcal{H}\) is efficiently AR Learnable if it is AR Learnable with an algorithm running in polynomial time._ That is, a class \(\mathcal{H}\) is (efficiently) AR Learnable if there exists an (efficient) algorithm that finds, w.h.p., a next-token predictor with low error. We now show that hypothesis classes that are learnable in the classical sense (i.e., by supervised learning), naturally induce hypothesis classes that are AR Learnable. Let \(\mathcal{H}\) be some AR hypothesis class. We assume that \(\mathcal{H}\) can be decomposed into "standard" hypothesis classes in the following sense. Let \(\{\mathcal{H}_{t}\}_{t=1}^{\infty}\) be a sequence of classes, where \(\mathcal{H}_{t}\) is a class of functions \(\mathcal{X}\times\mathcal{Z}_{t-1}\mapsto\mathbb{D}\). We assume that \(\mathcal{H}=\mathcal{H}_{1}\times\mathcal{H}_{2}\times\dots\). Namely, we associate every \(h\in\mathcal{H}\) with a sequence \((h_{1},h_{2},\dots)\), where \(h_{i}\in\mathcal{H}_{i}\), s.t. for every \(\mathbf{x}\in\mathcal{X}\) and \(\mathbf{z}\in\mathcal{Z}_{t-1}\) we have \(h(\mathbf{x},\mathbf{z}_{<t})=h_{t}(\mathbf{x},\mathbf{z}_{<t})\). While we define \(\mathcal{H}\) on arbitrarily long sequences, when we study learnability we limit ourselves to discussing sequences of length at most \(T\)2. In particular, we can assume \(\mathcal{H}=\mathcal{H}_{1}\times\dots\times\mathcal{H}_{T}\). The following result shows that PAC Learnability of the underlying hypothesis classes (as defined e.g. in [36]) implies AR Learnability of the class \(\mathcal{H}\): Footnote 2: In Section 2.3 we study how the choice of \(T\) affects the complexity of the learning problem, but for now we treat \(T\) as a fixed parameter of the learning problem. **Theorem 3**.: _If \(\mathcal{H}_{1},\dots,\mathcal{H}_{T}\) are (efficiently) PAC Learnable with sample complexity \(m(\epsilon,\delta)\), then \(\mathcal{H}\) is (efficiently) AR Learnable with sample complexity \(m(\epsilon/T,\delta/T)\)._ The proof (shown in Appendix A) is a simple reduction using the standard notion of PAC Learnability. #### 2.1.1 Example: Linear Decoder From Theorem 3, efficiently learnable classes induce classes that are efficiently learnable in the Auto-Regressive setting.
For example, by letting \(\mathcal{H}_{t}\) be a class of linear functions, we can use known results on learning linear classifiers to show that the induced AR hypothesis class is efficiently learnable. We define the linear AR hypothesis class as follows. Let \(\psi:\mathbb{D}\to\mathbb{R}^{d}\) be some embedding of the dictionary. With some abuse of notation, for \(\mathbf{z}\in\mathbb{D}^{t}\) we define \(\psi(\mathbf{z})=[\psi(z_{1}),\dots,\psi(z_{t})]\in\mathbb{R}^{d\times t}\). Fix some \(t\), let \(\mathbf{W}\in\mathbb{R}^{\mathbb{D}\times d\times(n+t)}\), and for all \(\mathbf{x}\in\mathcal{X}\) and \(\mathbf{z}\in\mathcal{Z}_{t}\) define \[h_{\mathbf{W}}(\mathbf{x},\mathbf{z})=\arg\max_{D\in\mathbb{D}}\left\langle W_{D},\psi([\mathbf{x},\mathbf{z}])\right\rangle\] Now, denote the function class of all linear predictors \(\mathcal{H}_{t}^{\mathsf{Lin}}=\{h_{\mathbf{W}}\ :\ \mathbf{W}\in\mathbb{R}^{\mathbb{D}\times d\times(n+t)}\}\), and observe that this class is learnable in polynomial time. Under some margin conditions and using a convex surrogate loss function, this class is in fact learnable using SGD. Therefore, for the linear AR hypothesis class \(\mathcal{H}^{\mathsf{Lin}}=\mathcal{H}_{1}^{\mathsf{Lin}}\times\dots\times\mathcal{H}_{T}^{\mathsf{Lin}}\), we get that \(\mathcal{H}^{\mathsf{Lin}}\) is efficiently learnable in the Auto-Regressive setting. ### Approximation Results We showed that when the AR hypothesis class \(\mathcal{H}\) is induced from a sequence of (efficiently) learnable hypothesis classes, then \(\mathcal{H}\) is also (efficiently) AR learnable. In particular, \(\mathcal{H}^{\mathsf{Lin}}\) is efficiently AR learnable, as a product of linear classes. We now show that while learnability transfers from the classical setting to the AR setting, in AR learning we can get much stronger _approximation_ guarantees. In fact, while linear classes are relatively limited in the standard setting, we show that the linear AR class \(\mathcal{H}^{\mathsf{Lin}}\) is extremely powerful. Namely, we show that linear AR functions can efficiently approximate any Turing computable function. We first need a proper definition of the functions that AR hypotheses "compute". For some AR hypothesis \(h\), define the output of the auto-regression process at time \(t\) to be \(h^{(t)}(\mathbf{x})\), defined recursively by: * \(h^{(1)}(\mathbf{x})=h(\mathbf{x},\emptyset)\) * \(h^{(t)}(\mathbf{x})=h\left(\mathbf{x},\left(h^{(1)}(\mathbf{x}),\ldots,h^{(t-1)}(\mathbf{x})\right)\right)\) For now, we focus on AR hypotheses that are evaluated for \(T\) steps, for some fixed \(T\in\mathbb{N}\). In Section 2.3 we discuss how the choice of \(T\) (length complexity) interacts with different measures of complexity. We define the function computed (approximated) by \(h\) as follows: **Definition 4**.: _Fix some target \(f:\mathbb{D}^{n}\to\mathbb{D}\) and some AR hypothesis \(h\). Then, we say that \(h\) computes \(f\), if for every input \(\mathbf{x}\in\mathbb{D}^{n}\) we have \(h^{(T)}(\mathbf{x})=f(\mathbf{x})\). Additionally, for some distribution \(\mathcal{D}\) over \(\mathbb{D}^{n}\), we say that \(h\) \(\epsilon\)-approximates \(f\), if \(\Pr_{\mathcal{D}}\left[h^{(T)}(\mathbf{x})\neq f(\mathbf{x})\right]\leq\epsilon\)._ In other words, we say that \(h\) computes \(f\) if after running auto-regression for \(T\) steps, it outputs a value that agrees with \(f\). Note that we ignore all the intermediate outputs of \(h\) and observe only the final output.
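As a toy instance of Definition 4 (our own illustration, anticipating the circuit construction of Theorem 7 below), the following sketch runs a linear threshold circuit as an AR process: each step reads the inputs together with all previously emitted tokens and emits one gate output, which over \(\mathbb{D}=\{0,1\}\) is exactly an argmax between a linear score and zero; only the final token is read off as \(h^{(T)}(\mathbf{x})\). The gate list realizes the 3-bit parity \(x_{1}\oplus x_{2}\oplus x_{3}\).

```python
import numpy as np

def run_circuit_as_ar(x, gates):
    """Emulate a linear threshold circuit as a linear AR model over D = {0, 1}.
    Each gate (w, b) reads [x, previously emitted tokens]; the emitted token is
    1[<w, ctx> + b >= 0], i.e. the argmax of a linear score against zero."""
    tokens = []
    for w, b in gates:
        ctx = np.array(list(x) + tokens, dtype=float)
        tokens.append(int(w @ ctx + b >= 0))      # one threshold gate per AR step
    return tokens[-1]                              # final token = circuit output

# Gates: g1 = AND(x1,x2); g2 = XOR(x1,x2); g3 = AND(g2,x3); g4 = XOR(g2,x3).
gates = [(np.array([1.0, 1.0, 0.0]),                  -1.5),
         (np.array([1.0, 1.0, 0.0, -2.0]),            -0.5),
         (np.array([0.0, 0.0, 1.0, 0.0, 1.0]),        -1.5),
         (np.array([0.0, 0.0, 1.0, 0.0, 1.0, -2.0]),  -0.5)]

for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    assert run_circuit_as_ar(bits, gates) == sum(bits) % 2   # matches parity
print("3-bit parity reproduced via 4 AR threshold steps")
```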
Ignoring the intermediate tokens in this way is in alignment with common practice, where we let language models use arbitrarily long chain-of-thought/scratchpad before arriving at the final answer3. Footnote 3: Here we assume that \(f\) outputs a single token in \(\mathbb{D}\), and therefore observe only the last token produced by the auto-regression. However, we note that this can be extended to the case where \(f\) outputs multiple tokens, and we observe a sequence of tokens at the end of the auto-regression. Next, we show that if some AR class \(\mathcal{H}\) is learnable, then auto-regressive learning of distributions realizable by \(h\in\mathcal{H}\) returns an approximator for the function computed by \(h\): **Theorem 5**.: _Assume that \(\mathcal{H}\) is (efficiently) AR Learnable with sample complexity \(m(\epsilon,\delta)\). Then, there exists an (efficient) algorithm that for every \(\epsilon,\delta\) and distribution \(\mathcal{D}\) realizable by some \(h\in\mathcal{H}\), given a sample of size \(m(\epsilon,\delta)\) from \(\mathcal{D}\), returns w.p. \(\geq 1-\delta\) a function \(\hat{h}\) s.t. \(\hat{h}^{(T)}\) \(\epsilon\)-approximates \(h^{(T)}\)._ The proof follows by induction from the definitions (see Appendix A). Theorem 5 shows that using auto-regressive learning, we can learn to approximate the function computed by the underlying AR function \(h\). #### 2.2.1 Approximation Capacity of Linear Hypotheses We now limit ourselves to a dictionary with only two tokens \(\mathbb{D}=\{0,1\}\), to be compatible with standard analysis of computations with Boolean inputs/outputs. We will show that linear AR functions can approximate a very large class of functions--namely, the class of _linear threshold circuits_. **Definition 6**.: _A linear threshold function is a function of the form_ \[x\mapsto\sigma(\langle\mathbf{w},x\rangle+b)\] _for \(\sigma(x)=\mathbf{1}_{x\geq 0}\). A linear threshold circuit is a Boolean circuit where every gate computes a linear threshold function._ The following result shows that linear AR functions can approximate arbitrary linear threshold circuits: **Theorem 7**.: _Assume that \(f:\{0,1\}^{n}\rightarrow\{0,1\}\) can be computed by a linear threshold circuit with at most \(T\) gates. Then, \(f\) can be computed by a linear AR function \(h\)._ The proof of the above result uses the fact that a linear threshold function can be implemented using the argmax over a linear function, in the case where \(\mathbb{D}=\{0,1\}\). The full proof is given in Appendix A. We note that any Turing computable function can be computed by a linear threshold circuit of some size \(T\) that scales polynomially with the runtime of the Turing machine (see e.g. [2]). Therefore, we get that linear AR functions can compute any Turing computable function, with only polynomial blow-up in run-time. This leads to the following result: **Corollary 8**.: _For any function \(f\) that is Turing computable in time \(T(n)\), and for any distribution \(\mathcal{D}\) over inputs of size \(n\), there exists a dataset of strings of tokens, each of size \(\operatorname{poly}(T(n))\), such that training a linear AR model over this dataset efficiently recovers a function that approximates \(f\) w.r.t. \(\mathcal{D}\)._ ### Length Complexity We showed that even simple classes like linear auto-regressive predictors can approximate any Turing computable function. Since linear predictors can be learned efficiently, we get a learning scheme that can efficiently learn virtually any function of interest.
This is in contrast with the standard supervised learning setting, where efficiently learnable function classes are typically very limited in their expressive power. However, we note that the complexity of learning did not magically "disappear". To make learning possible, we require that the learner has, during learning, access to a sequence of tokens representing the internal "chain-of-thought" generated by the target it aims to imitate. While the length of this sequence is still reasonable (polynomial in the problem parameters), acquiring data with such long sequences might be costly, or even impossible. In this section we introduce _length complexity_, a new notion of learning complexity that quantifies the number of intermediate tokens required for learning some concept class. In other words, length complexity captures the length of the "chain-of-thought" supervision provided to the model during training. The length complexity complements common complexity measures such as sample complexity and run-time complexity, and we show that in some cases we can trade off sample or computational complexity for length complexity, and vice versa. We begin with a formal definition of _length complexity_. Fix some distribution \(\mathcal{D}\) over \(\mathbb{D}^{n}\), some AR hypothesis class \(\mathcal{H}\) and some target concept class \(\mathcal{F}\) of functions \(\mathbb{D}^{n}\rightarrow\mathbb{D}\). The definition below extends Definition 4 to function classes, which allows an explicit discussion on length complexity. **Definition 9**.: _We say that \(\mathcal{H}\) computes \(\mathcal{F}\) with length complexity \(T\), if for every \(f\in\mathcal{F}\) there exists some \(h\in\mathcal{H}\) such that, for all \(\mathbf{x}\in\mathbb{D}^{n}\) we have \(h^{(T)}(\mathbf{x})=f(\mathbf{x})\). Additionally, we say that \(\mathcal{H}\) \(\epsilon\)-approximates \(\mathcal{F}\) with length complexity \(T\) if for every \(f\in\mathcal{F}\) there exists some \(h\in\mathcal{H}\) s.t. \(\Pr_{\mathcal{D}}\left[h^{(T)}(\mathbf{x})\neq f(\mathbf{x})\right]\leq\epsilon\)._ From Theorem 7 we get that the class of linear threshold circuits of size \(T\) can be \(\epsilon\)-approximated using linear AR functions with _length complexity_ \(T\). For small circuits this might not be an issue, but otherwise the dependence of the length complexity on the circuit size may become problematic. We expect that taking a richer AR hypothesis class \(\mathcal{H}\) would result in a reduction of the length complexity. For example, in the extreme case, if we take \(\mathcal{H}\) to be the class of all linear threshold circuits of size \(T\) (that is, we take the target class and the hypothesis class to be the same class), then it trivially shrinks the approximation length complexity of such circuits to \(1\). However, AR learning of \(\mathcal{H}\) in this case is equivalent to classical supervised learning without any intermediate supervision, which becomes again computationally hard. In the rest of this section, we discuss the interplay between the choice of the AR hypothesis class and the different measures of complexity that it induces: sample complexity, computational complexity and length complexity. #### 2.3.1 Length Complexity of Parities To demonstrate a concrete analysis of length complexity, we consider the well-studied problem of learning parities, a natural extension of the XOR problem [27].
In the parity learning problem, the inputs are sequences of \(n\) bits, and the label is determined by the parity of the sum of an unknown subset of bits from the input. This problem is known to be computationally hard in some settings. For example, Statistical Query (SQ) algorithms need to use \(\Omega(2^{n})\) queries to solve the parity problem [16]. This problem has also been shown to be hard for different variants of gradient-descent [37, 1, 25]. We now formally define the set of parity functions. Assume \(\mathbb{D}=\{0,1\}\) (Boolean inputs). For some subset \(A\subseteq[n]\), define the parity function over \(A\), \[\chi_{A}(\mathbf{x})=\sum_{i\in A}x_{i}\mod 2\] Let \(\mathcal{F}_{n}\) be the class of all parity functions, \(\mathcal{F}_{n}=\{\chi_{A}\ :\ A\subseteq[n]\}\). We begin by showing that with a small (logarithmic) length complexity, a linear AR model can compute any parity function. **Theorem 10**.: _The class \(\mathcal{F}_{n}\) can be computed using the linear AR class \(\mathcal{H}^{\mathsf{Lin}}\), with length complexity \(O(\log n)\)._ Proof of Theorem 10.: From [15], there exists a linear threshold circuit which computes the parity function over \(n\) bits with \(O(\log n)\) gates. In particular, there exists such a circuit for computing the parity \(\chi_{A}\) over any subset of the bits. Therefore, the result follows from Theorem 7. Since we showed that linear AR functions are efficiently learnable (Theorem 3), the above theorem implies that parities become efficiently learnable given \(O(\log n)\) intermediate tokens. This is in contrast to the standard supervised learning setting, where linear functions cannot approximate parities [8]. We note that a similar result on learning parities with intermediate tokens appears in [45]. However, the result in [45] requires \(O(n)\) intermediate tokens, while we show that learning is possible with length complexity of \(O(\log n)\). We next show that by taking more complex hypothesis classes we can reduce the length complexity of computing \(\mathcal{F}_{n}\). However, this comes at a cost of increasing either the sample or the computational complexity. We now define a sequence of AR classes of growing complexity for computing \(\mathcal{F}_{n}\). For every \(k\leq n\), let \(\mathcal{F}_{n,k}\) be the class of parities over subsets of size at most \(k\), namely \(\mathcal{F}_{n,k}=\left\{\chi_{A}\ :\ A\in\binom{n}{\leq k}\right\}\). The larger \(n\) and \(k\) are, the harder it is to learn \(\mathcal{F}_{n,k}\) (in the standard notion of supervised PAC learning). In particular, there are known lower bounds on learning \(\mathcal{F}_{n,k}\) using Statistical Query (SQ) algorithms, a large family of algorithms that include variants of gradient-based learning algorithms [5]. Roughly speaking, learning \(\mathcal{F}_{n,k}\) using SQ algorithms requires the computational complexity to grow with \(\binom{n}{\leq k}\approx(n/k)^{k}\), and the sample complexity to grow with \(\approx k\log n\). We define \(\mathcal{H}^{(k)}=\mathcal{F}_{n,k}\times\mathcal{F}_{n+1,k}\times\ldots\), and show the following result: **Theorem 11**.: \(\mathcal{H}^{(k)}\) _can compute \(\mathcal{F}_{n}\) with length complexity \(\Theta(n/k)\)._ To prove the above result, we show that any parity over \(n\) bits can be computed by constructing a "tree" of \(k\)-order parities, which reduces the length complexity by a factor of \(k\) (see Appendix A). 
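The trade-off behind Theorem 11 can be made tangible with a small sketch (our simplification; the proof in the appendix uses a tree of \(k\)-order parities rather than the chain below). Each intermediate token is a parity over at most \(k\) values, namely the running token plus \(k-1\) fresh input bits, so the full parity over \(n\) bits is reached after roughly \(n/(k-1)=\Theta(n/k)\) tokens.

```python
import numpy as np

def parity_chain(bits, k):
    """Compute the parity of `bits` by chaining parities of arity <= k:
    each CoT token is the parity of the previous token and k-1 new bits."""
    acc, chain = 0, []
    for i in range(0, len(bits), k - 1):
        acc = (acc + sum(bits[i:i + k - 1])) % 2   # one parity step of arity <= k
        chain.append(acc)                          # emitted intermediate token
    return chain[-1], len(chain)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=32).tolist()
for k in (2, 4, 8):
    val, T = parity_chain(x, k)
    assert val == sum(x) % 2
    print(f"k = {k}: length complexity T = {T}  (~ n / (k - 1), n = 32)")
```

Larger \(k\) shortens the chain, at the price of each step belonging to a richer (and computationally harder to learn) class of \(k\)-wise parities.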
Theorem 11 thus shows that by increasing the complexity of the hypothesis class, we can decrease the length complexity required for AR learning. In this particular case, a linear decrease by a factor \(k\) in the length complexity was possible at the cost of increasing the computational complexity _exponentially_ with \(k\) (for SQ algorithms and variants of GD). While the exact interplay of computational and length complexity depends on the choice of target and hypothesis classes, this example shows that, in some cases, significantly decreasing the length complexity makes the problem computationally hard to learn. The above results show an analysis of length complexity for a particular problem of interest: the parity problem. We believe that a fundamental understanding of the length complexity of different problem classes will allow us to gain a better understanding of auto-regressive predictors. For example, discovering an intrinsic complexity measure for hypothesis classes (analogous to VC dimension or SQ dimension) that can be used to derive length complexity bounds is of particular interest. We leave such an investigation to future research. ## 3 Experiments We now turn to empirically validate our theoretical results, showing that very simple models perform surprisingly well when trained auto-regressively to perform next-token prediction. We start by training a simple linear model on a dataset of short stories, and then evaluate the performance of a small MLP on a task of arithmetic computations. ### Tiny Stories We test the efficiency of linear AR models on the simple TinyStories dataset [11]. This is a synthetic dataset of short stories containing simple words. We train a linear model with context length of \(T=64\) on this dataset. The model has only three layers: 1) a standard (linear) embedding layer, mapping tokens into a vector of dimension \(d=256\); 2) a linear layer mapping \(d\times T\) to \(d\times T\) (using standard masking for next-token prediction during training); 3) an output embedding layer mapping vectors of dimension \(d=256\) back into the output space of all tokens (see Figure 1 for illustration of the architecture). To allow next-token prediction training, we apply masking on the second linear layer, so that each output token only has access to previous tokens in the sequence. While the resulting classifier is linear, we note that this model is not exactly the linear AR model analyzed previously, as we allow sharing some parameters (namely, the input/output embedding parameters) across the different sequence positions. Figure 1: Illustration of the linear network and the MLP used in our experiments. However, this is a close proxy to the idealized linear model. Altogether, the resulting model has roughly 162M active parameters. The model is trained for \(5\nicefrac{{1}}{{2}}\) hours on a single A100 machine. While the results are certainly inferior in quality to transformer-based language models4, we note that the linear predictor often does produce coherent text. Below we show some examples of prompts and the resulting outputs of the model. Notice that there are some grammatical errors (e.g. Prompt #3) or conceptual errors (e.g. Prompt #4)5, but the overall behavior seems reasonable. Footnote 4: For comparison, in our experiments GPT-2 Small (124M parameters) reaches a perplexity of 2.2 on the TinyStories dataset, while our linear model reaches a perplexity of 3.4 when trained in the same scheme.
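The described architecture can be sketched in PyTorch as follows. This is our reconstruction, not the authors' code: for brevity, the position-mixing layer below applies one \(T\times T\) map shared across channels, a simplification of the full \(d\times T\to d\times T\) linear map in the text, and the vocabulary size is an arbitrary placeholder.

```python
import torch
import torch.nn as nn

class LinearAR(nn.Module):
    """Sketch of the three-layer linear model described above: token
    embedding -> masked linear map across positions -> output embedding.
    No attention and no non-linearity anywhere."""
    def __init__(self, vocab, d=256, T=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
        self.mix = nn.Parameter(torch.randn(T, T) / T)  # position-mixing weights
        self.out = nn.Linear(d, vocab, bias=False)
        # Causal mask: output position s may only read positions t <= s.
        self.register_buffer("mask", torch.tril(torch.ones(T, T)))

    def forward(self, idx):                 # idx: (batch, T) token ids
        h = self.emb(idx)                   # (batch, T, d)
        w = self.mix * self.mask            # zero out future positions
        h = torch.einsum("st,btd->bsd", w, h)
        return self.out(h)                  # (batch, T, vocab) next-token logits

model = LinearAR(vocab=512)
print(model(torch.randint(0, 512, (2, 64))).shape)   # torch.Size([2, 64, 512])
```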
Footnote 5: In Lewis Carroll's "_Alice's Adventures in Wonderland_", the adventures of Alice begin after she falls asleep, sitting bored by a riverbank. Therefore, the model's assertion that "_Alice was tired, so she decided to go on an adventure_" is not completely unreasonable. ### Multiplication We now turn to demonstrate the power of next-token prediction with chain-of-thought reasoning for arithmetic tasks. We focus on the task of multiplying two 4-digit numbers, which has been shown to be challenging even for huge language models such as GPT-4 [23]. For this task, we train a simple Multi-Layer Perceptron (MLP) with four layers: 1) a standard (linear) embedding layer, from tokens to dimension \(d=128\); 2) a linear layer with a ReLU activation, applied across the whole context window, mapping the input of \(d\times T\) to an output of \(d\times T\) (where we use a context length of \(T=307\)); 3) a linear layer with a ReLU activation applied per token, mapping from \(d\) to \(d\); 4) a final output embedding, mapping back to the space of all tokens (see Figure 1 for illustration of the architecture). Similarly to the linear network, we mask future positions in the second layer. We note that while this network is non-linear (unlike the models discussed previously), it is still very simple, and very far from standard transformer-based networks (e.g., we use no attention mechanism). Altogether, our MLP has 775M active parameters. Recently, a paper by [23] introduced Goat, a relatively small transformer fine-tuned from the LLaMA model, which was able to outperform GPT-4 in various arithmetic tasks when trained on data with intermediate calculations. We follow a similar procedure for training our model on 4-digit multiplication, with some key differences. First, we give more intermediate steps than in [23], essentially unfolding the multiplication algorithm in the training sequences (see Figure 2). Second, we use a custom tokenization scheme, where we separately tokenize single digits (\(1,2,3,\dots\)), signs (\(\times,+,=\)), and also pairs of digits joined by a multiplication sign (\(1\times 2\), \(3\times 5\), etc.). This tokenization allows the model to quickly solve the single-digit multiplication task (by mapping pairs of multiplied digits to their product), which is a crucial tool in the multiplication algorithm. Finally, we also add zero-padding to some of the numbers, to get all strings to have the same length. We split all pairs of 4-digit numbers arbitrarily, use 75% for training, and keep the rest for validation. The network is trained from scratch for 17 hours on a single A100 GPU, going over 100M sequences (307M tokens) sampled uniformly from the training set. In Table 1 we compare the performance of our simple MLP (evaluated on 1000 validation examples) with GPT-3.5 (evaluated on the same examples), as well as with GPT-4 and Goat-7B on the same task (as reported in [23]). We report both the exact-match accuracy of the final answer and the accuracy of individual digits in the final number. We note that the performance of our MLP matches the performance of the much larger fine-tuned transformer in [23], and outperforms both GPT-3.5 and GPT-4 on this task. This demonstrates again that a lot of the power of language models can be attributed to the next-token auto-regressive training, and not necessarily to a particular architectural choice.
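As an illustration of the custom tokenization, consider the following sketch; this is our own reading of the scheme, applied to the single-digit product steps of the unfolded algorithm where the fused tokens are unambiguous, and the paper's exact token inventory and CoT format may differ. It builds a vocabulary of digits, signs, and fused digit-pair products, and greedily prefers the fused tokens, so a CoT step such as `4x8=32` becomes `['4x8', '=', '3', '2']`.

```python
def make_vocab():
    """Vocabulary sketch: single digits, arithmetic signs, and fused
    'a x b' digit-pair tokens such as '3x5'."""
    vocab = [str(d) for d in range(10)] + ["x", "+", "="]
    vocab += [f"{a}x{b}" for a in range(10) for b in range(10)]
    return set(vocab)

def tokenize(expr, vocab):
    # Greedy left-to-right match, preferring the 3-character fused tokens.
    toks, i = [], 0
    while i < len(expr):
        if expr[i:i + 3] in vocab:
            toks.append(expr[i:i + 3])
            i += 3
        else:
            toks.append(expr[i])
            i += 1
    return toks

vocab = make_vocab()
print(tokenize("4x8=32", vocab))   # ['4x8', '=', '3', '2']
```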
\begin{table} \begin{tabular}{l c|c} Model & Accuracy (exact match) & Accuracy (per-digit) \\ \hline **MLP-775M** & 96.9\% & 99.5\% \\ \hline **GPT-3.5** & 1.2\% & 61.85\% \\ **GPT-4*** & 5.3\% & 61.8\% \\ **Goat-7B*** & 96.9\% & 99.2\% \\ \end{tabular} \end{table}
Table 1: Performance of GPT vs. the MLP model on the 4-digit multiplication task. *For GPT-4 and Goat-7B, we use the numbers as reported in [23].

Figure 2: Comparison between the outputs of our MLP, GPT-3.5 and GPT-4 on the 4-digit multiplication task.

## 4 Discussion

The emerging capabilities of large language models have triggered an ongoing debate about their potential and implications. Certain proponents assert that we are close to achieving Artificial General Intelligence (AGI), pointing to models such as GPT-4, which have already demonstrated perceived "sparks of AGI" [7]. They argue that AGI is just a matter of scaling up--creating larger models, feeding them more data, and increasing training time. In stark contrast, others dismiss these large models as merely sophisticated autocomplete systems, voicing concerns about their propensity to absorb and perpetuate biased and harmful data from the internet [4]. While this debate is far from settled, we hope that our work sheds light on the theoretical possibilities inherent in training auto-regressive next-token predictors. Our findings indicate that, given suitable data, simple next-token predictors can be trained to effectively learn virtually any function of interest. Consequently, if there exists some computer program capable of realizing AGI, then it is theoretically plausible to attain AGI by training simple next-token predictors, given the appropriate data. Admittedly, these assertions, in their current form, are somewhat theoretical, with practical application requiring data composed of potentially very long sequences of intermediate computations. However, we show that by modifying the choice of the hypothesis class we can possibly shorten the required sequence length, making our results more realistic. Therefore, we believe that our research can contribute towards a better, more nuanced understanding of both the capabilities and constraints associated with next-token predictors.
2309.12918
Combining Resonant and Tail-based Anomaly Detection
In many well-motivated models of the electroweak scale, cascade decays of new particles can result in highly boosted hadronic resonances (e.g. $Z/W/h$). This can make these models rich and promising targets for recently developed resonant anomaly detection methods powered by modern machine learning. We demonstrate this using the state-of-the-art CATHODE method applied to supersymmetry scenarios with gluino pair production. We show that CATHODE, despite being model-agnostic, is nevertheless competitive with dedicated cut-based searches, while simultaneously covering a much wider region of parameter space. The gluino events also populate the tails of the missing energy and $H_T$ distributions, making this a novel combination of resonant and tail-based anomaly detection.
Gerrit Bickendorf, Manuel Drees, Gregor Kasieczka, Claudius Krause, David Shih
2023-09-22T15:13:56Z
http://arxiv.org/abs/2309.12918v2
# Combining Resonant and Tail-based Anomaly Detection

###### Abstract

In many well-motivated models of the electroweak scale, cascade decays of new particles can result in highly boosted hadronic resonances (e.g. \(Z/W/h\)). This can make these models rich and promising targets for recently developed resonant anomaly detection methods powered by modern machine learning. We demonstrate this using the state-of-the-art CATHODE method applied to supersymmetry scenarios with gluino pair production. We show that CATHODE, despite being model-agnostic, is nevertheless competitive with dedicated cut-based searches, while simultaneously covering a much wider region of parameter space. The gluino events also populate the tails of the missing energy and \(H_{T}\) distributions, making this a novel combination of resonant and tail-based anomaly detection.

## I Introduction

The absence of new physics at the LHC is an enduring mystery. Many well-motivated theoretical frameworks such as supersymmetry, extra dimensions, and composite Higgs have predicted signatures of new particles at the weak scale, yet countless searches for these new particles have not found any significant evidence for them to date. Nearly all of these searches for physics beyond the Standard Model (BSM) are model-specific to some degree, optimized for specific signal scenarios, often using simulations. It is highly likely that these searches have not thoroughly covered the full phase space at the LHC, leaving a real possibility of new physics simply hiding in the data at the LHC, undiscovered because we haven't searched for it. Recently there has been considerable interest in developing more model-agnostic search strategies for the LHC [1; 2; 3]. In particular, a lot of activity has focused on "resonant anomaly detection" methods [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. In these approaches, one singles out a specific kinematic feature (e.g. the invariant mass of something in the event) in which new physics is postulated to be localized (resonant) within a window. This window serves as the signal region (SR) of the anomaly search. Then one uses the sidebands and modern machine learning techniques to learn a multivariate, data-driven background template in additional features \(x\). Finally, one employs further techniques (such as a classifier) to learn the difference between the background template and the data itself in the SR, in the form of an anomaly score

\[R(x)=\frac{p_{\rm data}(x)}{p_{\rm bg}(x)}. \tag{1}\]

If \(p_{\rm bg}(x)\) is the true background density and the classifier is optimal, this is the Neyman-Pearson optimal (idealized) anomaly detector in the SR. By cutting on \(R(x)\), one can greatly enhance the significance of any resonant new physics in the SR. So far this activity has almost exclusively focused on new physics that is fully localized -- both in the SR and in the features \(x\) -- and on using a global resonant feature such as the invariant mass of a dijet system. Here we point out that the resonant anomaly detection technique is more general and both assumptions can be easily relaxed. First, resonant anomaly detection methods can be applied to any resonant feature in the event, as long as the background satisfies the assumption of smoothness in that feature.
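In practice, the score in Eq. (1) is not computed from explicit densities; instead, a classifier \(f(x)\) is trained to distinguish data from the background template, and for balanced training samples \(f/(1-f)\) is a monotonic estimate of \(R(x)\). A minimal sketch on toy data follows; the Gaussian shapes and all numbers are illustrative only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Toy stand-ins: a data-driven background template, and "data" that is
# mostly background plus a small, localized signal component.
template = rng.normal(0.0, 1.0, size=(20000, 2))
data = np.vstack([rng.normal(0.0, 1.0, size=(19500, 2)),
                  rng.normal(2.5, 0.3, size=(500, 2))])

X = np.vstack([template, data])
y = np.concatenate([np.zeros(len(template)), np.ones(len(data))])
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=60).fit(X, y)

# For balanced classes, f/(1-f) is a monotonic estimate of Eq. (1).
f = clf.predict_proba(data)[:, 1]
R = f / np.clip(1.0 - f, 1e-6, None)
print("fraction of data with R > 2:", (R > 2).mean())
```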
One strong motivation for considering this broader perspective is that in many well-motivated models, such as those for the electroweak hierarchy, highly boosted resonances (either \(Z/W/h\) from the SM or additional BSM particles such as a heavier Higgs boson) can be quite common in the decays of heavier particles. Additionally, we seek to broaden the scope of resonant anomaly detection in this work by pointing out that the signal need not be localized in all (or any) of the features \(x\); it can also appear on the tails of the \(x\) distribution (although, of course, the signal needs to be distinguishable in some of the features). This is a feature of resonant anomaly detection that has not been utilized so far. Anomalies on the tails of distributions such as \(p_{T}^{\rm miss}\), \(H_{T}\) and \(M_{\rm eff}\) are quite common and plausible in models of TeV-scale new physics. In this work, we illustrate this broader application of resonant anomaly detection using a supersymmetric (SUSY) scenario as a well-motivated example. This SUSY scenario consists of gluino pair production, with the gluinos decaying to neutralinos plus a pair of (light quark) jets, and the neutralino decaying to another neutralino (the LSP) through an on-shell \(Z\) boson, as shown in figure 1. The LSP neutralino is much lighter than the second neutralino, meaning the \(Z\)'s are highly boosted. Therefore every event has two boosted \(Z\)'s, jets, and missing energy. CMS previously searched for this signal with a cut-based analysis [23]. It defined a series of SRs requiring the leading and subleading AK8 jets to be within \(m\in[70,100]\) GeV and considering \(p_{T}^{\rm miss}\) in different exclusive bins. The background was estimated in two steps. First, the total number of events \(B_{\rm norm}\) in the SR was determined using sidebands in the leading AK8 jet mass (the subleading AK8 jet was required to be in the SR). Then the distribution over the \(p_{T}^{\rm miss}\) bins (the shape) was determined using the \(p_{T}^{\rm miss}\) distribution in control regions defined by requiring both AK8 jets to be outside the SR, renormalized to \(B_{\rm norm}\). Here we point out that we can use one of the \(Z\)'s to define an SR for resonant anomaly detection, and then we can use the rest of the kinematic variables (\(m_{\rm jet}\) of the subleading AK8 jet, \(p_{T}^{\rm miss}\), \(H_{T}\), etc.) to play the role of \(x\) in resonant anomaly detection. We show that this allows for a potentially more expansive and model-agnostic search, while not sacrificing much in sensitivity to the original SUSY signal. We illustrate this with additional SUSY-motivated scenarios (different decay branching ratios to \(h\) and \(Z\)), as well as hypothetical non-minimal scenarios involving non-SM resonances. Notably, all of these scenarios have \(p_{T}^{\rm miss}\), and in fact the \(p_{T}^{\rm miss}\) is essential to suppress the resonant backgrounds from SM \(Z/W\)+jets with hadronically decaying \(Z/W\). This leads to a novel combination of resonant and non-resonant anomaly detection strategies.1 This is also the first application of model-agnostic strategies to the SUSY domain and opens up the potential for many more new avenues in the search for SUSY and other well-motivated top-down scenarios. Our method should be contrasted with existing ML-based approaches to SUSY in the literature, which are fully supervised (see e.g. [25; 26; 27; 28; 29]).
Footnote 1: See [24] for a different, fully non-resonant application of weakly-supervised anomaly detection to the jet constituents of the monojet+\(p_{T}^{\rm miss}\) final state. Motivated by (non-resonant) dark showers, they did not obtain their background templates from sidebands in the jet mass; instead they considered an idealized (perfect) background template from simulated \(Z(\nu\nu)\)+jets events.

The outline of our paper is as follows: Section II describes how the signal and background processes are simulated. In section III we summarize the steps involved in CATHODE. We show the results of applying CATHODE to different signal processes in section IV. Finally, we conclude in section V.

## II Data

Since all the methods described here (both the CMS search and CATHODE) rely fully on data for estimating backgrounds (i.e., are "fully data-driven"), the simulation data we generate here is meant to play the role of real data, and all background estimates, significances, etc. we derive are meant to illustrate the result one would get by applying these methods to collider data. No events generated here play the role of simulations at the LHC. For Standard Model (SM) background data, we take into account the three largest contributions of background events to the CMS search, arising from \(Z\)+jets, \(W\)+jets and \(t\bar{t}\)+jets. \(W\) and \(Z\) events were generated with 1 to 4 additional final-state partons, while \(t\bar{t}\) events were generated with up to 3 additional partons. For the benchmark signal (to be used to compare the performance of the CMS search vs. the CATHODE method), we follow the CMS search and generate gluino pair production (with 0 to 2 additional partons), with the subsequent cascade decay \(pp\to\widetilde{g}\widetilde{g}\), \(\widetilde{g}\to q\bar{q}\widetilde{\chi}_{2}^{0}\), \(\widetilde{\chi}_{2}^{0}\to Z\widetilde{\chi}_{1}^{0}\), where the neutralino \(\widetilde{\chi}_{2}^{0}\) is the next-to-lightest supersymmetric particle (NLSP) and \(\widetilde{\chi}_{1}^{0}\) is the lightest supersymmetric particle (LSP). The mass splitting between the gluinos and the NLSP is set to 50 GeV, while the LSP mass is 1 GeV. This results in soft jets from the first step of the decay and a highly boosted \(Z\) boson. The LSP escapes the detector and contributes large amounts of missing energy. Later we will also consider decays of \(\widetilde{\chi}_{2}^{0}\) to \(X\widetilde{\chi}_{1}^{0}\), where \(X\) is either a Standard Model Higgs boson or a new Higgs boson with a mass different from 125 GeV, like the additional Higgs bosons in supersymmetric extensions of the Standard Model. The Standard Model Higgs boson decays in \(\sim 58\%\) of cases to \(b\bar{b}\), while for the latter case we set the branching ratio to 100%. All events are generated with MadGraph5_aMC@NLO 3.2.0 at \(\sqrt{s}=13\) TeV. The NNPDF3.1LO PDF set [30] is used throughout. At the generator level a minimum \(H_{T}\) cut of 250 GeV is imposed. Gluinos are decayed in a spin-uncorrelated manner with MadSpin [31] to \(q\bar{q}\widetilde{\chi}_{2}^{0}\) via an off-shell squark, and subsequently \(\widetilde{\chi}_{2}^{0}\to X\widetilde{\chi}_{1}^{0}\). Showering is done using Pythia 8.306 [32] with MLM merging. The Pythia tune CP5 was used for background events, while CP2 [33] was used for the signal samples. The number of background events in each channel is scaled to match the respective next-to-leading-order cross sections [34].
Detector effects are simulated using Delphes 3.5.0 [35] with the delphes_card_CMS.tcl detector card, modified to account for the lepton isolation criterion. Particles are clustered into jets using the anti-\(k_{T}\) clustering algorithm with cone-radius parameter \(R=0.4\) for AK4 jets and \(R=0.8\) for AK8 jets. To be considered, jets have to have \(p_{T}>30\) GeV and \(|\eta|<2.4\). The following selection criteria are imposed both for the classical CMS recast and for the dataset for CATHODE:

1. \(N_{\text{AK4 jet}}\geq 2\)
2. \(p_{T}^{\text{miss}}>300\) GeV
3. \(H_{T}>400\) GeV, where \(H_{T}=\sum_{\text{AK4 jets}}|\vec{p}_{T}|\)
4. \(|\Delta\phi_{j,\vec{H}_{T}^{\text{miss}}}|>0.5\,(0.3)\) for the first 2 (up to next 2) AK4 jets, where \(\vec{H}_{T}^{\text{miss}}=-\sum_{\text{AK4 jets}}\vec{p}_{T}\)
5. no isolated photon, electron or muon candidate with \(p_{T}>10\) GeV, with isolation variables \(I<0.1\), \(0.2\) and \(1.3\ \text{GeV}/p_{T}+0.005\) for isolated electrons, muons and photons, respectively
6. no isolated track with \(m_{T}=\sqrt{2p_{T}^{\text{track}}p_{T}^{\text{miss}}(1-\cos(\phi^{\text{miss}}-\phi^{\text{track}}))}<100\) GeV and \(p_{T}>5\) GeV for tracks identified as an electron/muon, or else 10 GeV
7. at least 2 AK8 jets with \(p_{T}>200\) GeV

The number of background events that pass this baseline selection is shown in the first line of table 1. In total, the dataset is composed of 107,421 background events, corresponding to \(\mathcal{L}_{\text{int}}=300\text{ fb}^{-1}\) after cuts 1-7. Signal events are injected according to the gluino pair production cross section. Figure 2 shows that the feature \(m_{J_{1}}\) is smooth for the background while it is resonant for the signal. (Hadronically decaying \(W\)'s and \(Z\)'s are eliminated by the requirements on \(p_{T}^{\text{miss}}\).) This is a necessary feature for the application of the CATHODE method employed in section III. Figure 3 shows that the signal of new physics is found on the tail of the \(p_{T}^{\text{miss}}\) distribution, while the background peaks at lower \(p_{T}^{\text{miss}}\). We will show that the powerful discriminator \(p_{T}^{\text{miss}}\) can be leveraged by CATHODE even though the signal is found on the tail of the distribution.

## III CATHODE

Here we recap the main points of the inner workings of Classifying Anomalies THrough Outer Density Estimation (CATHODE); for more detail see [14]. In very broad strokes, CATHODE aims to learn the density of background events in a signal-depleted region and estimates the density inside the signal-enriched region by interpolation. Then, artificial samples are generated in that region, which should follow a signal-depleted distribution. Using a classifier trained to distinguish between the artificial and real events, we can approximate the likelihood ratio (1). This would be the ideal (optimal) model-agnostic anomaly detector, as it is monotonic with \(p_{\text{signal}}(x)/p_{\text{bg}}(x)\) for any signal (since \(p_{\text{data}}(x)\) is an admixture of \(p_{\text{signal}}(x)\) and \(p_{\text{bg}}(x)\)) [36]. This allows CATHODE to classify data events as background-like or signal-like. The whole method works by learning directly from data. The training and model selection of both the density estimation and the classification are completely agnostic of any signal truth labels.
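Before detailing these steps, we note that all events entering the pipeline satisfy the baseline selection of section II. As a minimal sketch, assuming hypothetical field names, cuts 1-3 and 7 could be applied to an event record as follows (the vetoes and angular requirements, cuts 4-6, are omitted for brevity):

```python
import numpy as np

def passes_baseline(event: dict) -> bool:
    """Sketch of baseline cuts 1-3 and 7 above; the lepton, photon, track
    and angular vetoes (cuts 4-6) are omitted, and field names are assumed."""
    ak4_pt = np.asarray(event["ak4_pt"], dtype=float)  # AK4 jet pT [GeV]
    ak8_pt = np.asarray(event["ak8_pt"], dtype=float)  # AK8 jet pT [GeV]
    return (ak4_pt.size >= 2                       # cut 1: N_AK4 >= 2
            and event["met"] > 300.0               # cut 2: missing pT
            and ak4_pt.sum() > 400.0               # cut 3: H_T > 400 GeV
            and (ak8_pt > 200.0).sum() >= 2)       # cut 7: two hard AK8 jets

print(passes_baseline({"ak4_pt": [250.0, 180.0, 60.0], "met": 450.0,
                       "ak8_pt": [320.0, 240.0]}))  # True
```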
In this study, the events are represented as the tuple \((m_{J_{1}},x)\) with

\[x=\left(m_{J_{2}},p_{T}^{\text{miss}},H_{T},\tau_{21}^{J_{1}},\tau_{21}^{J_{2}}\right), \tag{2}\]

where \(J_{1}\), \(J_{2}\) are the leading/subleading AK8 jets and \(\tau_{21}=\tau_{2}/\tau_{1}\) is the ratio of n-subjettiness variables [37]. To compare the technique to the classical search more directly, we also consider the reduced set of features

\[x=\left(m_{J_{2}},p_{T}^{\text{miss}},H_{T}\right), \tag{3}\]

so that CATHODE only gets to use the same information. We use a slightly modified version of the original repository2 to allow for any dimension of \(x\).

Figure 2: Distribution of the resonant feature \(m_{J_{1}}\) for background and signal events in the sideband (SB) and the signal region (SR). The signal corresponds to \(m_{\tilde{g}}=1700\text{ GeV}\). The distributions are scaled to \(\mathcal{L}_{\text{int}}=300\text{ fb}^{-1}\).

### Data preparation and density estimation

First, one defines the signal region (SR) as an interval in \(m_{J_{1}}\) where the signal is expected to be concentrated, similar to a classical bump hunt. The complement of the SR defines the sideband (SB). As in any bump hunt, the SR window has to account for the position and the width of the signal bump. Because the reconstructed jet mass is not distributed symmetrically around the mass \(m\) of the mother particle (which is the \(Z\), the Higgs or a BSM Higgs in this paper), we chose the parameterization

\[m_{J_{1}}\in\left[m\left(1-\frac{4}{3}\sigma_{m}\right),m\left(1+\frac{2}{3}\sigma_{m}\right)\right]. \tag{4}\]

We estimate the mass resolution to be \(\sigma_{m}=15\%\) and round the window to the closest GeV. The lower sideband extends to \(m_{J_{1}}=0\), while the upper sideband is bounded only by the phase space. Events in the SB are partitioned into a training set (75%), used for the actual training, and a validation set (25%), used to select the models used in the next steps. To address the finite number of real SB events, we use leave-one-out cross-validation, such that we get four datasets with non-overlapping validation sets. The data is transformed (preprocessed) for easier learning by shifting and scaling the observables in \(x\) to fit the interval \((0,1)\), then applying a logit transformation3, and again shifting and scaling to zero mean and unit standard deviation.

Footnote 3: logit\((x)=\ln\frac{x}{1-x}\)

Figure 3: Comparison of the signal and background distributions inside the signal region and the artificial samples. The artificial samples will be discussed in the next section. The signal corresponds to \(m_{\tilde{g}}=1700~{}\rm GeV\). The distributions are scaled to \(\mathcal{L}_{\rm int}=300~{}\rm fb^{-1}\).

For density estimation, a Masked Autoregressive Flow (MAF) with affine transformations is used [38]. The MAF constructs invertible transformations with tractable Jacobians that map a simple multidimensional distribution (e.g. multiple Gaussians, as considered here) to the target density, in this case the conditional probability \(p_{\text{data}}(x|m_{J_{1}}\in\text{SB})\). The MAF uses 15 blocks of the Masked Autoencoder for Distribution Estimation (MADE) [39] to learn the transformations. The number of events it is trained on depends on the signal region but is typically of the order of \(10^{5}\).4 Training is done with the hyperparameters listed in tab. 2.

Footnote 4: We emphasize that the number of events we are using for training was carefully tuned to match the actual number of events expected in data for \(\mathcal{L}=300/\text{fb}\).
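A minimal sketch of the window of Eq. (4) and of the preprocessing chain just described follows; the clipping guard before the logit is our addition to keep the transformation finite at the boundary.

```python
import numpy as np

def sr_window(m: float, sigma_m: float = 0.15) -> tuple[float, float]:
    """Signal-region window of Eq. (4) for mass hypothesis m [GeV]."""
    return m * (1.0 - 4.0 * sigma_m / 3.0), m * (1.0 + 2.0 * sigma_m / 3.0)

def preprocess(x: np.ndarray) -> np.ndarray:
    """Shift/scale each feature to (0, 1), apply the logit, then standardize
    to zero mean and unit variance, mirroring the steps described above."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    u = np.clip((x - lo) / (hi - lo), 1e-6, 1.0 - 1e-6)
    z = np.log(u / (1.0 - u))                      # logit(u)
    return (z - z.mean(axis=0)) / z.std(axis=0)

print([round(v, 1) for v in sr_window(91.0)])      # ~[72.8, 100.1] for a Z
x = np.random.default_rng(0).exponential(1.0, size=(1000, 5))
print(preprocess(x).mean(axis=0).round(3))         # ~0 per feature
```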
After training, the 10 epochs with the lowest validation loss are selected for the sampling step.

### Sampling SR events

The next step aims to sample synthetic events inside the SR using the four density estimates from the last step. Kernel density estimation with a Gaussian kernel and a bandwidth of 0.01 is used to model the \(m_{J_{1}}\) distribution inside the SR. This is then used to sample \(N=1{,}000\) events from each of the 10 DE models, which are combined, shuffled and split between a training set (\(60\%\)) and a validation set (\(40\%\)) for the next step. The training and validation sets of all four density estimators are combined, respectively, to form the synthetic dataset with a total of 40,000 events. Compared to the roughly 10,000 real events in the SR (see the second line of Table 1), this is intentionally oversampled to improve the classification performance [14]. Setting \(N\) even higher did not improve results systematically. The synthetic background events and the real SR events are then standardized in the SR, without the logit transformation. The distributions of the synthetic events are shown in orange in figure 3. In all our models the signal is located in a resonance in \(m_{J_{2}}\) and in the tail of the \(p_{T}^{\text{miss}}\) distribution. The density estimation has to model the shape reasonably well so that this powerful classification feature can be leveraged. This is accomplished successfully, as shown in figure 3.

### Classifier and anomaly detection

Now a classifier is trained on both the synthetic and the real SR datasets to distinguish the sampled events, which should follow the background distribution, from the real events, which might additionally contain events following the signal distribution. The classifier consists of 3 hidden layers with 64 nodes and ReLU activations each, and it is optimized using the hyperparameters given in tab. 3. Because the datasets are imbalanced, a weight is assigned such that both classes contribute equally to the loss. Since in a realistic example the number of events to train and validate on is limited, we employ an additional step of leave-one-out cross-validation. The real SR data is partitioned into 4 subsets of equal size. In each fold, one quarter of the real events is held back as a test set for the anomaly detection, while the remaining 75% are split between a training set (\(60\%\)) and a validation set (\(40\%\)). (The synthetic background events are also split into training/validation sets with the same proportions.) After training, the 10 model states with the lowest validation loss are selected and evaluated on the test set. The predicted labels are then averaged over the models and assigned as anomaly scores to the events. This is repeated for the next quarter of the SR data, and so on, until every event in the SR is assigned an anomaly score. To reduce the statistical effects of severely over- and under-performing models, each dataset is shuffled 5 times to allow different selections. Then the entire process of the preceding paragraph is repeated to produce 5 different anomaly scores. All 5 anomaly-score assignments are averaged to produce a final, more robust score. Finally, to even out the influence of the signal-event selection, everything is repeated 10 times with differing independent sets of signal events.
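A minimal sketch of this classifier and of the class-balancing weights follows; the training loop and the cross-validation bookkeeping are omitted.

```python
import torch
import torch.nn as nn

def make_classifier(n_features: int) -> nn.Sequential:
    """The SR classifier described above: 3 hidden ReLU layers of 64 nodes."""
    return nn.Sequential(
        nn.Linear(n_features, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 1))

def balanced_bce(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Per-event weights so synthetic (label 0) and real (label 1) events
    contribute equally to the loss despite the 4x oversampling."""
    n1 = labels.sum().clamp(min=1.0)
    n0 = (len(labels) - labels.sum()).clamp(min=1.0)
    w = torch.where(labels > 0.5,
                    0.5 * len(labels) / n1, 0.5 * len(labels) / n0)
    return nn.functional.binary_cross_entropy_with_logits(logits, labels,
                                                          weight=w)

clf = make_classifier(5)
x = torch.randn(512, 5)                        # preprocessed features
y = (torch.rand(512) < 0.2).float()            # 1 = real SR event
loss = balanced_bce(clf(x).squeeze(-1), y)
loss.backward()
print(float(loss))
```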
In all the results we report below, we give the mean and standard deviation of these 10 different trials.

\begin{table} \begin{tabular}{|c c|} \hline Hyperparameter & Value \\ \hline optimizer & Adam \\ epochs & 100 \\ learning\_rate & \(10^{-4}\) \\ batch\_norm & true \\ batch\_norm\_momentum & 1 \\ batch\_size & 256 \\ \hline \end{tabular} \end{table}
Table 2: Parameters of the density estimator.

\begin{table} \begin{tabular}{|c c|} \hline Hyperparameter & Value \\ \hline optimizer & Adam \\ epochs & 100 \\ learning\_rate & \(10^{-3}\) \\ batch\_size & 128 \\ \hline \end{tabular} \end{table}
Table 3: Parameters of the classifier.

The signal-to-background ratio is improved by cutting on the anomaly score above a critical value \(R_{c}\). Figure 4 shows the distributions of the anomaly score \(R\) for the signal and background. No additional selections are performed. In a real application one would perform statistical inference by means of a bump hunt on the \(R\) distribution, which is beyond the scope of this work. Instead, the performance is evaluated using the nominal significance \(\mathrm{Z}=S/\sqrt{B}\), with \(S\) (\(B\)) the number of signal (background) events after imposing this cut. This makes use of the truth labels, which an experiment would have to replace by other means of background estimation. One still has to choose a strategy to set \(R_{c}\). In the following we show the signal significance with \(R_{c}\) set to maximize \(\mathrm{Z}\) while leaving at least 5 background events, to show the best performance one could hope for. Since a real application does not have access to the truth labels, this is not immediately applicable. To show a more realistic method, we also show the performance where \(R_{c}\) is set so that 1% of SR events pass the cut, while also containing at least 5 background events.

## IV Results

### Nominal signal model

We first turn our attention to the nominal signal model, where \(\widetilde{\chi}_{2}^{0}\to Z\widetilde{\chi}_{1}^{0}\). This is the signal model the dedicated CMS search [23] was aimed at.

#### iv.1.1 Three features

We start by using the limited feature set \(x=\left(m_{J_{2}},p_{T}^{\mathrm{miss}},H_{T}\right)\), so that CATHODE does not have access to more information than the classical search. To compare with CMS, we calculate the signal significance for events inside the signal region \(m_{J_{1}/J_{2}}\in[70\ \mathrm{GeV},100\ \mathrm{GeV}]\) with the b-veto mentioned in section A.1 applied. Since the search gets most of its sensitivity from the highest \(p_{T}^{\mathrm{miss}}\) bins, we apply an additional cut \(p_{T}^{\mathrm{miss}}>800\ \mathrm{GeV}\).5 This leads to roughly the same number of events as when only the top 1% of events are kept for CATHODE. For a gluino mass with a sizable cross section, like 1700 GeV, the classical search yields on average \(\mathrm{Z}=20\) over 10 independent signal injections. Using CATHODE with 3 features, the significance is on average \(\mathrm{Z}=34\pm 2\).

Footnote 5: Technically, the original CMS search uses \(p_{T}^{\mathrm{miss}}\) bins, and most of the sensitivity comes from the three highest bins, 800–1000 GeV, 1000–1200 GeV and larger than 1200 GeV, where the background is comparable to or subdominant to the signal hypothesis. To get a fair comparison with CATHODE, we replace this with a single cut.
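The realistic evaluation strategy described above, i.e., keeping the top 1% of SR events by anomaly score while requiring at least five surviving background events and then quoting \(\mathrm{Z}=S/\sqrt{B}\), can be sketched as follows. The toy scores are illustrative only, and truth labels stand in for the background estimate a real experiment would need.

```python
import numpy as np

def significance_after_cut(R, is_signal, keep_fraction=0.01, min_bg=5):
    """Keep the top `keep_fraction` of SR events by anomaly score, loosening
    the cut until at least `min_bg` background events survive, and return
    the nominal significance Z = S/sqrt(B)."""
    R = np.asarray(R); is_signal = np.asarray(is_signal, dtype=bool)
    order = np.argsort(R)[::-1]                        # most anomalous first
    cum_bg = np.cumsum(~is_signal[order])              # background so far
    n_keep = max(int(keep_fraction * len(R)),
                 int(np.searchsorted(cum_bg, min_bg)) + 1)
    kept = order[:n_keep]
    S = int(is_signal[kept].sum()); B = int((~is_signal[kept]).sum())
    return S / np.sqrt(B)

rng = np.random.default_rng(0)
R = np.concatenate([rng.beta(2, 5, 10000), rng.beta(8, 2, 200)])  # bg, sig
y = np.concatenate([np.zeros(10000, bool), np.ones(200, bool)])
print(round(significance_after_cut(R, y), 1))
```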
Evidently, CATHODE outperforms the classical approach, even though CATHODE is more model-agnostic. The reason is that the classical approach, being cut-based, misses correlations between the features that the multivariate classifier of CATHODE can pick up. To confirm this, we also investigated the sensitivity of a fully supervised approach, using the same classifier architecture and hyperparameters as those of CATHODE. The training data for the fully supervised classifier consists of an additional 300 fb\({}^{-1}\) of background events and 10,000 signal events. 60% of this dataset is used in training, while the remaining 40% is used as a validation set to select the best-performing model. Evaluating this classifier, again selecting only the top 1% of anomaly scores, results in a significance of on average \(\mathrm{Z}=33\pm 4\). We conclude that CATHODE saturates the performance of the fully supervised classifier for this amount of signal (unsurprisingly, since this is a lot of signal), and that the deep neural networks of both CATHODE's classifier and the supervised classifier can leverage correlations to improve the signal significance substantially over the classical approach.

#### iv.1.2 Five features

From now on we will use the five features \(\left(m_{J_{2}},p_{T}^{\mathrm{miss}},H_{T},\tau_{21}^{J_{1}},\tau_{21}^{J_{2}}\right)\), because the n-subjettiness variables \(\tau_{21}\) are useful discriminants. Figure 5 shows CATHODE's performance compared to the classical strategy. We see that in the relevant region at high gluino masses, the conservative cut on \(R\) (allowing only the top 1% to pass) reaches only slightly weaker results. We identify the mass where the signal significance is \(\mathrm{Z}=1.645\) with the expected 95% limit on the mass in a real application [40]. The conservative cut on \(R\) alone excludes gluino masses up to \(m_{\widetilde{g}}=2066\ \mathrm{GeV}\). This is only slightly weaker than the expected excluded mass of \(m_{\widetilde{g}}<2145\ \mathrm{GeV}\) for a dedicated search at this integrated luminosity. This is expected, because a model-specific search will be fine-tuned to the specific process, while CATHODE is intentionally kept more general. CATHODE's strength lies in this generalization, as it is able to detect different models without the need to tweak the approach, as we will show in the following sections.

Figure 4: Normalized distributions of the anomaly score \(R\) of the signal and background processes. The signal corresponds to the average distribution of 10 independent injections with \(m_{\widetilde{g}}=1900\ \mathrm{GeV}\).

### Alternate signal model: decays to SM Higgs

Now we turn our attention to another model, where the neutralinos decay via \(\widetilde{\chi}^{0}_{2}\to h\widetilde{\chi}^{0}_{1}\), where \(h\) is the 125 GeV Standard Model Higgs boson. All that has to be done for CATHODE is to select a new signal window around 125 GeV. A scan over the gluino mass is shown in figure 6. A b-jet selection criterion would be beneficial in this case, but we omit it to keep CATHODE as general as possible. Even without the b-tag, CATHODE still generates a sizable signal significance for gluino masses comparable to the expected excluded value. While the dedicated search is expected to exclude gluino masses below 2355 GeV, CATHODE with the 1% cut reaches \(\mathrm{Z}>1.645\) for all masses up to 2233 GeV. With the best possible cut on \(R\) this can be pushed to 2300 GeV. As expected, CATHODE results in slightly weaker bounds, but the opportunity cost is significantly lower than that of a specialized search.
The only change in the approach is the choice of the signal region. The intended use of CATHODE scans the signal region over the entire mass range, such that both the decays to \(Z\) and to Higgs bosons would be included automatically in this strategy without any extra considerations.

### Alternate signal model: mixed \(Z/h\) decays

Setting the branching ratio of the \(\widetilde{\chi}^{0}_{2}\to h\widetilde{\chi}^{0}_{1}\) or \(\widetilde{\chi}^{0}_{2}\to Z\widetilde{\chi}^{0}_{1}\) decays to 100% is a rather unnatural choice. Therefore we also show CATHODE's performance for a model where both branching ratios are 50%. This time the anomaly detection has to find two bumps simultaneously. For this we chose the signal window to contain both resonances: \(m_{J_{1}}\in[70~{}\mathrm{GeV},140~{}\mathrm{GeV}]\). The results of a scan over the gluino masses are shown in figure 7. This time CATHODE seems to outperform the extrapolated bound from the dedicated search [42]. The extrapolation from 35.9/fb to 300/fb integrated luminosity is quite far and should be taken with a grain of salt. The dedicated search classifies events into 0, 1 and 2 Higgs categories using b-tags. The signal model populates all categories simultaneously. The approach using CATHODE only uses the single signal region, without further adjustments, to generate these results. In figure 8 we show that CATHODE is indeed capable of recovering both bumps, corresponding to the decays into \(Z\) and Higgs bosons, respectively. Figure 9 shows that CATHODE is very robust against changes in the branching ratios. We vary the branching ratio \(\mathrm{Br}(\widetilde{\chi}^{0}_{2}\to Z\widetilde{\chi}^{0}_{1})\), with \(\mathrm{Br}(\widetilde{\chi}^{0}_{2}\to h\widetilde{\chi}^{0}_{1})=1-\mathrm{Br}(\widetilde{\chi}^{0}_{2}\to Z\widetilde{\chi}^{0}_{1})\), and calculate the significance. Regardless of the branching ratio, the multiplicative gain in significance from applying the technique is always between 5 and 6.

Figure 5: Sensitivity of CATHODE and the classical strategy. The signal window is set as \(m_{J_{1}}\in[70~{}\mathrm{GeV},100~{}\mathrm{GeV}]\). For the blue line \(R_{c}\) is set to allow 1% of events to pass this cut, while the orange line omits the cut completely. The shaded region shows one standard deviation around the mean \(S/\sqrt{B}\) obtained from 10 different signal injections. The dot-dashed part of the blue line represents parameter points where \(R_{c}\) has to be lowered to allow 5 background events. The vertical black line at 2145 GeV indicates the gluino mass that is excluded at 95% confidence level by our 300/fb recreation of the dedicated search [23]. The red dot-dashed line is calculated using the classical strategy with \(m_{J_{1}/J_{2}}\in[70~{}\mathrm{GeV},100~{}\mathrm{GeV}]\), \(p_{T}^{\mathrm{miss}}>800~{}\mathrm{GeV}\) and the b-veto.

Figure 6: CATHODE's performance for \(\widetilde{\chi}^{0}_{2}\to h\widetilde{\chi}^{0}_{1}\). The signal window is set as \(m_{J_{1}}\in[100~{}\mathrm{GeV},140~{}\mathrm{GeV}]\). For the blue line \(R_{c}\) is set to allow 1% of events to pass this cut, while the orange line omits the cut completely. The dot-dashed part of the blue line represents parameter points where \(R_{c}\) has to be lowered to allow 5 background events. The shaded region shows one standard deviation around the mean \(S/\sqrt{B}\) obtained from 10 different signal injections. The vertical black line at 2355 GeV indicates the gluino mass that is expected to be excluded by rescaling the (expected) limit from a dedicated CMS search for this decay [41] from 137/fb to 300/fb integrated luminosity. There is no red line corresponding to the classical search (as in Fig. 5) because we did not perform a detailed recast of [41].
This shows the real strength of the CATHODE approach over the dedicated searches [41; 23; 42]. With the enlarged SR that covers both decay modes, CATHODE only needs to be trained once, independent of the assumption on the BRs, compared to performing a dedicated analysis for each BR assumption.

### Alternate signal model: decays to BSM Higgs

Until now we have applied CATHODE only to models where the position of the bump is known beforehand. But one strength of the technique is that we do not even need to know that. To discuss this further, we now focus on another model that induces the neutralino decay \(\widetilde{\chi}^{0}_{2}\to H\widetilde{\chi}^{0}_{1}\), where \(H\) is one of the additional Higgs bosons introduced by the (N)MSSM, with a mass different from 125 GeV. Because the decay of \(H\) depends on the specific implementation of the SUSY-breaking parameters, we set the branching ratio \(\mathrm{BR}(H\to b\bar{b})=100\%\). To find the signal, CATHODE is applied to different signal regions given by varying the mass hypothesis \(m\) in equation 4, scanning the entire mass range in discrete steps, and the signal significance is determined. To demonstrate this we chose \(m_{H}=100\) GeV and \(m_{\widetilde{g}}=2000\) GeV and show the result in figure 10. Once the signal window has significant overlap with the signal bump, the signal significance is improved enough to show the presence of anomalous events. In a real application this would then warrant further investigation with a dedicated search. Finally, we show how wide the range of \(m_{H}\) is over which CATHODE can still help to find the signal in our dataset with the given choice of features. For this we perform a parameter scan over \(m_{H}\) from 35 GeV to 515 GeV in 10 GeV steps, shown in figure 11. The method reliably reaches signal significances of order 10 up to \(m_{H}\sim 350\) GeV, without using b-tags as otherwise powerful discriminators.

## V Conclusions

In this paper, we have shown how recently developed techniques for weakly supervised resonant anomaly detection can be easily extended to cover anomalies that also live on the tails of distributions. This situation commonly arises in well-motivated weak-scale scenarios such as SUSY, where the cascade decays of heavier BSM particles can produce resonances such as \(Z\)'s and Higgs bosons, while simultaneously populating the tails of features such as \(p_{T}^{\rm miss}\) and \(H_{T}\).

Figure 8: The distribution of the data inside the signal region before the anomaly-score cut is shown in gray. After selecting the top 1% of events in the SR, the remaining signal events are shown in orange, while the remaining background events are shown in blue. The signal corresponds to \(m_{\widetilde{g}}=1700\) GeV.

Figure 7: Sensitivity of CATHODE and the classical strategy. The signal window is set as \(m_{J_{1}}\in[70\text{ GeV},140\text{ GeV}]\). For the blue line \(R_{c}\) is set to allow 1% of events to pass this cut, while the orange line omits the cut completely. The dot-dashed part of the blue line represents parameter points where \(R_{c}\) has to be lowered to allow 5 background events. The shaded region shows one standard deviation around the mean \(S/\sqrt{B}\) obtained from 10 different signal injections. The vertical black line at 2060 GeV indicates the gluino mass that is expected to be excluded by rescaling the expected excluded cross section obtained by the dedicated CMS search for this decay [42] from 35.9/fb to 300/fb integrated luminosity.
Figure 9: Sensitivity of CATHODE for varying branching ratios to \(Z\) bosons for \(m_{\widetilde{g}}=2000\text{ GeV}\). The shaded region shows one standard deviation around the mean \(S/\sqrt{B}\) obtained from 10 different signal injections.

As long as the signal is localized in one feature where the background is smooth, resonant anomaly detection can be brought to bear on these additional features in order to enhance the sensitivity to the signal. As a proof-of-concept demonstration, we applied the state-of-the-art anomaly detection method CATHODE [14] to the SUSY scenario \(pp\to\widetilde{g}\widetilde{g}\), \(\widetilde{g}\to q\bar{q}\widetilde{\chi}_{2}^{0}\), \(\widetilde{\chi}_{2}^{0}\to X\widetilde{\chi}_{1}^{0}\), where \(X\) is either a \(Z\) boson, the Standard Model Higgs, or an additional (N)MSSM Higgs boson. Despite being model-agnostic, we showed that the CATHODE method is competitive with existing, dedicated, cut-based searches [41; 23; 42], because -- being inherently multivariate -- it takes advantage of correlations between features. Moreover, whereas each decay scenario required a separate, optimized analysis, CATHODE -- being model-agnostic -- is able to simultaneously target them all. In this work we considered two different feature sets for the CATHODE algorithm, as shown in eqs. (2) and (3). These were motivated by the SUSY scenarios we considered, and it would be interesting to generalize our study beyond these feature sets, both to increase the degree of model independence of the method and possibly to enhance the sensitivity to the SUSY signals considered here. For example, our benchmark signals all come with \(\sim 4\) additional jets from the gluino decays, and their detailed kinematic distributions (instead of just the aggregate feature \(H_{T}\)) may offer additional discriminating power against the QCD background. Adding features related to additional jets in the event may also give us more sensitivity to spectra not explicitly considered here, for example where the NLSP mass is not so close to the gluino mass. As long as \(m_{\rm LSP}+m_{Z}\ll m_{\tilde{g}}\), the \(Z\) will still be boosted, but the extra jets will get harder as \(m_{\rm NLSP}\) moves away from \(m_{\tilde{g}}\). All in all, using modern methods for resonant anomaly detection such as CATHODE allows for a broader and more efficient coverage of the parameter space of physics beyond the Standard Model. With much more data on the way, methods like these should prove indispensable for maximizing the discovery potential of the LHC.

## VI Acknowledgments

DS is supported by DOE grant DOE-SC0010008. CK would like to thank the Baden-Württemberg-Stiftung for financing through the program _Internationale Spitzenforschung_, project _Uncertainties - Teaching AI its Limits_ (BWST_IF2020-010). GK acknowledges support by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy - EXC 2121 Quantum Universe - 390833306.

## Appendix A Recasting CMS

We describe how the background samples were simulated as closely as possible to an existing search and verified. We recreate the CMS SUSY search CMS-SUS-19-013 [23].

### A.1 Recreating CMS-SUS-19-013

The recreation of CMS-SUS-19-013 [23] follows the most important analysis steps of the original publication.
The number of events is set to correspond to an integrated luminosity of \(\mathcal{L}_{\rm int}=137\ {\rm fb}^{-1}\). First, a set of remaining cuts is applied to select \(Z\) candidates; then the background estimation is recreated before the statistical analysis is performed.

Figure 11: Parameter scan of \(m_{H}\) with \(m_{\tilde{g}}=2000\ {\rm GeV}\) to show which signals CATHODE can help find in the dataset. The shaded region shows one standard deviation around the mean \(S/\sqrt{B}\) obtained from 10 different signal injections.

Figure 10: Significance for a parameter scan over the mass hypothesis in 5 GeV steps, when the mass is not known a priori. The shaded region shows one standard deviation around the mean \(S/\sqrt{B}\) obtained from 10 different signal injections. Masses are chosen as \(m_{\tilde{g}}=2000\ {\rm GeV}\) and \(m_{H}=100\ {\rm GeV}\).

The following cuts are applied to select hadronically decaying \(Z\) bosons:

1. soft-dropped \(m_{\rm jet}\in[40~{}{\rm GeV},140~{}{\rm GeV}]\) for the 2 highest-\(p_{T}\) AK8 jets
2. \(\Delta R_{Z,b}>0.8\) between the second-highest-\(p_{T}\) AK8 jet \(Z\) candidate and any b-tagged jet, where the angular separation is defined as \(\Delta R=\sqrt{\Delta\phi^{2}+\Delta\eta^{2}}\)

The resulting \(p_{T}^{\rm miss}\) spectrum is shown in figure 12, which agrees with the spectrum shown in the original publication within uncertainties. The background estimation consists of the normalisation and the shape estimation. The signal region (SR) is defined as \(m_{\rm jet}\in[70~{}{\rm GeV},100~{}{\rm GeV}]\). First, one demands the subleading AK8 jet to be in the SR. Then a linear function is fitted to the \(m_{\rm jet}\) spectrum of the leading AK8 jet outside its SR. The nominal yield \({\cal B}_{\rm norm}\) is obtained by integrating the linear function in the SR. The statistical error of the yield is obtained from the spread of pseudo-experiments sampled from the fit. In addition to the linear function, Chebyshev polynomials up to fourth order are fitted; the largest deviation from the nominal yield is then assigned as an additional uncertainty. The background \(p_{T}^{\rm miss}\) shape is obtained from the sideband (SB) with both AK8 jets outside the SR. The content of the \(i\)th \(p_{T}^{\rm miss}\) bin is denoted as \(N_{i}^{\rm SB}\). The transfer factor from the SB to the SR is then calculated as

\[{\cal T}\equiv\frac{{\cal B}_{\rm norm}}{\sum_{i}N_{i}^{\rm SB}}=0.206\pm 0.023, \tag{10}\]

which agrees with the original publication within uncertainties. The expected background in bin \(i\) is then

\[{\cal B}_{i}={\cal T}N_{i}^{\rm SB}. \tag{11}\]

RooStats [43] is used for statistical modeling. It takes \(N_{i}^{\rm SB}\) with statistical errors, \({\cal T}\) and \(\Delta{\cal T}\) to model the background in the SR with uncertainties. The signal model contains signal events that pass all cuts and is rescaled to the approximate NNLO+NNLL cross section [44]. The overall uncertainty of the cross section is applied to all signal bins. The resulting statistical model is then evaluated with the CL\({}_{s}\) approach and the asymptotic form of the one-sided profile likelihood test statistic. This is used to obtain the 95% C.L. cross sections. The limits are shown in figure 13 for the integrated luminosity \({\cal L}_{\rm int}=137~{}{\rm fb}^{-1}\) and in figure 14 for \({\cal L}_{\rm int}=300~{}{\rm fb}^{-1}\).
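The background estimate of equations (10) and (11) is simple enough to sketch directly. The per-bin sideband yields below are hypothetical and chosen only to reproduce the quoted transfer factor.

```python
import numpy as np

def background_prediction(n_sb, b_norm):
    """Transfer factor of Eq. (10) and per-bin prediction of Eq. (11)."""
    n_sb = np.asarray(n_sb, dtype=float)
    T = b_norm / n_sb.sum()
    return T, T * n_sb

# Hypothetical sideband yields per missing-pT bin and fitted SR yield B_norm.
T, B_bins = background_prediction([820, 310, 95, 22, 6], b_norm=258.0)
print(round(T, 3))        # 0.206, the central value quoted in the text
print(B_bins.round(1))    # expected background per missing-pT bin
```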
We use the latter dataset for the application of the ML technique, since the accuracy improves greatly with more data points to learn from, while this luminosity is within reach of the collider in the near future.

Figure 14: Results of the classical search for 300/fb integrated luminosity.

Figure 12: \(p_{T}^{\rm miss}\) spectrum of the three leading background processes. The background of the same three processes from the CMS publication is shown in red. The variation of the cross section due to changing the energy scale by factors of 1/2 and 2, as computed by MadGraph, is assigned as a systematic uncertainty, added to the statistical errors in quadrature, and shown as the error bars.

Figure 13: Recreation of CMS-SUS-19-013 [23]. The red dashed line denotes the expected limits of the original CMS search. The black dashed line shows the expected limits of the recreation.
2308.16857
IoMT-Blockchain based Secured Remote Patient Monitoring Framework for Neuro-Stimulation Device
Biomedical Engineering's Internet of Medical Things (IoMT) is helping to improve the accuracy, dependability, and productivity of electronic equipment in the healthcare business. Real-time sensory data from patients may be delivered and subsequently analyzed through rapid development of wearable IoMT devices, such as neuro-stimulation devices with a range of functions. Data from the Internet of Things is gathered, analyzed, and stored in a single location. However, single-point failure, data manipulation, privacy difficulties, and other challenges might arise as a result of centralization. Due to its decentralized nature, blockchain (BC) can alleviate these issues. The viability of establishing a non-invasive remote neurostimulation system employing IoMT-based transcranial Direct Current Stimulation is investigated in this work (tDCS). A hardware-based prototype tDCS device has been developed that can be operated over the internet using an android application. Our suggested framework addresses the problems of IoMTBC-based systems, meets the criteria of real-time remote patient monitoring systems, and incorporates literature best practices in the relevant fields.
Md Sakib Ullah Sourav, Mohammad Sultan Mahmud, Md Simul Hasan Talukder, Rejwan Bin Sulaiman, Abdullah Yasin
2023-08-31T16:59:58Z
http://arxiv.org/abs/2308.16857v1
# IoMT-Blockchain based Secured Remote Patient Monitoring Framework for Neuro-Stimulation Device

###### Abstract

Biomedical Engineering's Internet of Medical Things (IoMT) is helping to improve the accuracy, dependability, and productivity of electronic equipment in the healthcare business. Real-time sensory data from patients may be delivered and subsequently analyzed thanks to the rapid development of wearable IoMT devices, such as neuro-stimulation devices with a range of functions. Data from the Internet of Things is gathered, analyzed, and stored in a single location. However, single-point failure, data manipulation, privacy difficulties, and other challenges might arise as a result of centralization. Due to its decentralized nature, blockchain (BC) can alleviate these issues. The viability of establishing a non-invasive remote neurostimulation system employing IoMT-based transcranial direct current stimulation (tDCS) is investigated in this work. A hardware-based prototype tDCS device has been developed that can be operated over the internet using an Android application. Our suggested framework addresses the problems of IoMT-BC-based systems, meets the criteria of real-time remote patient monitoring systems, and incorporates literature best practices in the relevant fields.

blockchain, biomedical device, e-Health, IoMT, neuro-stimulation, tDCS.

## 1 Introduction

The Internet of Things (IoT), Artificial Intelligence (AI), Machine Learning (ML), Robotics, Blockchain, and other smart technologies have revolutionized engineering and manufacturing. The healthcare industry is no exception. The healthcare system has been boosted by new technology, making it more viable and accessible to the general public. Since the 1970s, there has been a substantial change in the way technology is used in this field. Because of the rapid expansion and use of the Internet of Things (IoT) and cloud computing, today's generation, known as Industry 4.0, is entirely reliant on intelligent gadgets and their use. IoT is used in a variety of fields, including smart cities, smart homes, smart grids, security and emergency situations, smart agriculture, smart monitoring, and so on [1]. Many sensors are employed in the creation of various intelligent wearable devices that aid in the monitoring of human activities as well as the recording of health data. In the field of biomedical engineering, the Internet of Medical Things (IoMT) allows doctors to treat patients remotely. Models must be in place, however, to guarantee that the treatments are carried out properly, considering the security issues connected with the IoMT [2]. The recent events of COVID-19 demonstrate the need for remote patient monitoring (RPM) systems, which protect vulnerable patients by reducing the need for visits to hospitals and other clinical settings, and reduce risk to physicians by limiting physical patient contact [3]. For many years, neurorehabilitation has employed transcranial direct current stimulation (tDCS) to effectively boost or reduce mental function and learning [4], and it is considered to be safe and widely accepted [5]. Several studies have looked at using transcranial direct current stimulation (tDCS) to treat neurological illnesses, including Parkinson's disease and other movement-related disorders [6, 7].
Such therapies have been shown to be successful in a wide spectrum of individuals with neurological diseases, with tDCS therapy improving the quality of life (QoL) of people who would otherwise suffer greatly [8]. tDCS systems require researchers to collaborate with patients in order to achieve the desired outcomes. This is due to the fact that these systems must be able to accurately target and focus on certain parts of the brain in order to stimulate them [9]. Furthermore, tDCS devices can be quite costly, limiting their application to specialized units with available resources [10]. As a result, new methods for conducting tDCS have been developed in order to improve patient outreach. For instance, the approaches described by Sourav et al. [11] and Samuel et al. [12] are based on an open-source framework and deliver the same treatments to a larger number of patients. However, such systems are still in the early stages of research and are subject to restrictions such as the accuracy of the real output currents and the overall effectiveness of the system. Treatment monitoring and delivery are crucial components of any new tDCS system, as tDCS therapies require specialized clinical supervision. More patients might be served by providing remote and cloud-based services for such therapy. This research will look into how clinicians can use tDCS remotely using cloud-based tools. Legal and ethical considerations, as well as the requirement for safe testing and development prior to clinical trials, must all be considered in such a system. As a consequence, these automated remote solutions must be safe and secure, with no risk to patient privacy and no possibility of abuse or misuse of the tDCS treatment. This project will look into using off-the-shelf components in the hardware design to keep costs down and make the device more accessible to patients. Doctors rely on sensitive healthcare data to monitor and diagnose their patients' health. Data acquired from devices or diagnoses of patients is frequently kept in a centralized system called Electronic Health Records (EHR) for later study. Storing all of a patient's data in one location increases the danger of data loss, manipulation, and hacking. Furthermore, storing all data in a central repository makes it impossible to establish transparency. Moreover, these records should be individualized and accessible to both patients and physicians from any location, which may be accomplished with the use of a decentralized storage system. However, data decentralization must ensure that data is not tampered with and is accessible to everyone in a secure manner. To address all of these issues, blockchain, a secure and consistent decentralized storage technology, can be introduced. Nonetheless, integrating blockchain with IoT in healthcare is difficult, and there are only a few studies on the topic. Our contribution in this paper is two-fold.

* We created a hardware prototype of a tDCS device with unique characteristics that may be used at home with real-time guidance and instructions from a doctor.
* To inform our suggested model, we reviewed the current literature that combines IoMT with blockchain.

This article is structured in the following sections. Brain stimulation and devices are described in Section 2. Section 3 discusses the optimal conditions reported in the literature for tDCS treatments. Section 4 highlights the specifications of our proposed framework. Finally, Section 5 concludes the work with future directions.
## 2 Brain Stimulation and Devices

Brain stimulation is emerging as a highly promising treatment option for a wide range of disorders, most notably epilepsy. This cutting-edge approach involves the precise application of scheduled stimulation to specific cortical or subcortical targets, facilitated by commercial devices designed to deliver electrical pulses at designated intervals. The primary goal of this technique is to effectively modify the intrinsic neurophysiologic properties of epileptic networks, potentially offering transformative therapeutic outcomes. Among the extensively researched targets for scheduled stimulation, the anterior nucleus of the thalamus and the hippocampus have garnered considerable attention. Studies have demonstrated that activating the anterior nucleus of the thalamus can lead to a significant reduction in seizures, even months after the stimulator implantation [13]. Moreover, exciting progress has been made in treating cluster headaches (CH) through the use of temporary stimulating electrodes at the sphenopalatine ganglion (SPG), with patients reporting rapid pain relief within minutes of the stimulation [14]. The advancement of brain stimulation techniques has not been limited to invasive approaches involving implanted electrodes. Pioneering researchers have taken non-invasive strides by engineering a transparent zirconia "window" implanted in the skulls of mice. This ingenious method allows optical waves to penetrate more deeply, akin to the principles of optogenetics. By leveraging this non-invasive technique, researchers can precisely stimulate or inhibit individual neurons, broadening the horizons of brain research and fostering potential therapeutic breakthroughs.

### Invasive Brain Stimulation

Invasive techniques involve surgical procedures to implant electrodes or other devices directly into the brain to deliver electrical impulses or stimulation. The invasive stimulation devices are listed in Table 1.

\begin{table} \begin{tabular}{|c|c|} \hline **Serial No** & **Name of the invasive stimulation** \\ \hline 1 & Deep Brain Stimulation (DBS) \\ \hline 2 & Epidural Cortical Stimulation \\ \hline \end{tabular} \end{table}
Table 1: Invasive neurostimulation devices.

#### 2.1.1 Deep Brain Stimulation (DBS)

Deep Brain Stimulation (DBS) is a surgical procedure and neuromodulation technique used to treat certain neurological conditions by delivering electrical impulses to specific brain regions [15]. It involves implanting a small, battery-operated medical device, often referred to as a "brain pacemaker," into the brain. The device consists of electrodes that are carefully positioned in targeted brain areas and connected to a pulse generator, typically implanted under the skin in the chest or abdomen [16]. The working principle of DBS revolves around modulating abnormal neural activity in the brain circuits associated with various movement and neuropsychiatric disorders [17]. The electrical stimulation delivered by the electrodes helps regulate the firing patterns of neurons, effectively suppressing or facilitating specific brain pathways, which can alleviate symptoms and improve overall brain function.

#### 2.1.2 Epidural Cortical Stimulation

Epidural Cortical Stimulation (ECS) is a relatively novel brain stimulation technique that involves the placement of electrodes on the surface of the brain, specifically on the cerebral cortex, and delivering electrical impulses to modulate brain activity [18].
ECS is a form of neuromodulation and is used to explore brain function, investigate neural circuits, and potentially treat certain neurological and psychiatric disorders [19]. The procedure for ECS typically involves a surgical implantation of a thin sheet or grid of electrodes directly on the surface of the brain, just beneath the dura mater (the outermost protective membrane surrounding the brain). These electrodes can then be used to apply electrical currents to the cortical surface, allowing researchers or clinicians to stimulate or inhibit specific brain regions or neural circuits. ECS is believed to work by directly influencing the electrical activity of the targeted brain areas [20]. By modulating neural firing patterns and synaptic transmission, ECS can potentially alter brain network dynamics and influence various cognitive and motor functions. As a research tool, ECS allows scientists to study brain function with high spatial and temporal resolution, providing valuable insights into brain organization and connectivity. In a clinical context, ECS is being investigated as a potential treatment option for conditions like epilepsy, chronic pain, movement disorders, and even certain cases of traumatic brain injury or stroke rehabilitation [21]. The therapeutic potential of ECS is still being explored, and its use in clinical settings is limited to specialized centers and research studies. It is essential to note that ECS is an invasive procedure and carries potential risks, including infection, bleeding, and damage to brain tissue. As with other brain stimulation techniques, careful patient selection, thorough evaluation, and appropriate post-operative care are crucial to ensure safety and optimize outcomes. Additionally, since the field of neuroscience and neuromodulation is continuously evolving, further advancements and updates in ECS research are to be expected.

### 2.2 Non-invasive Brain Stimulation

Non-invasive techniques do not require surgery and involve the application of external stimuli to the scalp or other peripheral areas to influence brain activity. The non-invasive neurostimulation devices are listed in Table 2.

#### 2.2.1 Transcranial Magnetic Stimulation (TMS)

Transcranial Magnetic Stimulation (TMS) is a non-invasive medical procedure used to treat certain neurological and psychiatric conditions [22]. It involves the use of electromagnetic induction to create small electrical currents in specific areas of the brain. During a TMS session, a magnetic coil is placed against the scalp of the patient. When an electrical current passes through the coil, it generates a magnetic field that can penetrate the skull and stimulate the underlying brain regions [23]. The stimulation can either increase or decrease the activity of the targeted brain area, depending on the frequency and intensity of the magnetic pulses. Its structure is depicted in Figure 1.

There are two main types of TMS:

* Repetitive Transcranial Magnetic Stimulation (rTMS): In this method, multiple magnetic pulses are delivered in rapid succession to the targeted brain region. rTMS can either increase or decrease neuronal activity and is often used as a therapeutic tool for various neurological and psychiatric disorders.
* Deep Transcranial Magnetic Stimulation (dTMS): dTMS is a variation of TMS that uses H-coils to target deeper brain structures. It is commonly used to treat conditions like depression and obsessive-compulsive disorder.
TMS has shown promise as a treatment option for various conditions, including depression, anxiety disorders, migraines, and certain types of chronic pain. However, its exact mechanisms of action are still not fully understood, and research in this field is ongoing.

Figure 1: TMS of the brain.

\begin{table} \begin{tabular}{|c|c|} \hline **Serial No** & **Name of the non-invasive stimulation** \\ \hline 1 & Transcranial Magnetic Stimulation (TMS) \\ \hline 2 & Transcranial Direct Current Stimulation (tDCS) \\ \hline 3 & Transcranial Alternating Current Stimulation (tACS) \\ \hline 4 & Transcranial Random Noise Stimulation (tRNS) \\ \hline \end{tabular} \end{table} Table 2: Non-invasive neurostimulation devices.

#### 2.2.2 Transcranial Direct Current Stimulation (tDCS)

Transcranial direct current stimulation (tDCS) is a non-invasive brain stimulation technology that is used to control the excitability of the central nervous system in individuals [24]. The goal of central nervous system stimulation is to alter the firing of neurons in the brain. The impact of the altered neurons might be beneficial or harmful to a patient. Many studies have examined the best testing criteria for individuals receiving tDCS therapy; the factors include session length (minutes), current dosage (mA), and session schedule. The goal is to figure out how to provide the best circumstances for long-term cognitive plasticity enhancement [25]. Bikson et al. [26] defined the safety limits for tDCS treatments, recommending a treatment length of 20 minutes on average, with a range of 5-30 minutes. Thair et al. [27] verified that the duration of therapy is determined by the neurophysician's prescription for each session. As a result, any tDCS system that is built must be capable of operating optimally for the duration of the therapy, which might be up to 30 minutes [26, 27].

In addition, studies have been undertaken to assess whether current tDCS levels are both safe for patients and give enough stimulation to achieve beneficial effects. According to Parazzini et al. [28], 1 mA causes no brainstem interference and is therefore a suitable dosage for sustained tDCS therapy of up to 30 minutes. Parazzini et al. [29] showed that a current dosage of less than 2 mA has no effect on the heart, indicating a safe current range of 1 to 2 mA. A doctor would once again prescribe a specific dose for the patient [28, 29]. Finally, the number of sessions required to attain the best neurological and cognitive results is an important aspect of the treatment. According to Castillo-Saavedra et al. [30], the ideal number of sessions per week was five. Loo et al. [20] found similar outcomes with treatments lasting between two and eight weeks; after week six, however, no additional gains were detected. There was a potential for modest unfavorable effects on the patients if the number of sessions was surpassed [31]. As a result, the platform must include a scheduling or control mechanism to ensure that the patient is protected according to the doctor's orders.
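These published limits translate directly into a pre-session check that a remote tDCS platform can run before enabling stimulation. The following minimal sketch is our own illustration rather than part of the authors' system; the function and constant names are assumed, and the numeric limits are the ones quoted above from [26, 28, 29, 30]:

```python
# Illustrative pre-session safety check (names are hypothetical);
# the limits follow the values reported in [26], [28, 29], and [30].
DURATION_RANGE_MIN = (5, 30)     # session length in minutes [26]
CURRENT_RANGE_MA = (1.0, 2.0)    # current dosage in mA [28, 29]
MAX_SESSIONS_PER_WEEK = 5        # ideal weekly schedule [30]

def validate_session(duration_min, current_ma, sessions_this_week):
    """Return a list of violations; an empty list means the
    prescribed session is within the reported safety limits."""
    violations = []
    if not DURATION_RANGE_MIN[0] <= duration_min <= DURATION_RANGE_MIN[1]:
        violations.append("session duration outside 5-30 minutes")
    if not CURRENT_RANGE_MA[0] <= current_ma <= CURRENT_RANGE_MA[1]:
        violations.append("current outside the 1-2 mA safe range")
    if sessions_this_week >= MAX_SESSIONS_PER_WEEK:
        violations.append("weekly session limit reached")
    return violations

# Example: a prescribed 20-minute, 1.5 mA session, third one this week.
print(validate_session(20, 1.5, 2))  # -> [] (session may proceed)
```

In a deployed system, the prescribed values would come from the doctor's orders stored on the platform, and any violation would block the stimulation hardware rather than merely report a message.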
To verify that any tDCS system is effective in treating a patient, randomized sham tDCS trials have been conducted, in which the device leads the patient to believe that it is delivering current when, in actuality, no current is applied; this is known as a sham or placebo tDCS study [32, 33]. While such papers illustrate how to conduct sham tDCS trials, they do not go into detail about a device-specific way that would allow the hardware platform to automate the procedure by giving patients both sham and genuine treatments. Previous research has only suggested a random crossover mechanism in the midst of a study by randomly assigning patients to sham or genuine tDCS treatments [32]. As a result, more research into automating the integration of placebo and genuine therapies into the hardware platform is required.

#### 2.2.3 Transcranial Alternating Current Stimulation (tACS)

Transcranial Alternating Current Stimulation (tACS) is a cutting-edge non-invasive brain stimulation technique that holds promise for understanding and modulating brain activity [34]. By applying weak alternating electrical currents to the scalp, tACS aims to influence brain oscillations and neural synchronization at specific frequencies [35]. The alternating current induces changes in the excitability of neurons, leading to the entrainment of brainwave patterns associated with various cognitive functions. Unlike Transcranial Direct Current Stimulation (tDCS), which delivers a constant electrical current, tACS specifically targets the frequency of the brain's natural electrical rhythms, allowing researchers to fine-tune the effects and achieve more precise and frequency-specific brain modulation. This makes tACS a compelling tool for investigating the causal relationship between brain oscillations and cognitive processes and exploring its potential applications in enhancing cognition or treating neurological and psychiatric disorders.

#### 2.2.4 Transcranial Random Noise Stimulation (tRNS)

Transcranial Random Noise Stimulation (tRNS) is another non-invasive brain stimulation technique that involves applying random electrical noise to the scalp to modulate brain activity [36]. Similar to Transcranial Alternating Current Stimulation (tACS), tRNS aims to influence neural excitability and brain oscillations. However, unlike tACS, which uses a specific alternating current frequency, tRNS delivers random electrical noise covering a broad frequency range [37]. The random noise in tRNS is thought to increase the overall neural excitability in the targeted brain regions, making neurons more responsive to incoming stimuli and potentially enhancing cortical plasticity. The mechanism of action is not fully understood, but it is believed that tRNS may cause random firing of neurons, leading to a kind of "stochastic resonance" effect, where noise enhances the detection and transmission of weak signals in the brain. One advantage of tRNS is that it does not require fine-tuning the stimulation frequency, making it a simpler and potentially more broadly applicable technique compared to tACS [38]. Additionally, tRNS may have advantages in certain situations where the optimal frequency for brain modulation is unknown or where multiple frequencies may be involved in a particular cognitive process. tRNS is being explored in research and clinical settings to study its effects on various cognitive functions, learning, memory, and motor skills, and as a potential treatment option for neurological and psychiatric disorders. As with other brain stimulation techniques, tRNS also requires careful investigation and supervision to ensure its safety and efficacy in specific applications.
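The three current-based techniques above differ mainly in the waveform of the injected current: tDCS applies a constant current, tACS a sinusoid at a chosen frequency, and tRNS broadband random noise. The following short NumPy sketch contrasts the three; the amplitudes, sampling rate, and 10 Hz frequency are illustrative values of our choosing, not clinical prescriptions:

```python
import numpy as np

fs = 1000                       # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)   # one second of stimulation

i_tdcs = np.full_like(t, 1.5)               # tDCS: constant 1.5 mA direct current
i_tacs = 1.0 * np.sin(2 * np.pi * 10 * t)   # tACS: 10 Hz sinusoidal current
rng = np.random.default_rng(seed=0)
i_trns = 0.5 * rng.standard_normal(t.size)  # tRNS: broadband random noise

# In the frequency domain: tDCS has only a DC component, tACS a single
# spectral peak at the chosen frequency, and tRNS a flat, wide spectrum.
```

This waveform view also explains the practical remark above: tACS requires choosing a stimulation frequency, while tRNS spreads energy across many frequencies and therefore needs no such tuning.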
## 3 Proposed IoMT-Blockchain-Based tDCS Framework

Our suggested tDCS platform includes cloud communication as a crucial component to ensure both patient and physician privacy and confidentiality. Patients are treated remotely after receiving the specialty device, and doctors deal with patients remotely via video conference, according to studies [39]. Although this framework provided a way to deliver a patient's required dosage in accordance with existing tDCS safety rules and recommendations [5, 26], it did not provide a guideline for obtaining real-time data about individual treatment parameters or details regarding the patient's condition. It also does not cover further safety features such as the device's ability to administer the right amount to the patient or the doctor's ability to operate it remotely. This research presents a unique IoMT-blockchain-based architecture for bi-directional communication between a patient's tDCS device held remotely (such as at home) and a physician's software interface. Figure 2 demonstrates the treatment process in the proposed tDCS framework. Patients and doctors are the two sorts of users who connect to the system via their cell phones. The system contains all of the components of a distributed network, together with a blockchain network at the network layer, which offers all of the distributed ledger technology's capabilities.

Figure 2: Proposed framework for the IoMT-blockchain based neuro-stimulation system.

### 3.1 Components of the Proposed tDCS Framework

The components of this proposed framework are briefly explained in this section.

**Patients**: Each patient will become a node in the network. Various IoT devices will be placed on their bodies to read health data. The data from the sensors will be collected and processed by a mobile app on the patient's smartphone before being sent to the network. As a result, in our system, the mobile app may be thought of as a virtual patient.

**Doctors**: Doctors are registered individuals who are in charge of monitoring and treating the patients in the system.

**Blockchain Network**: The system will have a blockchain network that will connect all of the system's components. To be included in this network, all of the nodes in it must be confirmed. Because the data must be accessible to everybody, the network will be peer-to-peer, and the blockchain will be permissioned. The permissioned blockchain keeps rogue nodes out of the network, and the PBFT consensus process should be used to ensure the legitimacy of each transaction in the network. Healthcare data will not be directly kept in the blockchain; rather, data storage and access will be recorded as blockchain transactions. The patients' processed data should be saved on a cloud server. To provide safe data access, all users should have digital signatures. For real-time monitoring, each transaction must be coupled with several smart contracts, which will be activated in response to data values and peer behavior.

**Cloud Storage**: Because monitoring data is acquired on a continual basis and must be retained in the system, the amount of healthcare data will grow with time. If the data were stored on the blockchain ledger itself, the devices at the user's end would require a large amount of storage; furthermore, a node's disconnection might result in data loss. As a result, cloud storage is used to store the actual data, with the blockchain network storing the link to the data in the cloud server as part of the transaction.
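To make this storage design concrete, the sketch below shows one way the "link plus transaction" idea could look: only a URL and a content hash go on chain, so the ledger provides tamper evidence for records that physically live in cloud storage. This is our own minimal illustration with hypothetical names (the patient ID, URL, and field layout are invented), not the authors' implementation:

```python
import hashlib
import json
import time

def record_upload(ledger, patient_id, cloud_url, payload_bytes):
    """Append a transaction linking to a cloud-stored health record.
    The data itself stays in the cloud; the chain keeps the URL and a
    SHA-256 digest so later tampering with the record is detectable."""
    tx = {
        "type": "data_upload",
        "patient": patient_id,
        "cloud_url": cloud_url,
        "sha256": hashlib.sha256(payload_bytes).hexdigest(),
        "timestamp": time.time(),
        "prev": ledger[-1]["hash"] if ledger else None,  # chain linkage
    }
    tx["hash"] = hashlib.sha256(
        json.dumps(tx, sort_keys=True).encode()).hexdigest()
    ledger.append(tx)
    return tx

ledger = []
record_upload(ledger, "patient-42", "https://cloud.example/records/1",
              b'{"current_mA": 1.5, "duration_min": 20}')
```

In the actual framework, appending to the ledger would additionally require the user's digital signature and PBFT agreement among the permissioned peers, and a matching transaction would be written for every data-access event as well.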
### 3.2 Operations in the Framework

The tDCS architecture we propose focuses on continuously reading health data, storing it, and notifying authorized people in the event of an emergency. As a result, we can highlight three key processes that must be completed within the constraints.

1. **Doctors' Registration and Assignment**: Patients and doctors will request registration by providing all of their information to the hospital. Then, the hospital authority will validate them using a defined smart contract in the system. Because of smart contracts, the system is responsive and acts in real time [40]. After being validated by the smart contracts, the information of the patients and doctors will be added to the ledger of the blockchain. Lastly, hospital authorities will assign a doctor to a patient.

2. **Data collection and storage on a continuous basis**: All health data is constantly gathered and kept in the system, and any odd occurrence must be reported to the appropriate parties. Raw sensor and device values will be delivered to the patient's smartphone or tablet through a mobile application. The raw data is then formatted and processed by the mobile application. The data is processed and then passed through a smart contract for additional analysis. Because the system requires real-time monitoring, smart contracts should be developed to detect any abnormalities in the patient's condition based on the monitoring data. The data from the smart contract should be kept in the cloud storage after it has been analyzed. Each information upload and data access event must be recorded in the ledger as a transaction. All monitoring data should be kept in the cloud (Figure 3) and adhere to the Health Insurance Portability and Accountability Act (HIPAA) procedure to prevent data mapping.

3. **Data Requests and Access**: Doctors may also view a patient's monitoring data, which is an important feature of the system. A request is issued to the system when a doctor wants to view a patient's data. If the patient wants the doctor to have access to his or her monitoring data, he or she will provide the doctor with his or her key in response to the request. When a doctor submits a request to the system, a smart contract is launched to verify the doctor's identity before granting access to the patient's data. After being authenticated in the system, the doctor may access the patient's data from the ledger.

Figure 3: tDCS session data stored in the cloud.

## 4 Hardware Prototype of the Proposed tDCS Framework

The system framework is depicted in Figure 4 as a block diagram. The essential component of a tDCS device is the circuit board, which is controlled by an ATMEGA328P processor. The IC panel is made up of current-regulating ICs (LM334), and the amount of current supplied by each IC varies. On the basis of the current requirements, the CPU adjusts the supply power to power up the ICs. The CPU is linked with a real-time clock and an SD card module, so the stimulation period is counted in real time, and the real-time data is stored in the cloud module. The ATMEGA328P is also linked to a Bluetooth device; Bluetooth establishes a link between the mobile app and the circuit board. The anode is the positive electrode; it connects to the IC panel and delivers the output current to the scalp. The cathode is the negative electrode and is connected to ground. A power source is included with the board to provide the system with the necessary electricity.

Figure 4: Block diagram of the system.

We utilized an LM334 with a digital potentiometer (digi-pot) to control small amounts of current. For different values of the digi-pot, we receive varied amounts of current from this circuit (Figure 5). The principle for obtaining the desired amount of current at the electrodes of this device is sketched below.

Figure 5: Current regulating circuit.
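As a rough illustration of that principle, an LM334 sets its output current from a sense resistance, nominally I_SET ≈ 67.7 mV / R_SET at 25 °C according to its datasheet, so stepping the digi-pot through different resistances selects the current. The sketch below uses assumed digi-pot parameters (8-bit, 100 Ω full scale); these are our own illustrative choices, not values from the authors' schematic:

```python
# Digi-pot step -> electrode current, using the LM334's nominal set
# relation at room temperature. All component values are assumed.
V_SET_MV = 67.7     # LM334 nominal set voltage at 25 C, in mV
R_MAX_OHM = 100.0   # assumed full-scale digi-pot resistance
STEPS = 256         # assumed 8-bit digi-pot

def current_ma(step):
    """Output current in mA for a digi-pot step in 1..STEPS."""
    r_set = R_MAX_OHM * step / STEPS  # wiper resistance at this step
    return V_SET_MV / r_set           # mV / ohm = mA

# Steps whose currents fall inside the 1-2 mA range from Section 2.2.2
# (equivalently, set resistances between roughly 34 and 68 ohms):
safe_steps = [s for s in range(1, STEPS + 1) if 1.0 <= current_ma(s) <= 2.0]
print(min(safe_steps), max(safe_steps))  # -> 87 173
```

On the prototype itself, the equivalent lookup would run on the ATMEGA328P, with the processor writing the chosen step to the digi-pot and logging the resulting current through the real-time clock and cloud module.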
One of the most crucial components of this tDCS device is the headband (Figure 6). The headband has a numbered base that runs from (-V) to (+V). The base is made up of two hands that are numbered from (I) to (V). These two hands are able to move in tandem with the base. Each hand has two square compartments in which electrodes are placed. These square boxes may also be moved around freely, allowing electrodes to be placed anywhere on the user's head, since different disorders necessitate different electrode locations.

Figure 6: tDCS headband.

The circuit board, with all of the system's components, can be viewed in Figure 7. The board has a real-time clock and a cloud module, allowing the system to record the delivered current in real time along with the date. This information will be useful to both the user and the clinician for future treatment.

Figure 7: The circuit board of the device.

To control this tDCS device, we created an Android app (Figure 8). This software uses Bluetooth to communicate with the circuit board, and from the app we can issue commands for the required current.

Figure 8: Mobile app interface.

## 5 Conclusion and Future Direction

We intended to address a pressing issue that requires more attention in the field of biomedical engineering research. This research focuses on using blockchain and Internet of Medical Things (IoMT) technologies to provide efficient and secure remote patient monitoring (RPM) in tDCS devices. In the future, the prototype should be enhanced further so that it may be utilized in clinical trials and provide a more seamless experience for physicians. The IoMT-blockchain-based neurostimulation solution may also be combined with machine learning technologies to automatically customize smart contracts for each patient. Such integration is something we will work on in the future.
2309.08798
D3: Data Diversity Design for Systematic Generalization in Visual Question Answering
Systematic generalization is a crucial aspect of intelligence, which refers to the ability to generalize to novel tasks by combining known subtasks and concepts. One critical factor that has been shown to influence systematic generalization is the diversity of training data. However, diversity can be defined in various ways, as data have many factors of variation. A more granular understanding of how different aspects of data diversity affect systematic generalization is lacking. We present new evidence in the problem of Visual Question Answering (VQA) that reveals that the diversity of simple tasks (i.e. tasks formed by a few subtasks and concepts) plays a key role in achieving systematic generalization. This implies that it may not be essential to gather a large and varied number of complex tasks, which could be costly to obtain. We demonstrate that this result is independent of the similarity between the training and testing data and applies to well-known families of neural network architectures for VQA (i.e. monolithic architectures and neural module networks). Additionally, we observe that neural module networks leverage all forms of data diversity we evaluated, while monolithic architectures require more extensive amounts of data to do so. These findings provide a first step towards understanding the interactions between data diversity design, neural network architectures, and systematic generalization capabilities.
Amir Rahimi, Vanessa D'Amario, Moyuru Yamada, Kentaro Takemoto, Tomotake Sasaki, Xavier Boix
2023-09-15T22:45:02Z
http://arxiv.org/abs/2309.08798v1
# D3: Data Diversity Design for Systematic Generalization in Visual Question Answering ###### Abstract Systematic generalization is a crucial aspect of intelligence, which refers to the ability to generalize to novel tasks by combining known subtasks and concepts. One critical factor that has been shown to influence systematic generalization is the diversity of training data. However, diversity can be defined in various ways, as data have many factors of variation. A more granular understanding of how different aspects of data diversity affect systematic generalization is lacking. We present new evidence in the problem of Visual Question Answering (VQA) that reveals that the diversity of simple tasks (i.e. tasks formed by a few subtasks and concepts) plays a key role in achieving systematic generalization. This implies that it may not be essential to gather a large and varied number of complex tasks, which could be costly to obtain. We demonstrate that this result is independent of the similarity between the training and testing data and applies to well-known families of neural network architectures for VQA (i.e. monolithic architectures and neural module networks). Additionally, we observe that neural module networks leverage all forms of data diversity we evaluated, while monolithic architectures require more extensive amounts of data to do so. These findings provide a first step towards understanding the interactions between data diversity design, neural network architectures, and systematic generalization capabilities. ## 1 Introduction Systematic generalization is a crucial aspect of human intelligence [1], allowing us to solve novel tasks by combining knowledge gained from previously seen, related subtasks [2; 3; 4]. Deep learning methods have traditionally struggled to achieve systematic generalization due to their limited ability to generalize beyond the patterns and examples present in the training data, particularly in the presence of complex compositional and structural biases [5; 6; 7; 8]. In response, researchers have recently developed various architectures and training regimes to improve systematic generalization [9; 10; 11; 4; 12]. Yet, studies prove that achieving systematic generalization remains very difficult [13]. Recent studies have highlighted the critical role of diversity in the training data, as it has been shown to impact systematic generalization performance significantly [13; 14]. Madan et al. [14] demonstrated that increasing training data diversity substantially improves generalization to out-of-distribution (OOD) category-orientation combinations. Similarly, Ruis and Lake [13] showed that augmentation of training data and modularity increases systematic generalization performance substantially in a natural language compositional generalization task. However, there is still a lack of study on which specific aspects of data diversity are responsible for enhancing systematic generalization and under what conditions and how they can be applied effectively. This calls for further investigation to unravel the relationships between different types of data diversity and their impact on systemic generalization. Figure 1 illustrates a motivating example where given a budget of \(N\) training questions and a test set in Visual Question Answering (VQA) [15; 16; 17], training on different question complexity distributions results in different systematic generalization performances. 
To study the impact of question complexity on systematic generalization, we consider two factors that influence question complexity: (i) _Attribute composition_, which focuses on specific combinations of attributes and question types during training. For instance, a model is trained on questions that involve only certain combinations, such as Count questions with Color attributes and Exist questions with Shape attributes. However, during testing, the model needs to generalize to novel combinations of questions and attributes, such as Exist questions with Color attributes. (ii) _Length_, which refers to the number of reasoning steps required to answer the question1. In our study of systematic generalization, we require a model to generalize to novel question lengths. We measure question length by the number of spatial relations it includes. Spatial relations refer to relations such as "behind", "left", "right", and "in front of", between entities in an image. The hop notation, such as 0-Hop, 1-Hop, and 2-Hop, is used to categorize questions based on the number of spatial relations they involve. The hop count indicates the number of reasoning steps (analogous to the number of sub-tasks) required to answer a question. Higher hop counts imply more advanced spatial reasoning and compositional understanding. An example of length generalization is illustrated in the bottom part of Figure 1.

Figure 1: Data diversity and its impact on systematic generalization in VQA. **(top)** Impact of question complexity distribution on systematic generalization. In our study, question complexity is varied in two aspects: **(middle)** Attribute composition. In this example, the training set (in-distribution) contains Count and Exist questions with different compositions (left) while Count or Exist questions have novel combinations of attributes in the test set (out-of-distribution). **(bottom)** Length. The in-distribution question has a shorter length (different syntactic structure) compared to the out-of-distribution question.

In this paper, we investigate various factors in designing training data that contribute to improving systematic generalization and explore the conditions under which these factors can be effectively applied. By creating datasets in controlled settings in VQA, we analyze the relationship between different factors in training question complexity (e.g., length and compositions of object attributes) and systematic generalization performance. Our empirical analysis reveals that simple diverse questions, i.e., questions that are simple (require fewer reasoning steps) but cover a diverse range of attributes, are effective for systematic generalization. This can be valuable for practitioners and dataset engineers since collecting simple questions and obtaining their associated answers is more cost-effective than curating a large number of complex questions with many attributes and spatial relations. Moreover, simple questions facilitate the development of models that are less susceptible to overfitting. This is because obtaining a diverse representation of attributes through unbiased sampling is considerably more manageable with simple questions as opposed to complex ones. Our finding is in line with recent research [18] that highlighted the inefficiency of solely relying on the neural scaling law [19; 20; 21; 22; 23] to enhance model performance through data scaling. Instead, it has been shown that designing datasets strategically based on difficulty and dataset size can lead to greater cost efficiency without compromising performance.
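Before describing the benchmark, the hop notation above can be made concrete with a small sketch: question length is estimated by counting spatial-relation mentions. This string-matching approximation is ours for illustration only; in the actual CLEVR-style data the hop count comes from the question's functional program rather than its surface text:

```python
import re

# Spatial relations as listed above.
SPATIAL_RELATIONS = ("in front of", "behind", "left", "right")

def hop_count(question):
    """Approximate reasoning hops by counting spatial-relation mentions."""
    q = question.lower()
    return sum(len(re.findall(rel, q)) for rel in SPATIAL_RELATIONS)

print(hop_count("How many big red objects are there?"))             # 0 (0-Hop)
print(hop_count("Is there a cube behind the sphere?"))              # 1 (1-Hop)
print(hop_count("How many cubes are left of the thing behind "
                "the red sphere?"))                                 # 2 (2-Hop)
```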
## 2 Data Diversity Benchmark for Systematic Generalization in VQA

In this section, we describe the datasets we generate using the CLEVR [24] repository. We define biased training questions, where the biases are formed by limiting the attribute compositions and question lengths in the training sets, while we test on out-of-distribution questions. Our objective is to gain insights into the models' capabilities and limitations in different aspects of systematic generalization. We propose our Data Diversity Design, referred to as D3, for each task, which involves incorporating a diverse set of questions to enhance systematic generalization. By doing so, our methodology helps mitigate the impact of biases in the training data, thereby allowing VQA models to generalize effectively to previously unseen question attribute compositions and lengths. We begin by introducing biases in simple questions about the composition or comparison of a set of objects using two attributes, focusing on compositional generalization. Next, we incorporate more complex questions regarding the spatial relations between multiple sets of objects to incorporate length generalization and explore various aspects of systematic generalization. Finally, we explore the impact of question complexity distribution as an additional aspect of diversity on systematic generalization when full diversity is present.

Figure 2: Specification of different biases for train and test datasets. The gray boxes display the question type, while the valid attributes corresponding to each question type are shown in white boxes below. A valid example question for each dataset is shown below the boxes.

### Attribute Composition Generalization

Here, we provide a detailed description of two biased training sets, the sets used for applying D3 (also called D3 sets), and the systematic generalization test sets for evaluating attribute composition generalization.

**Base training set for compositions of two attributes.** This dataset comprises simple elemental questions involving a composition of two attributes as the biased in-distribution (InD) set for training. The images have been generated such that three to five objects are present in each image, thereby resulting in a simplified dataset. We have used the original 0-Hop template from the CLEVR dataset to generate InD and OOD questions for this dataset. Additionally, we have imposed constraints on the types of questions to specify the biased train and test splits. We have limited the pairs of attributes and question types appearing together in this dataset to only Count and Exist question types, as shown in Figure 2(a). Specifically, Count questions are only associated with Color and Size related attributes, while Exist questions are employed only when Color and Shape attributes are present in the questions. For instance, the question _"How many big red objects are in the image?"_ is a valid InD question for this dataset as it contains two attributes relating to Color and Size for the Count question type. However, the question _"How many rubber objects are there?"_ will never appear during training in this dataset.

**Test set for composition of two attributes.** The systematic generalization test set for this dataset contains questions where the attributes associated with Count and Exist questions are swapped with respect to InD questions. For instance, in the OOD test set, Color and Size attributes are used in Exist type questions. By designing this test set, we can evaluate whether the neural networks have learned to identify and understand the elemental attributes and their relationships to different question types.
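These train/test pairings can be encoded as simple allow-lists that filter generated questions, with the OOD test split obtained by swapping the attribute sets between question types. A minimal sketch of this encoding (the representation and names are our own, not from the released code):

```python
# Allowed (question type -> attributes) pairings for the biased training
# split of the two-attribute dataset; the OOD test split swaps the
# attribute sets between Count and Exist questions.
TRAIN_BIAS = {"Count": {"Color", "Size"}, "Exist": {"Color", "Shape"}}
TEST_BIAS = {"Count": {"Color", "Shape"}, "Exist": {"Color", "Size"}}

def allowed(q_type, attributes, bias):
    """Keep a question only if all of its attributes are valid
    for its question type under the given bias."""
    return set(attributes) <= bias[q_type]

print(allowed("Count", {"Color", "Size"}, TRAIN_BIAS))  # True  (InD)
print(allowed("Exist", {"Color", "Size"}, TRAIN_BIAS))  # False (held out)
print(allowed("Exist", {"Color", "Size"}, TEST_BIAS))   # True  (OOD test)
```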
**Base training set for attribute comparison.** The dataset for attribute comparison includes four question types: equal_size, equal_shape, equal_material, and equal_color. Similar to the composition of two attributes dataset, the images are generated such that three to five objects are present in each image. For each question type, a specific attribute of the same type between two different objects is compared. The different attribute and question type combinations for the training set of this dataset are shown in Figure 2(b). As an example, equal_size type questions always filter the Materials of two objects in an input image and compare whether they have the same Size. In this case, a question like _"Does the rubber object have the same size as the shiny object in the image?"_ is considered a valid question. Note that, for simplicity, there is no combination of attributes of objects in these types of questions.

**Test set for attribute comparison.** The OOD test set for this dataset is constructed in a way that the comparison questions appear with different attributes from the training set, as shown in Figure 2(b). For example, the equal_size questions are about the Color attributes of the objects in the test set.

**Applying D3.** To apply D3, we utilize 0-Hop questions that have no spatial relations, with the constraint that questions from this set contain only a single attribute. Our diversified training sets after D3 contain \(70\%\) of the original biased questions and \(30\%\) of the questions from the respective D3 set. While we have observed promising results with the \(70\%\) and \(30\%\) configuration, we have also explored other proportions, which are detailed in Appendix B. Our goal is to investigate the impact of incorporating such simple questions on systematic generalization. We use questions about Color, Size, and Shape attributes for both Exist and Count questions. As an example, the question _"Is there a shiny object?"_ is a valid question for this set.

### Incorporating Length Generalization

To analyze the interplay between different aspects of diversity for systematic generalization, we introduce a set of datasets with varying compositions of attributes and question lengths. The biased training set is based on the 2-Hop template, which includes questions with two spatial relations. This is a more complex setting where spatial relations between objects and attributes come into play, allowing us to study how the complexity of the questions and the diversity of attributes affect the model's ability to generalize systematically. By exploring different combinations of attributes and question lengths, we can gain insights into which factors contribute most to achieving high levels of systematic generalization in VQA models. The images from the original CoGenT A and CoGenT B splits of CLEVR are used for the training and test sets, respectively. We only consider Count and Exist question types for these sets. Different variations of the sets are illustrated in Figure 2(c). Next, we introduce the biased training set, the OOD test sets, and how we apply D3.
**Base training set for two spatial relations between multiple objects (2-Hop).** The biased training set for this dataset is named 2-Hop A, in which only Color, Size, Left, and Front attributes are allowed for Count type questions, and Material, Shape, Right, and Behind attributes are used for Exist questions.

**Test sets for 2-Hop.** We introduce four different test sets to evaluate attribute composition generalization, length generalization, and their combinations using the 2-Hop dataset:

- _0-Hop:_ The 0-Hop test set is composed of questions regarding a single attribute. It contains all possible combinations of attributes and question types (i.e., Count and Exist questions). We employ this test set to assess the models' ability to generalize to questions with shorter lengths.
- _2-Hop OOD:_ This set contains the same set of combinations for both Exist and Count question types. The attribute and spatial relation combinations are carefully chosen such that the possible combinations have an edit distance of two from the attribute combinations in 2-Hop A. For example, Material, Shape, Left, and Front can appear with both Count and Exist questions. This set is solely used for evaluating attribute composition generalization on 2-Hop questions, which are more complex than the previously introduced attribute composition test sets.
- _3-Hop A:_ This test set comprises questions with three spatial relations and uses only type A attribute compositions. The sole purpose of this test set is to evaluate the network's ability to generalize to longer questions.
- _3-Hop OOD:_ This test set is similar to 2-Hop OOD in terms of attribute and question combinations. However, the questions contain three spatial relations (3-Hop), making it a more challenging set that tests both OOD length and OOD attribute composition generalization.

**Applying D3.** The sets for applying D3 consist of various combinations of attributes and spatial relations, enabling us to systematically analyze the model's ability to generalize to different levels of complexity. As before, we replace \(30\%\) of the base training questions with questions from one of the sets below to create a new diversified training set (a minimal sketch of this mixing step follows the list):

- _1-Hop Full:_ This set includes questions that have only one spatial relation between two objects, hence the name 1-Hop Full. There are no combinations of attributes for a single set of objects in this D3 set. We do not impose any constraint on the combinations of question and attribute types for the 1-Hop Full set. This set is analogous to the diverse simple D3 set of questions that we used for the attribute comparison and composition datasets, adapted to questions with spatial relations.
- _1-Hop A:_ The 1-Hop A set contains the same combinations of questions and attributes as the 2-Hop A training set, but with only one spatial relation. This allows us to study the model's ability to generalize to longer questions and analyze its similarity to the test distribution.
- _1-Hop B:_ This set is similar to 1-Hop A except that the attribute and spatial relations of Count and Exist are swapped from the ones in 1-Hop A. Specifically, in 1-Hop B, Exist questions can only be paired with Color, Size, Left, and Front attributes. This set serves the same purpose as 1-Hop A, allowing us to analyze the model's similarity to the test distribution.
- _3-Hop Full:_ This set comprises 3-Hop questions with no restrictions on the combinations of question types and attributes. The main purpose of this set is to evaluate the model's ability to generalize to questions with shorter lengths than the ones seen during training.
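Across all of these variants, the diversified training set is constructed in the same way. Here is a minimal sketch of the mixing step, assuming questions are held in plain Python lists (the representation and function names are ours):

```python
import random

def apply_d3(base_questions, d3_questions, frac=0.3, seed=0):
    """Replace a fraction of the biased base questions with questions
    drawn from a D3 set, keeping the total training budget fixed.
    Assumes the D3 pool has at least frac * len(base_questions) items."""
    rng = random.Random(seed)
    n_d3 = int(len(base_questions) * frac)
    kept = rng.sample(base_questions, len(base_questions) - n_d3)  # 70%
    added = rng.sample(d3_questions, n_d3)                         # 30%
    mixed = kept + added
    rng.shuffle(mixed)
    return mixed
```

For example, the "2-Hop A + D3 (1-Hop Full)" training set would correspond to `apply_d3(two_hop_a, one_hop_full)`, where both arguments are lists of generated questions.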
### Question Complexity Distribution

In order to explore the impact of question complexity distribution, we introduce datasets with full diversity. These datasets allow us to examine the systematic generalization performance by sampling questions from various question types in different proportions, while maintaining a fixed number of questions.

**Training sets.** We sample different fractions of training questions from the 0-Hop A, 1-Hop Full, and 2-Hop A training sets. 1-Hop Full and 2-Hop A are the same sets as the ones we defined in Section 2.2. The 0-Hop A set is similar to 2-Hop A introduced in Section 2.2 and Figure 2(c) except that it does not contain spatial relations. In other words, the Count questions appear with Color or Size attributes, and Exist questions appear with Material or Shape attributes. Only a single attribute can be used in the questions of the 0-Hop A set. A valid question for this set would be _"Is there a matte object in the image?"_

**Test sets for question complexity distribution.** The test sets for these wide datasets are identical to the test sets we defined for the 2-Hop dataset in Section 2.2.

**Applying D3.** For simplicity, we sample different fractions from each training set using only multiples of \(1/6\) as the fraction of questions sampled from each training set. We generated \(13\) and \(10\) different sets of fractions for \(100\)k and \(600\)k questions, respectively.
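Since each of the three fractions is a multiple of \(1/6\) and they must sum to one, the whole design space is small enough to enumerate; the \(13\) and \(10\) configurations above are subsets of it. A short sketch of the enumeration (our own illustration):

```python
from fractions import Fraction
from itertools import product

# All (0-Hop A, 1-Hop Full, 2-Hop A) fraction triples in multiples of
# 1/6 that sum to 1; the experiments use 13 (100k) and 10 (600k) of them.
combos = [(Fraction(i, 6), Fraction(j, 6), Fraction(6 - i - j, 6))
          for i, j in product(range(7), repeat=2) if i + j <= 6]
print(len(combos))  # 28 possible complexity distributions

def question_counts(fractions, n_total=100_000):
    """Number of questions drawn from each training set."""
    return [int(f * n_total) for f in fractions]

print(question_counts((Fraction(2, 6), Fraction(2, 6), Fraction(2, 6))))
# -> [33333, 33333, 33333], the uniform distribution over complexities
```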
## 3 Results

In this section, we provide our results and analysis of the effect of dataset diversity on systematic generalization. Our implementation is based on the original implementation of [9]2. We used MAC [5], FiLM [25], Vectorized Neural Module Networks [9; 26] (VectorNMN), its counterpart with a given program generator (VectorNMN GT), and the version introduced in [10] with modular image encoders (VectorNMNSepStem). To save time and computational resources, we excluded FiLM and VectorNMNSepStem from our experiments about question complexity distributions due to their inferior performance compared to MAC and VectorNMN. We conducted a hyperparameter search using the attribute comparison and composition datasets and identified a set of effective hyperparameters that yielded consistent results. Further implementation details and information on the hyperparameter search can be found in Appendix A. To generate InD and OOD objects in the images, we rely respectively on the CoGenT condition A and CoGenT condition B splits in CLEVR.

Footnote 2: [https://github.com/rizar/CLOSURE](https://github.com/rizar/CLOSURE)

### Effects of Introducing Diversity in Training Data on Systematic Generalization

The results for the composition of two attributes and attribute comparison datasets (Section 2.1) are shown in Table 1. We also present the full results for the 2-Hop datasets (Section 2.2) in the form of a heatmap displayed in Figure 3. The heatmap shows the change in accuracy after applying the respective D3 compared to training on the base 2-Hop A dataset. The results on the base 2-Hop A dataset are shown in Table 3 in Appendix E. Additionally, we present the results for a transformer model in Table 4 in Appendix E.

\begin{table} \begin{tabular}{l|c c c c c} \hline \hline Training dataset & FiLM & MAC & VectorNMN GT & VectorNMNSepStem GT & VectorNMN \\ \hline Two attributes & \(33.57\pm 0.03\) & \(46.56\pm 0.02\) & \(45.39\pm 0.01\) & \(44.10\pm 0.01\) & \(42.80\pm 0.01\) \\ + D3 (0-Hop) & \(64.21\pm 0.03\) & \(71.09\pm 0.04\) & \(66.72\pm 0.01\) & \(64.88\pm 0.01\) & \(60.15\pm 0.01\) \\ \hline Comparisons & \(49.28\pm 0.01\) & \(53.31\pm 0.01\) & \(66.27\pm 0.04\) & \(67.97\pm 0.04\) & \(58.43\pm 0.04\) \\ + D3 (0-Hop) & \(56.29\pm 0.03\) & \(56.58\pm 0.03\) & \(83.29\pm 0.02\) & \(76.16\pm 0.04\) & \(73.32\pm 0.02\) \\ \hline \hline \end{tabular} \end{table} Table 1: _Accuracy and standard deviations comparing training on the biased datasets and training after D3 with 0-Hop. \(30\%\) of the training questions from biased datasets are replaced with 0-Hop questions for D3._

As expected, our observations indicate that diversity plays a crucial role in enhancing systematic generalization across various scenarios, and D3 with maximum diversity leads to better systematic generalization. However, we note that when the diversity aligns more closely with the specific aspect being tested, we observe improved systematic generalization performance. The key findings are detailed in the following:

**D3 with diverse simple questions improves both compositional and length generalization.** The results for compositions of two attributes and attribute comparisons presented in Table 1 show that using diverse simple questions of 0-Hop type with only a single attribute as D3 can significantly improve attribute composition generalization for both of our simple datasets. To specifically examine the impact of diverse simple questions within the D3 framework, we analyze the results of the 2-Hop OOD test set, displayed in the second column of Figure 3, for the respective architectures. Notably, we observe noteworthy improvements in systematic generalization performance when considering both the 1-Hop Full and 1-Hop B sets, which demonstrate diversity in the attribute composition aspect. We employ the 3-Hop A test set to isolate the impact of data diversity on length generalization. This test set maintains the same attribute composition as the biased training set 2-Hop A, but with longer questions. Remarkably, we found that introducing diversity through shorter questions with the same attribute compositions, as seen in 1-Hop A, improves OOD generalization by incorporating more variations in length. Additionally, we observe that including 1-Hop B slightly improves the results. It shows that although it introduces more diversity in attribute compositions along with adding diversity in length, the networks may learn to associate questions with type B attribute compositions with shorter questions, making it more challenging to generalize to longer questions. On the other hand, since 1-Hop Full contains all forms of compositions and shorter question lengths altogether, the networks learn a better strategy to generalize to longer questions.

To examine the influence of length and attribute composition together, we utilize the 3-Hop OOD and 0-Hop test sets. In the case of 3-Hop OOD, D3 with 1-Hop A results in a slight decrease in accuracy across all architectures, except for MAC, which demonstrates an improvement of \(5.7\%\). Conversely, D3 with 1-Hop B leads to a significant increase in accuracy. This highlights the importance of diversity in both length and attribute composition, as 1-Hop B exhibits such diversity while 1-Hop A only diversifies in terms of length.
Furthermore, in line with previous observations, 1-Hop Full outperforms the other D3 sets on the 3-Hop OOD test set, indicating superior generalization capabilities when full compositional diversity is available. We observe that all forms of D3 lead to improvements on the 0-Hop dataset (with the exclusion of VectorNMN when D3 uses 1-Hop A). For instance, incorporating 1-Hop A D3, which diversifies only the length aspect of the questions compared to the base training of 2-Hop A, results in noticeable improvements. On the other hand, D3 with 1-Hop B, which is more diverse in terms of both length and attribute composition, outperforms D3 with 1-Hop A. The best performance is achieved by 1-Hop Full, as it is closer in length to the test set and contains full attribute composition diversity. Despite 3-Hop Full having a longer question length than the base training and thus a greater distance to the 0-Hop test set in terms of length, it still exhibits full attribute diversity and yields significant improvements over the baseline. In conclusion, the diversity of both length and attribute compositions proves beneficial for the 0-Hop test set.

**D3 with maximum attribute composition diversity leads to better results for attribute composition generalization.** Comparing the performance of 1-Hop B and 1-Hop Full, we observe a notable difference when it comes to having the same attribute compositions as the base 2-Hop A training set. Specifically, 1-Hop Full, which shares the attribute compositions with 2-Hop A, demonstrates superior performance. The reason is that 1-Hop B introduces a swap in attribute compositions for Count and Exist questions compared to 2-Hop A. While this diversification contributes to increased variability in the dataset, it causes the model to struggle to generalize effectively to new attribute compositions, leading to a slightly lower performance compared to 1-Hop Full. Conversely, when focusing on D3 with 1-Hop A, which solely introduces diversity in terms of question length while keeping attribute compositions constant, we observe minimal changes in performance.

**D3 enhances systematic generalization without getting closer to the target distribution.** Since diversity provides better generalization to OOD questions, one may ask whether, by applying D3, the training set is getting closer to the target OOD distribution. To answer this question, we provide the following observations:

- _Shorter questions facilitate length generalization on longer questions, and vice versa:_ We observe that any form of D3 utilizing 1-Hop variants of questions leads to improved results on 3-Hop A, which contains longer questions compared to the ones in the D3 set. Similarly, employing 3-Hop Full D3, which consists of longer questions than the base 2-Hop A training set, enhances accuracy on the 0-Hop test set, which comprises shorter questions.
- _D3 with questions that exhibit more diversity tends to yield better results while maintaining the same distance in terms of attribute composition to the OOD test set:_ This can be observed when comparing D3 with 1-Hop A and 1-Hop B. Both D3 sets have the same attribute compositions, but they differ in their question types (only Exist and Count questions have been swapped). Consequently, their distance to the 2-Hop OOD test set remains constant, since the 2-Hop OOD test set encompasses identical attribute compositions for both Exist and Count questions. However, the resulting set after D3 with 1-Hop B exhibits more diversity and gains better accuracy.
Figure 3: Change in accuracy after replacing \(30\%\) of base training questions (2-Hop A) with different types of questions as D3 to have more diversity. In most cases diversity helps systematic generalization significantly.

These observations highlight an important aspect of D3 training: it does not necessarily bring the training data distribution closer to the target test distribution, yet it can still improve systematic generalization. We also note that it is crucial to consider the alignment between the diversity introduced in the training data and the characteristics of the target test set to obtain optimal performance. For instance, when evaluating the performance on the 0-Hop test set, we observed that the 3-Hop Full D3 set, which consists of longer questions compared to the base 2-Hop A training set, results in worse accuracy compared to the 1-Hop Full D3 set. This is because the longer questions in the 3-Hop Full set created a greater distance between the training and test distributions in terms of question length, affecting the generalization performance. In contrast, the 1-Hop Full D3 set, which encompassed a question length distribution more similar to the target test set, exhibited better accuracy on the 0-Hop test set.

### Effect of Question Complexity Distribution on Systematic Generalization

We have shown that using as much diversity as possible is beneficial for systematic generalization. However, the impact of question complexity distribution, specifically the question sampling strategy, remains unknown. The conventional method of uniformly sampling questions from different levels of complexity may be suboptimal for systematic generalization. In the following set of experiments, our goal is to identify the impact of question complexity distribution, the number of training questions, and the behaviour of different neural network architectures in low and large data regimes on systematic generalization. The results obtained from the experiments conducted with different fractions of diversity are presented in Figure 4 for \(100\)k questions. To provide a summary of the findings, we calculate the average result across the four test sets for each model and display it in the AVG column.

**VectorNMN has superior performance and is more robust to distribution changes in low data regimes.** We observe that when a limited number of training questions is available, MAC performs poorly except for dataset number \(3\) and shows high sensitivity to distribution changes. VectorNMN, on the other hand, has more consistent results and achieves an overall better performance. The results can be further improved when the program generator (VectorNMN GT) is provided.

**MAC gains strong systematic generalization performance and robustness to distribution changes when sufficient data is available.** To assess the impact of question complexity distribution in large data regimes, we identified the best set of fractions (dataset 3 in Figure 4) and progressively increased the number of questions until reaching saturation accuracy, as depicted in Figure 5.

Figure 4: Training complexity distribution effect. The number of questions is kept fixed and is equal to \(100\)k. Each row corresponds to a different training set where different fractions of questions from 0-Hop A, 1-Hop Full, and 2-Hop A are selected. For each model, the accuracies on the respective test set are shown in each column. The average across the four test sets is also shown in the AVG column.
We determined that employing \(600\)k questions would suffice for this dataset to observe the behavior of networks when using a large amount of data. Compared to the low-data case, the patterns alter when expanding the number of questions to \(600\)k, as depicted in Figure 6. MAC becomes more robust to distribution changes as the amount of data increases and outperforms VectorNMN.

**Program generator plays an important role in systematic generalization.** The gap between VectorNMN and VectorNMN GT also increases as the number of training questions grows, indicating that the performance bottleneck of modular networks lies in learning the program generator, specifically in large data scenarios.

**Using abundant simple questions can lead to strong performance.** We also note that the inclusion of a distribution with more simple questions proves to be valuable in both data regimes, as it results in improved systematic generalization performance. Dataset \(1\) in Figure 4 and dataset \(7\) in Figure 6 show that it is not necessary to sample a large number of complex questions to achieve high systematic generalization performance.

**Uniform sampling from different complexities can be sub-optimal.** MAC's performance on the uniform dataset (\(7\)th in Figure 4 and \(6\)th in Figure 6) is relatively lower compared to the best results achieved. In contrast, modular networks show similar performance to the other full distributions when using the uniform distribution.

## 4 Discussion

**Summary.** We introduced Data Diversity Design (D3) for designing datasets with improved systematic generalization performance in VQA. We showed that diverse simple tasks significantly enhanced systematic generalization in VQA. We demonstrated that the inclusion of diverse questions, particularly those aligned with the particular systematic generalization aspects, led to better overall performance. Additionally, we investigated the impact of different sampling strategies from different question complexities and noted that distributions with enough simple and diverse questions can attain better systematic generalization results than the common uniform sampling practice. Our results showed that modular networks such as VectorNMN are more data efficient and more robust to question complexity distribution variations. On the other hand, the monolithic networks attain their best performance and robustness to distribution changes when sufficient data is available.

Figure 5: Average accuracy of the four test sets by varying the number of questions using fractions from dataset 3 in Figure 4.

Figure 6: Training complexity distribution effect for \(600\)k questions. Refer to Figure 4 for more details.

**Limitations.** While large language models [27; 28; 29] have shown promise in addressing systematic generalization [30; 31; 32], their training data biases are not well understood. This motivated us to focus on datasets created in controlled settings to isolate and investigate specific factors related to data diversity and systematic generalization. However, our studies may not fully capture the complexity and variability of real-world data. To show the effect of D3 on length generalization in a dataset with realistic images, we have incorporated additional results on the GQA dataset [33] in Appendix F. These results support our paper's conclusions, although broader investigations are encouraged.
While we have observed that diversity contributes to improved systematic generalization, further investigation is needed to fully understand the underlying reasons behind this effect. Our hypothesis is that diversity promotes modularity in neural networks, which in turn enhances systematic generalization. To provide additional insights into the modularity of networks, we present an experiment in Appendix C. Lastly, the evaluation is primarily based on the VQA task, and it would be valuable to investigate the transferability of the observed effects to other domains and tasks. ## Acknowledgements We would like to thank members of the Sinha Lab for Developmental Research and Fujitsu Research for helpful comments and feedback during the development of this project. In particular, Ian Mason, Anirban Sarkar, Hojin Jang, Avi Cooper, and Ece Ozkan Elsen. This work was supported by Fujitsu Limited (Contract No. 40009568).
2309.10869
SOS TUTORIA UC: A Diversity-Aware Application for Tutor Recommendation Based on Competence and Personality
SOS TUTORIA UC is a student connection application aimed at facilitating academic assistance between students through external tutoring outside of the application. To achieve this, a responsive web application was designed and implemented, integrated with the WeNet platform, which provides various services for user management and user recommendation algorithms. This study presents the development and validation of the experience in the application by evaluating the importance of incorporating the dimension of personality traits, according to the Big Five model, in the process of recommending students for academic tutoring. The goal is to provide support for students to find others with greater knowledge and with a personality that is 'different', 'similar' or 'indifferent' to their own preferences for receiving academic assistance on a specific topic. The integration with the WeNet platform was successful in terms of components, and the results of the recommendation system testing were positive but have room for improvement.
Laura Achon, Ana De Souza, Alethia Hume, Ronald Chenu-Abente, Amalia De Gotzen, Luca Cernuzzi
2023-09-19T18:27:12Z
http://arxiv.org/abs/2309.10869v1
'SOS TUTORIA UC': A Diversity-Aware Application for Tutor Recommendation Based on Competence and Personality ###### Abstract 'SOS TUTORIA UC' is a student connection application aimed at facilitating academic assistance between students through external tutoring outside of the application. To achieve this, a responsive web application was designed and implemented, integrated with the _WeNet_ platform, which provides various services for user management and user recommendation algorithms. This study presents the development and validation of the experience in the application by evaluating the importance of incorporating the dimension of personality traits, according to the _Big Five_ model, in the process of recommending students for academic tutoring. The goal is to provide support for students to find others with greater knowledge and with a personality that is "different", "similar" or "indifferent" to their own preferences for receiving academic assistance on a specific topic. The integration with the _WeNet_ platform was successful in terms of components, and the results of the recommendation system testing were positive but have room for improvement. HHAI-WS 2023: Workshops at the Second International Conference on Hybrid Human-Artificial Intelligence (HHAI), June 26-27, 2023, Munich, Germany. [email protected] (L. Achon); [email protected] (A. De Souza); [email protected] (A. Hume); [email protected] (R. Chenu-Abente); [email protected] (A. De Gotzen); [email protected] (L. Cernuzzi) ## 1 Introduction This paper presents 'SOS TUTORIA UC', an application developed by Universidad Católica Nuestra Señora de la Asunción (UC) in Paraguay and integrated into the _WeNet_ platform, which aims to facilitate connections between students for tutoring based on academic competencies and personality, utilizing the _Big Five_ model [1] to measure personality traits. Additionally, the application uses hybrid human-artificial intelligence, leveraging information about the personality traits and the level of academic competence of potential tutors to make recommendations. This allows users to choose tutors with "different" or "similar" personalities to their own, and they can also choose to disregard this parameter if they feel "indifferent" towards it. The study explores how these personality traits can influence tutoring effectiveness, in addition to academic knowledge in the specific subject to be taught.
This is in line with various previous studies that have examined the impact of peer-to-peer tutor recommendation systems [2, 3, 4, 5] and personality-based tutor recommendation systems [6, 7] on the educational process. Furthermore, the current research builds upon previous studies [8, 9] that have shown a correlation between personality traits and interaction levels in _WeNet_ project pilots, highlighting the importance of diversity in social interactions. ## 2 'SOS TUTORIA UC' The 'SOS TUTORIA UC' web application was designed and implemented, providing an interface for students to request tutoring sessions and find suitable tutors based on their academic expertise and personality traits. To incorporate personality into the matching process, we adopted the _Big Five_ model [1], which uses five traits, namely Extraversion, Agreeableness, Conscientiousness, Emotional Stability/Neuroticism, and Openness to Experience, to provide a comprehensive framework for describing and understanding individual differences in personality. The 'SOS TUTORIA UC' application was integrated with the _WeNet_ platform [10], ensuring good user management and leveraging the platform's recommendation algorithms. Figure 1 showcases the integration process, which involved analyzing the components and APIs of the _WeNet_ platform. The goal was to create and adapt the logic for tutoring requests to align with the existing _WeNet_ model.

Figure 1: Communication between the modules of 'SOS TUTORIA UC' and the _WeNet_ platform

The Service REST API manages model entities and communicates with modules such as the Profile Manager (user management) and the Task Manager (handling task and message creation as transactions). In 'SOS TUTORIA UC', the creation of a task corresponds to a new tutoring request. The _WeNet_ platform provides five recommended users as potential tutors, and they receive a notification to review the request details. Based on the information, they can accept or reject the request for tutorship. The responses to these requests are recorded as task response transactions, which can be approval or rejection responses determined by an attribute value. Once a request is approved by at least one student, the requester can select the desired tutor through a transaction type indicating the best response. This informs the platform which response sent by the tutors has been ultimately chosen by the requesting student. As part of the integration, the application logic had to be adapted on the rules side as well. The relevant dimensions that contribute to diversity in the algorithm are academic competence, personality, and physical proximity. Finding individuals based on physical proximity is determined by their distance from the person who posted the question: a distance greater than 500 meters is considered far and out of reach, while a shorter distance is classified as close. Competence is also considered a strict constraint: the system only selects individuals who are "better than me" in terms of academic competence, if they are available. Finding individuals with "similar" or "different" personality profiles is based on whether they share "similar" personality traits with the person requesting tutoring or not. Competence and physical proximity requirements are considered essential and are given more importance than other requirements (such as personality). However, it is acceptable to "diversify" the list of potential tutors by selecting someone who is not physically close in order to diversify based on gender.
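To make the selection logic concrete, the sketch below reproduces it as we read it from the description above. It is not the WeNet production code: the class layout, the 0-to-1 scales, and the ranking of candidates by mean trait distance are our assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Student:
    name: str
    competence: float            # score in the requested subject, assumed 0..1
    distance_m: float            # distance from the requester, in meters
    personality: List[float]     # Big Five trait scores, assumed five values in 0..1

def trait_distance(a: Student, b: Student) -> float:
    """Mean absolute difference over the Big Five traits."""
    return sum(abs(x - y) for x, y in zip(a.personality, b.personality)) / 5

def recommend(requester: Student, candidates: List[Student],
              preference: str, k: int = 5) -> List[Student]:
    # Hard constraints: "better than me" in competence and within 500 meters.
    pool = [c for c in candidates
            if c.competence > requester.competence and c.distance_m <= 500]
    # Soft constraint: order the survivors by the personality preference.
    if preference == "similar":
        pool.sort(key=lambda c: trait_distance(requester, c))
    elif preference == "different":
        pool.sort(key=lambda c: -trait_distance(requester, c))
    else:  # "indifferent": rank by competence alone
        pool.sort(key=lambda c: -c.competence)
    return pool[:k]   # WeNet returns five recommended users per request
```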
## 3 Analysis of results To evaluate the system, a pilot study was conducted at the UC. Participants were invited to use the 'SOS TUTORIA UC' application and provide feedback on their experience. Since this work corresponds to the integration of a new type of technology (i.e., a responsive web application) into the _WeNet_ platform, differentiating it from other consortium pilots, a component-level integration analysis was conducted. It involved evaluating the communication through the interfaces of the _WeNet_ platform, that is, the Service REST APIs for transaction and user management, as well as the integration with the authentication and authorization module. The tests were performed with a test user in the production environment against the three main modules (Auth Controller, User Controller and Task Controller). All calls to the _WeNet_ APIs during the tests returned a satisfactory response and produced the expected results. Therefore, the component-level integration was successful. Additionally, we also discuss a preliminary experience with the recommendation process. For this purpose, a testing scheme was established to analyze whether the user profiles suggested by _WeNet_ to fulfill a tutoring request adequately match the parameters indicated by the requester. In the tests performed for requests with a personality "different", "similar" or "indifferent" to the applicant's, the algorithm prioritized students with high scores in the requested competency and the specified personality preference. However, it is worth noting that for some of the tests, the algorithm should have recommended other tutors that better met the applicant's requirements. Indeed, although the recommendation algorithm yielded positive results, further improvements are needed to better align with user preferences and the specified selection filters. This could be attributed to low student participation during the pilot experience or to the platform's gender diversification approach. Forty-three students from different campuses and careers at the UC participated in the pilot experience. After completing the pilot of 'SOS TUTORIA UC', three primary instruments were used to analyze the experience and evaluate the relevance of incorporating personality as a search parameter for potential tutors, thereby introducing an additional element of diversity: i) questionnaires for tutors and requesters after their connection; ii) an exit questionnaire for the pilot; and iii) a focus group. The pilot study revealed valuable insights from the participants. Most of the suggestions arose from the focus group, in which 7 students participated in a hybrid meeting (3 in person and 4 virtually). They expressed the importance of being able to choose tutors based on personality traits, as this can significantly impact the dynamics and effectiveness of the tutoring relationship. Participants suggested the inclusion of additional information, such as the "level of compatibility" or "level of personality traits", to aid in the selection process and provide a more informed decision-making framework. One notable challenge observed during the pilot study was the low usage of the 'SOS TUTORIA UC' application. This was attributed to the limited availability of participants during the exam period, which resulted in a reduced number of tutors available to provide support. Consequently, there was a decreased incentive for students to request tutoring sessions through the application.
To address this issue, participants recommended extending the usage of the application to the entire academic semester, allowing for more tutoring opportunities and increasing overall engagement. Based on the results obtained and the feedback from participants during the focus group, several priority areas for future work were identified. Firstly, there is a need to fine-tune the recommendation algorithms to better align with user preferences and selection filters. This could involve incorporating additional parameters related to personality traits, compatibility levels or other diversification elements. Secondly, developing a mobile application with push notifications for events could enhance the user experience and interaction, making it more convenient for students to engage with the system. Lastly, conducting a longer-term study covering an entire academic semester would provide more data and insights into the students' experience with the personality filtering feature. ## 4 Discussion and conclusions This paper presented the incorporation of personality traits based on the _Big Five_ model into a peer tutoring system, 'SOS TUTORIA UC', aimed at providing academic support. The integration with the _WeNet_ platform successfully enabled the matching of students with tutors based on "compatible", "complementary" or "indifferent" personalities. The pilot study yielded positive results in terms of system integration, and the recommendation algorithm showed favorable outcomes by prioritizing the academic level in making recommendations. However, there is still potential for improvement concerning the consideration of potential tutors' personality traits. The feedback from participants highlighted the importance of considering personality traits in the tutor-tutee matching process. They expressed the desire for additional information and parameters to facilitate the selection of tutors, such as compatibility levels and more detailed personality trait profiles. The limited usage during the exam period underscored the need to extend the application's availability throughout the academic semester. ## Acknowledgments This research received funding from the proactive FET Horizon 2020 project of the European Union, _WeNet: Internet of Us_, Grant Agreement No. 823783. We would also like to express our gratitude to the various institutions associated with the _WeNet: Internet of Us_ project for their valuable cooperation.
2301.00280
RECOMED: A Comprehensive Pharmaceutical Recommendation System
A comprehensive pharmaceutical recommendation system was designed based on the patients' and drugs' features extracted from Drugs.com and Druglib.com. First, data from these databases were combined, and a dataset of patient and drug information was built. Secondly, the patients and drugs were clustered, and then the recommendation was performed using different ratings provided by patients, and importantly by the knowledge obtained from patients and drug specifications, and considering drug interactions. To the best of our knowledge, we are the first group to consider patients' conditions and history in the proposed approach for selecting a specific medicine appropriate for that particular user. Our approach applies artificial intelligence (AI) models for the implementation. Sentiment analysis using natural language processing approaches is employed in pre-processing along with neural network-based methods and recommender system algorithms for modeling the system. In our work, patients' conditions and drugs' features are used for making two models based on matrix factorization. Then we used drug interactions to filter out drugs with severe or mild interactions with other drugs. We developed a deep learning model for recommending drugs by using data from 2304 patients as a training set, and then we used data from 660 patients as our validation set. After that, we used knowledge from critical information about drugs and combined the outcome of the model into a knowledge-based system with the rules obtained from constraints on taking medicine.
Mariam Zomorodi, Ismail Ghodsollahee, Jennifer H. Martin, Nicholas J. Talley, Vahid Salari, Pawel Plawiak, Kazem Rahimi, U. Rajendra Acharya
2022-12-31T20:04:31Z
http://arxiv.org/abs/2301.00280v2
# RECOMED: A Comprehensive Pharmaceutical Recommendation System ###### Abstract **Objectives:** To extract datasets containing useful information from two drug databases and recommend a list of drugs to physicians and patients with high accuracy while considering the most important features of patients and drugs. The history and reviews of the target patient and similar patients, and drug information, are used as a reference to recommend drugs. **Methods:** A comprehensive pharmaceutical recommendation system was designed based on the patients' and drugs' features extracted from Drugs.com and Druglib.com. First, data from these databases were combined, and a dataset of patient and drug information was built. Secondly, the patients and drugs were clustered, and then the recommendation was performed using different ratings provided by patients, and importantly by the knowledge obtained from patients and drug specifications, and considering drug interactions. To the best of our knowledge, we are the first group to consider patients' conditions and history in the proposed approach for selecting a specific medicine appropriate for that particular user. Our approach applies artificial intelligence (AI) models for the implementation. Sentiment analysis using natural language processing approaches is employed in pre-processing along with neural network-based methods and recommender system algorithms for modeling the system. In our work, patients' conditions and drugs' features are used for making two models based on matrix factorization. Then we used drug interactions to filter out drugs with severe or mild interactions with other drugs. We developed a deep learning model for recommending drugs by using data from 2304 patients as a training set, and then we used data from 660 patients as our validation set. After that, we used knowledge from critical information about drugs and combined the outcome of the model into a knowledge-based system with the rules obtained from constraints on taking medicine. **Results:** The results show that our recommendation system is able to recommend the best combination of medicines according to the existing real-life prescriptions available. It also has the best accuracy and other metrics in recommending a specific drug compared to other existing approaches, which are generally based only on patient ratings or comments. Our proposed model improves the accuracy, sensitivity, and hit rate by 26%, 34%, and 40%, respectively, compared with conventional matrix factorization. In addition, it improves the accuracy, sensitivity, and hit rate by an average of 31%, 29%, and 28% compared to other machine learning methods. We have also open-sourced our implementation in Python. **Conclusions:** Our proposed RECOMED system extracts all vital information from the drug and patient databases and considers all necessary factors for recommending accurate medicine, which can be trusted more by doctors and patients. We have shown the efficacy of our proposed model in real test cases. **Keywords:** recommendation system; drug recommendation system; drug information extraction; hybrid recommendation method ## 1 Introduction Recommendation systems (RS) are knowledge extraction systems that use information retrieval approaches to help people make better decisions and discover items within a complex information space [1, 2].
They have been around for many years, and with the advancement of machine learning approaches, their use has widened, helping people make more appropriate decisions about using different products. The popularity of RS in different fields has increased since the announcement of the Netflix Prize competition, which aimed to predict movie ratings [3]. The application of recommender systems is very extensive: from entertainment to e-commerce, the tourism industry, and medical recommender systems. Also, with rapid progress in artificial intelligence, the application and development of recommender systems have accelerated. Medical recommender systems are a particular type of recommender system with some distinct features that make them special: they have to be used very carefully, and because they affect people's health, there are many concerns about using RS for them. On the other hand, many people die every year because of medication errors; this has been reported as the third leading cause of death in the world [4]. This makes the use of intelligent systems in medical science valuable and necessary. Drug prescription is also vital for physicians, and it involves considering different aspects. The patient's history of using drugs, the specification of drugs for the diseases related to the recommendation in question, and the drug's effectiveness for that specific case are among such concerns. With as many as 24,000 different medicines [5] in just one database, a recommendation system that produces a set of suggestions for a particular patient with a specific disease can help physicians prescribe the most appropriate medicines and also help patients make better choices when using drugs. Recommender systems can be distinguished by the degree of risk imposed when a user accepts a recommendation [6]. In this regard, the medical domain can be seen as high risk, mostly due to the consequences of the recommendation given to the user. On the other hand, while having a comprehensive drug recommender system is important, designing a complete system requires a dataset of drugs with patient ratings, reviews, and also information about drugs. We gathered this information from two different and well-known databases, Drugs.com [5] and Druglib.com [7], and we built three datasets that train the system and construct the final model. Finally, putting all of them together, we propose a novel drug recommender system called RECOMED that learns patient and drug features, the drugs previously taken, and user reviews of different drugs to recommend a new drug to a patient. The novelty of this work lies in the following parts: 1. Proposing a pharmaceutical recommender system that considers the features of patients and drugs, including patients' conditions, age, gender, drug side effects, and drug categories. 2. Performing pre-processing steps on the Druglib.com and Drugs.com databases to gather the appropriate data for our recommendation system, leading to comprehensive datasets of drug information. 3. Our system considers sentiment analysis of reviews, physicians' prescriptions, and different similarity measures for recommending a medicine and its dose, along with other recommendations, including side effects and warnings for their usage. 4. Our system consists of a knowledge-based component to exclude drugs with serious side effects for a specific patient. 5. We propose a model to predict the efficiency of a medicine for patients.
In the next section, we provide some background on general recommender systems; in Section 3, we introduce drug recommender systems and their challenges in particular. Then, in Section 4, we review the current state of the art in recommender system methods, specifically drug recommender systems. Section 5 explains our proposed comprehensive drug recommender system in detail. Section 6 provides the results of this work, together with a discussion. Finally, in Section 7, we conclude the paper and present future directions for this research. ## 2 Recommendation Systems Background Recommender systems are decision-making systems that extract information from different kinds of knowledge. For many years, recommender systems have been used in various domains for different purposes [8]. In this section, we review the basic concepts of the various types of recommender systems and the way they are categorized. According to the type of data that recommender systems use to make decisions, the algorithms utilized in recommender systems fall into two major categories: _collaborative filtering_ (CF) and _content-based filtering_ (CB). CF approaches are further divided into user-based and item-based approaches. CF and CB approaches have shown acceptable results when used to recommend different kinds of products like movies, books, and music. A recommendation system can also combine these two major techniques, which is usually called a hybrid recommendation. In addition, there are some specific types of recommender systems that have their strengths in various domains. One of these types is the knowledge-based recommender system. Here, we briefly introduce the major recommender system techniques considered in this work. **2-1 Collaborative recommender systems** The idea behind this group of recommendation methods is to use a measure of similarity between users or items to recommend something to a given user. It states that if two users shared some interest in the past, they will likely have similar interests in the future. A collaborative approach is based on the ratings users give to items; in its basic form, it doesn't need any other information about users and items. CF approaches can be divided into two basic types, _neighborhood_ methods (also known as _memory-based_) and _latent factor_ models. Neighborhood methods follow one of the following two basic approaches: **2-1-1 User-based neighborhood recommender system** This approach aims at suggesting products based on the similarity between users. In this regard, several similarity measures can be used. Denoting \(\mathcal{U}\) as the set of users, \(\mathcal{I}\) as the set of items, and \(R\) as the set of existing ratings, the Pearson correlation (PC) is one of the most popular measures, computed as equation (1) for users \(u\) and \(v\)[9]: \[Pearson\_Correlation(u,v)=\frac{\sum_{i\in\mathcal{I}_{uv}}(r_{ui}-\bar{r}_{u})(r_{vi}-\bar{r}_{v})}{\sqrt{\sum_{i\in\mathcal{I}_{uv}}(r_{ui}-\bar{r}_{u})^{2}}\sqrt{\sum_{i\in\mathcal{I}_{uv}}(r_{vi}-\bar{r}_{v})^{2}}} \tag{1}\] In this equation, \(\mathcal{I}_{u}\) is the set of items rated by user \(u\) and \(\mathcal{I}_{uv}\) is the set of items rated by both \(u\) and \(v\).
Also, \(r_{ui}\) is the rating of user \(u\) for item \(i\), and \(\bar{r}_{u}\) is the average of the ratings given by \(u\). The prediction of the rating of user \(u\) for item \(i\) is calculated as equation (2): \[pred(u,i)=\frac{\sum_{j\in users}sim(u,j)\,(r_{j,i}-\bar{r}_{j})}{\sum_{j\in users}sim(u,j)}+\bar{r}_{u} \tag{2}\] where \(sim\) is the measure of similarity between users \(u\) and \(j\), and \(users\) is the set of users most similar to user \(u\). Therefore, the ratings are weighted by the similarity measure in this prediction. #### 2-1-2 Item-based neighborhood recommender system In contrast to user-based recommender systems, item-based recommender systems use item similarity to suggest a product to a specific user. Analogously to the user-based approach, the prediction for user \(u\) on item \(i\) is calculated as equation (3): \[pred(u,i)=\frac{\sum_{j\in N}sim(i,j)\,(r_{u,j}-\bar{r}_{j})}{\sum_{j\in N}sim(i,j)}+\bar{r}_{i} \tag{3}\] where \(sim\) is the measure of similarity between items \(i\) and \(j\), and \(N\) is the set of items similar to item \(i\) rated by \(u\). #### 2-1-3 Matrix factorization One of the biggest challenges for standard CF methods is the sparsity of the rating matrix (or user-item matrix); model-based CF can help overcome this challenge. There are many ways to build models on which recommendations can be based. Matrix factorization is one of the popular methods, with the idea of decomposing a matrix into the product of two or maybe three matrices. Having a dataset of the ratings of various users for different items, this model transforms the rating matrix into individual user and item matrices. The model is defined as follows: assume there is a set of users \(U\) and items \(D\), with rating matrix \(R\ (M\times N)\), which holds the ratings given by users on items. \(M\) and \(N\) are the total numbers of users and items, respectively. Matrix factorization in recommender systems aims to find \(k\) latent features/factors by decomposing \(R\) according to equation (4) into a user matrix \(U\) and an item matrix \(I\): \[R\approx U\times I^{T}=\hat{R} \tag{4}\] where \(U\) is an \(M\times k\) embedding matrix and \(I\) is an \(N\times k\) embedding matrix. **2-2 Content-based recommender systems** Content-based recommendation uses the attributes of the users (the user profile) and the attributes of items to recommend an item to a user [10]. Providing this information requires extra work and effort to represent items properly and build a user profile appropriate for the recommendation process [10]. This kind of recommender system learns the user's preferences and tries to recommend items similar to them. Having \(D\) items rated by user \(U\), a content-based RS aims to find the rating for an item \(i\) not yet seen by user \(U\). In this method, items' features are extracted and then used to find similarities between items. Then, in a simple nearest-neighbor approach, the top-\(n\) nearest neighbors of item \(i\) in \(D\) are selected. This selection is based on a similarity measure like the cosine similarity, calculated as equation (5): \[\cos(\theta)=\frac{\mathbf{X}\cdot\mathbf{Y}}{\|\mathbf{X}\|\times\|\mathbf{Y}\|}=\frac{\sum_{i=1}^{n}X_{i}Y_{i}}{\sqrt{\sum_{i=1}^{n}X_{i}^{2}}\sqrt{\sum_{i=1}^{n}Y_{i}^{2}}} \tag{5}\] The ratings of these \(n\) items are used to predict the rating of item \(i\) by user \(U\).
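As a concrete illustration of equations (1), (2), and (5), the following minimal NumPy sketch implements the neighborhood prediction on a toy rating matrix. It is our own example, not the paper's released code, and, as is common practice, it uses \(|sim|\) in the denominator of equation (2) to keep the weights well-behaved.

```python
import numpy as np

def pearson_sim(r, u, v):
    """Pearson correlation of equation (1); r is a (users x items) array
    with np.nan marking missing ratings."""
    both = ~np.isnan(r[u]) & ~np.isnan(r[v])
    if both.sum() < 2:
        return 0.0
    ru = r[u, both] - np.nanmean(r[u])
    rv = r[v, both] - np.nanmean(r[v])
    denom = np.sqrt((ru ** 2).sum() * (rv ** 2).sum())
    return float(ru @ rv / denom) if denom > 0 else 0.0

def cosine_sim(x, y):
    """Cosine similarity of equation (5) between two feature vectors."""
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    return float(x @ y / denom) if denom > 0 else 0.0

def predict(r, u, i):
    """User-based neighborhood prediction of equation (2)."""
    num = den = 0.0
    for v in range(r.shape[0]):
        if v == u or np.isnan(r[v, i]):
            continue
        s = pearson_sim(r, u, v)
        num += s * (r[v, i] - np.nanmean(r[v]))
        den += abs(s)
    return np.nanmean(r[u]) + (num / den if den > 0 else 0.0)

# toy rating matrix: 3 users x 4 drugs, nan = not rated
R = np.array([[5.0, 3.0, np.nan, 1.0],
              [4.0, np.nan, 2.0, 1.0],
              [1.0, 1.0, np.nan, 5.0]])
print(predict(R, u=0, i=2))  # approximately 2.67
```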
In most content-based recommender systems, item features are textual descriptions and don't have well-defined values, so natural language processing approaches like TF-IDF or bag-of-words are used to assign numerical values to the textual features. **2-3 Hybrid recommender systems** Hybrid approaches combine different recommender system algorithms to make a more accurate system that joins the benefits of different approaches for recommending an item to the users. A combination of content-based and collaborative filtering is the most common type of hybridization [11]. **2-4 Knowledge-based recommender systems** This type of RS aims to produce recommendations based on existing rules that satisfy a user's needs. In the context of drug recommendation, this knowledge involves many different conditions. For example, death reports for a specific drug and drug interactions are two important pieces of information that a drug recommender system has to consider before recommending a list of medicines to a patient. **2-5 AI-based recommendation systems** Over time, many different artificial intelligence approaches have been applied to recommendation systems. The tendency to use AI methods in recommender systems is mostly due to the availability of big data and the diversity of recommendation system approaches, which can benefit from AI, particularly machine learning algorithms. Deep learning, as a subfield of machine learning, has attracted many researchers from a broad variety of disciplines due to its ability to learn from data. Recently, there has been much research on deep learning-based recommendation systems [12]; Multilayer Perceptrons (MLP), Autoencoders (AE), Convolutional Neural Networks (CNN), and Recurrent Neural Networks (RNN) are among the most widely used deep learning models in RS [13]. Many of these deep learning-based approaches have contributed to work on CB, CF, and other types of RS [14]. Also, some works utilize hybrid deep networks, like the combination of RNN and CNN [15]. Moreover, to integrate the advantages of memorization and generalization for recommender systems, a wide & deep neural network has been used [16], and the model shows better results, with increased app acquisitions on Google Play. **2-6 Other types of recommendation systems** Although these techniques are the basic and most widely used recommender system approaches, several more types of recommender systems have been suggested in the literature, and the authors in [17] give a detailed classification. Medical and drug recommendation is one of the important applications of recommender systems, using recommender system techniques to recommend medicine, predict the usefulness of drugs, etc. In the following sections, the position of recommender systems in medical science, particularly in the pharmaceutical sector, and the state of the art in this field are discussed. ## 3 Recommender Systems in Medical Science One of the attractive and important applications of recommendation systems is medical and drug product recommendation. Here are the major differences between medical recommender systems and other recommender systems: * Medical recommender systems care more about the health of patients than about making a profit. * Security is the primary goal in drug recommender systems. * Many existing recommender system techniques cannot be used, and others must be used with caution because of safety issues. * In the long term, time is considered an important factor in recommending a drug.
There are many situations where some drugs' negative effects are discovered over time. One example is the drug zimeldine [18]. So, a comprehensive medical recommender system should consider ratings at different time stamps. In a drug recommender system, the domain is medicine, and the exact contents to be recommended are one or more of the following: 1. A list of drugs, at least one. 2. The dose, the amount of drug taken at one time. 3. The frequency at which the drug doses are taken over time. 4. The duration, how long the drug is taken. Numbers 2 to 4 in the above list are referred to as the _dosage_. Therefore, we can define _a drug recommender system_ as a smart system able to recommend a list of drugs plus their dosages with high accuracy relative to a physician's real prescription, and with a positive effect on the patient that available data can partially verify. It should be noted that, of course, there is no medicine recommender system that we can trust thoroughly, and like other artificial intelligence systems applied in healthcare, their use and ethical issues must be addressed appropriately [19], [20]. ## 4 Literature review Medical recommender systems have been around for many years, even before the emergence of recommender systems as a new field in computer science. According to [21], medicine recommender systems fall into two broad categories, named "_ontology and rule-based approaches_" and "_data mining and machine learning-based_" approaches. Ontology-based recommender systems use the hierarchical organization of users and items to improve the recommendation [22]. Data mining and machine learning algorithms in the medical field are used to predict and recommend things like drug usefulness, the presence of a disease [23], [24], the condition of the user, or ratings [25], [26]. For example, SVM, a backpropagation neural network, and the ID3 decision tree have been used in [27] for recommending drugs. The performance of these approaches has been compared in the above work, and the authors have shown that SVM has better accuracy and efficiency compared to the other algorithms. Their dataset contains the patients' features: age, sex, blood pressure, cholesterol, Na and K levels, and drug. Some other works, while described as medical or medicine recommender systems, consider a detection and classification task, where the trained dataset has some patient attributes, the objective of the work is the detection or prediction of a disease, and for each disease a set of medicines is then recommended [27]. Sentiment analysis of drug reviews is one of the basic approaches for drug recommendations [28], [29], [30], [31], [32]. The sentiment analysis in these works mainly aims to recommend a drug or extract useful information like adverse drug reactions. In [31], different deep learning approaches, such as CNN, LSTM, and BERT, have been investigated for sentiment analysis of patients' drug reviews. In another work, a combination of CNN and RNN has been applied. In addition to recommendation systems, sentiment analysis and opinion mining of drug reviews is an active research area in drug review processing [33]. This analysis can be used for automatic opinion mining and drug recommendation. A hybrid knowledge-based medical prescription approach has been presented in [34]. The authors use historical medical prescriptions to recommend a list of medicines to physicians.
The approach uses the similarity between cases, where a case is medical information like demography, treatment, age, sex, symptoms, and diagnosis. Based on the degree of similarity, a drug list is produced. The list is complemented by Bayesian reasoning, where a model of the conditional probability of drugs is built. This approach has been applied at the Humphrey & Partners Medical Services Limited medical center. Some works in medical recommendation have focused on particular diseases, like diabetes [35]. Their model is based on an ontology of medical knowledge and a multi-criteria decision-making approach to compute the medication. Then, using entropy, the information about the patient's history is computed, and finally, the most appropriate medications are recommended to the physicians. Many recommender system approaches have not been well explored in medical and pharmaceutical recommendation. However, using polarity in the sentiment analysis of user comments is one of the important uses of NLP in recommendation systems. It can be viewed as determining whether a word or phrase in a document, or even a whole document, is positive, negative, or neutral in general. Figure 1 shows the broad classification of different recommendation system approaches in pharmaceutical research. We can see a growing tendency to use machine learning approaches in this field.

Figure 1: Broad classification of recommendation system techniques.

## 5 Material and methods In this section, we formulate the medicine recommender system problem and present our approach for the general medicine recommender system. Many recommendation systems, like collaborative filtering and content-based approaches, mostly rely on past information to make decisions for the current situation. This is not always possible in the domain of drug recommendation. A patient's condition differs from that of other patients and changes for the same patient over time. So, in addition to historical information like general ratings, reviews, and the effectiveness of the drug, it is necessary to use the patient's current condition to make a more accurate decision. We also cannot rely on diversity-based recommendations as used in some recommender systems, like the one used by Netflix: even if a drug is not rated highly, it can be suitable for some patients. On the other hand, many recommender systems rely on knowledge from users; when there is a lack of user knowledge, we cannot personalize the recommendations. While we have an adequate dataset for our recommendation task, a problem emerges when new inputs enter the system. In our medicine recommender system, these inputs can be new patients or new drugs. The _cold start_ problem is the term used for this issue, and it is a challenging issue in designing any recommender system. We reduce this effect by applying a clustering-based approach: because drugs are clustered into specific categories, we can put a new drug into the category it belongs to and use the same ratings for the new drug as for those in its category. This is effective in solving the cold start problem in our recommender system. **Proposed method** Since every recommendation technique has its own benefits, a universal recommender system should be able to take advantage of all of these techniques to improve its outcome. The drug recommendation system in our work takes the benefits of different recommendation categories and combines their advantages over several steps.
First, natural language processing and machine learning algorithms are applied in the context of basic recommender system techniques. This section discusses all phases of our model for building a comprehensive drug recommender system. This paper presents a novel hybrid drug recommender system (RS) with features of several recommender systems. It uses natural language processing (NLP) and other machine learning techniques to implement the system. The proposed RS approach is a new recommendation method for pharmaceutical recommendation, which can be considered a hybrid of CB, model-based CF, knowledge-based, and AI-based methods. In this section, we elaborate on each step toward the final drug recommendation for each patient. After very intensive web crawling through two well-known pharmaceutical websites, Drugs.com and Druglib.com, and building three different datasets, feature extraction and modeling are performed. In the next step, recommendations for proper drugs are made. At the final stage, the list of drugs is refined based on defined rules in addition to the ratings and drug features, which is an important aspect of our medicine recommender system. Figure 2 presents the whole RECOMED model in the training stage of our work; it consists of four components, and we elaborate on each phase of our approach in more detail in the following parts.

Figure 2: Components of the RECOMED drug recommendation system in the training stage.

**5-1 Dataset extraction** In this work, any recommendation for drugs and their dosage is based on patients' features, like age, gender, previous illness, and other drugs they consume, and drugs' features, like drug classification, side effects, and drug interactions. So, in this phase, the extraction of the user features, drug features, and drug interaction datasets from the Drugs.com and Druglib.com databases is accomplished. In the second step of this phase, the dataset is prepared for clustering and modeling the recommendation system. The review field in the drug recommendation database contains users' and caregivers' opinions about drugs' effectiveness. To our knowledge, none of the existing datasets have complete and comprehensive patient and drug information. We built three different datasets named _users_, _drugs_, and _interactions_. #### 5.1.1 Drugs and users datasets In this work, _Druglib.com_ and _Drugs.com_ were employed to extract information about patients and drugs and to build the two datasets named _drugs_ and _users_. We should mention that there are also other databases for drug information and recommendations, like SIDER [50] for drug side effects; we will include them in future work to build a complete dataset for drugs. Three features, consisting of side effects, benefits, and membership in a given drug category, were considered for drugs. First, the different drug categories and side effects were extracted into Tables 1 and 2: 150 different drug categories and 128 different side effects were extracted from the _Druglib.com_ database. Then the drug _benefits_ were also extracted and combined with the information in the above tables, and finally the _drugs_ dataset was prepared, as partially shown in Table 3. We also extracted the _users_ dataset of patient features and comments on different drugs. Six features are considered for the _users_ dataset: age, gender, current disease (condition), other conditions, other drugs taken, and user level, which is patient or caregiver. Table 4 represents the structure of this dataset.
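For illustration, a crawler of the kind described above can be sketched as follows. The URL argument and the CSS selectors are placeholders and do not reflect the actual page structure of Druglib.com or Drugs.com.

```python
# Minimal sketch of a drug-page scraper (our illustration, not the released
# crawler). Selectors such as ".drug-category" are hypothetical.
import requests
from bs4 import BeautifulSoup

def scrape_drug_page(url: str) -> dict:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    texts = lambda sel: [t.get_text(strip=True) for t in soup.select(sel)]
    return {
        "name": soup.select_one("h1").get_text(strip=True),
        "category": texts(".drug-category"),      # hypothetical selector
        "side_effects": texts(".side-effect"),    # hypothetical selector
        "benefits": texts(".benefit"),            # hypothetical selector
    }

# rows = [scrape_drug_page(u) for u in drug_urls]  # then collect into a dataset
```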
#### 5.1.2 Interactions dataset The last dataset prepared in this work is the _interactions_ dataset. This information is important for recommending an appropriate medicine list to the patients. We extracted drug interaction information from Drugs.com, and after mapping drug names to their counterparts in Druglib.com, the interactions dataset, partially presented in Table 5, was created with information on 180 drug interactions. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & Abilify &... & Cymbalta &... & Synthroid &... & Zyban \\ \hline Abilify & - &... & Moderate &... & - &... & - \\ \hline Accupril & - &... & - &... & Moderate &... & - \\ \hline Aciphex & - &... & Major &... & - &... & - \\ \hline... &... &... &... &... &... &... &... \\ \hline Zyban & - &... & - &... & - &... & - \\ \hline \end{tabular} \end{table} Table 5: **Drug interaction dataset** ### Dataset preparation In this phase, our dataset is prepared for creating the recommendation model in the next step. First, using natural language processing (NLP) techniques, user and drug features are extracted; then, normalization and clustering are accomplished to prepare the datasets for modeling the recommendation system. Here, we elaborate on each of these steps. #### 5.2.1 Feature extraction The first pre-processing step is feature extraction from the user features and drug features datasets. The Bag-of-Words (BOW) method is used for this purpose. **NLP for extracting drug and user features** The feature extraction was mainly performed using natural language processing (NLP) techniques. Two well-known methods to extract text features by NLP are Bag-of-Words (BOW) and term frequency-inverse document frequency (TF-IDF). Our proposed pharmaceutical recommendation system uses the BOW method to perform feature extraction from the database texts. This method consists of four steps: * Text pre-processing * Vocabulary creation * Building the feature matrix * Polarity of user comments Each part of this process is described here: **Text pre-processing** In the text pre-processing step, all punctuation and symbols are removed, and abbreviations are converted into their full names or phrases. Some of these conversions are presented in Table 6. Moreover, spelling mistakes were corrected using the TextBlob library of Python, and stop words were removed using a predefined list of stop words. #### Vocabulary creation Using NLP techniques, a vocabulary of words is created in the second step of feature extraction. For this purpose, an array of words is created by checking all registered words in the dataset. This array is constructed from the unique words of the dataset and their frequencies. To avoid filling the feature matrix in an arbitrary order, words are rearranged according to their frequency. Moreover, to deal with the sparseness of the feature matrix, words with low frequency are removed. Some of the most frequent words extracted from the datasets created and discussed in the previous section can be seen in Table 7. #### Building the feature matrix The feature matrix is created in the third step of extracting the features. For this purpose, a unique word is assigned to each matrix column, and a new row is considered for each user review. Each cell of this matrix represents the existence of the word in the user's review, which is essentially zero or one.
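A minimal sketch of this binary feature matrix, using scikit-learn's CountVectorizer as a stand-in for the custom vocabulary construction described above (its min_df option approximates the pruning of low-frequency words):

```python
from sklearn.feature_extraction.text import CountVectorizer

reviews = [
    "severe pain after surgery, the drug helped",
    "chronic infection, no improvement at all",
]
vectorizer = CountVectorizer(binary=True, stop_words="english", min_df=1)
X = vectorizer.fit_transform(reviews)   # rows: reviews; columns: words; cells: 0/1
print(vectorizer.get_feature_names_out())
print(X.toarray())
```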
#### Polarity of user comments (PUC) We used NLP and opinion mining to extract the PUC. This approach aims at extracting the opinion of users as a positive or negative comment. The output of this component is used in the users' rating matrix. \begin{table} \begin{tabular}{|c|c|} \hline Abbreviations & Original Form \\ \hline HBP & high blood pressure \\ \hline COPD & chronic obstructive pulmonary disease \\ \hline PMS & premenstrual syndrome \\ \hline OCD & obsessive-compulsive disorder \\ \hline \end{tabular} \end{table} Table 6: EXAMPLES OF ABBREVIATIONS TO FULL NAME CONVERSIONS \begin{table} \begin{tabular}{|c|c|} \hline Term & Frequency \\ \hline Pain & 33 \\ \hline Infection & 22 \\ \hline Surgery & 15 \\ \hline Chronic & 13 \\ \hline \end{tabular} \end{table} Table 7: EXAMPLES OF MOST FREQUENT WORDS IN DATASETS ``` Input: UserComments, StopWords Output: PolarityOfUserComments 1. Remove capital letters and emojis from UserComments 2. Remove StopWords from UserComments 3. Word-tokenize UserComments 4. Lemmatize the words of UserComments 5. Compute the frequency of words in UserComments 6. TextBlob(UserComments) ``` **Algorithm 1.** Comment Polarity Acquisition To obtain a more accurate rating for drugs, we combined user comments and ratings from different sources. This overall rating is called the Combined User Rating (CUR) parameter and is obtained from analyzing user comments and ratings as follows: 1. \(Overall\ Rating\in\mathbb{Z},\ 0\leq Overall\ Rating\leq 10\) 2. \(Effectiveness\in E,\ E=\{\text{Ineffective, Marginally Effective, Moderately Effective, Considerably Effective, Highly Effective}\}\) 3. \(Side\ Effect\in S,\ S=\{\text{No Side Effect, Mild Side Effect, Moderate Side Effect, Severe Side Effect, Extremely Severe Side Effect}\}\) 4. User Comment The CUR parameter is calculated as equation (6), where \(DOE\) and \(DOS\) denote the ordinal levels (0 to 4) of the effectiveness and side-effect scales, and the above parameters are replaced by CUR in the user feature matrix: \[\text{CUR}=\frac{\frac{Overall\ Rating}{10}+\frac{DOE}{4}}{2}-\frac{DOS}{4} \tag{6}\]
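The following sketch combines Algorithm 1 and equation (6): the comment polarity is obtained with TextBlob, and the ratings are blended into CUR. How exactly the comment polarity enters CUR is our assumption, since that part of equation (6) is not fully specified above.

```python
from textblob import TextBlob

def combined_user_rating(overall: float, doe: int, dos: int, comment: str) -> float:
    """overall in 0..10; doe, dos are the 0..4 effectiveness and side-effect
    levels; comment is the free-text review."""
    polarity = TextBlob(comment.lower()).sentiment.polarity   # in [-1, 1]
    base = (overall / 10 + doe / 4) / 2 - dos / 4             # equation (6)
    return (base + polarity) / 2                              # assumed blending

print(combined_user_rating(8, 3, 1, "This drug relieved my pain quickly."))
```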
**Normalization-** After extracting features from the drug and user datasets, these features are also normalized so that the model trains better. Figure 3 shows the final user rating dataset after applying the combined user rating acquisition stage. This stage converts the dataset on the left side into the dataset on the right side. Each column in both datasets holds a given user's features along with the drug name they rate. In the left-side dataset, we can see the different ratings of the user; in the right-side dataset, these ratings are combined into CUR using equation (6).

Figure 3: Combination of different user ratings for a given drug

### Clustering Clustering is considered one of the main steps in a recommender system for improving diversity, consistency, and reliability [51], and it has been considered in many works on recommender systems, particularly for reducing the sparsity of data [52, 53]. Due to the sparseness of the rating matrix, we consider a clustering-based approach, and patients are clustered before performing the matrix factorization, which is elaborated in the next part. This clustering is mostly required because users usually review only one drug corresponding to a specific disease, so the rating matrix is highly sparse. Clustering can help group the users and drugs with similar features and significantly alleviate the sparsity problem. Users are clustered based on their gender, age, comments, and whether they are a patient or caregiver. It is clear that after clustering, each class of users reviews several drugs, which can improve the matrix factorization process. We used a modified version of the K-means algorithm in [54] to perform this clustering. While the original K-means algorithm is unsupervised and used for clustering, its number of clusters is pre-determined, so it could not be utilized directly in our proposed drug recommendation system. Therefore, in this paper, we employed the U-K-means method [54]. This method performs unsupervised K-means and determines the best number of clusters, leading to better classification performance. If each row of the dataset and the center of each cluster are represented by \(F=\{f_{1},...,f_{n}\}\) and \(A=\{a_{1},...,a_{k}\}\), respectively, the K-means objective function is defined as (7): \[J(M,A)=\sum_{i=1}^{n}\sum_{j=1}^{k}M_{ij}\left\|f_{i}-a_{j}\right\| \tag{7}\] where \(k\) is the number of clusters, \(n\) is the number of rows in the dataset, and \(M_{ij}\) indicates the membership of \(f_{i}\) in the \(j\)th cluster. In the K-means algorithm, this objective function must be minimized. In [55], an entropy-based method is proposed to improve K-means. In this method, to determine the centers of the clusters, the term (8), in which \(\alpha_{j}\) denotes the mixing proportion of cluster \(j\), is added to the objective function: \[B_{n}\sum_{j=1}^{k}\alpha_{j}\ln\alpha_{j} \tag{8}\] This term adds the effect of the cluster imbalance to the objective function. Equation (9) shows the improved objective function; when its coefficient \(B\) is zero, the original K-means objective is recovered: \[J(M,A)=\sum_{i=1}^{n}\sum_{j=1}^{k}M_{ij}\left\|x_{i}-a_{j}\right\|-B\sum_{j=1}^{k}\eta_{j}\ln\eta_{j} \tag{9}\] where \(\eta_{j}\) characterizes the members of a cluster and is determined by (10): \[\eta_{j}=\frac{\sum_{i=1}^{n}M_{ij}x_{i}}{\sum_{i=1}^{n}M_{ij}} \tag{10}\] In [54], the term (11) is considered to determine the optimal number of clusters; by adding this term to (9), the final objective function is obtained as (12): \[\mathrm{L}\sum_{i=1}^{n}\sum_{j=1}^{k}M_{ij}\ln\alpha_{j} \tag{11}\] \[J(M,A,\alpha)=\sum_{i=1}^{n}\sum_{j=1}^{k}M_{ij}\big{\|}x_{i}-a_{j}\big{\|}-B\sum_{j=1}^{k}\alpha_{j}\ln\alpha_{j}-\mathrm{L}\sum_{i=1}^{n}\sum_{j=1}^{k}M_{ij}\ln\alpha_{j} \tag{12}\] The pseudocode of the U-K-means clustering method, based on the approach in [54], is presented in Algorithm 2.
```
1.  Initialize c^(0) = n, alpha_k^(0) = 1/n, a_k^(0) = x_k
2.  Initialize the learning rates L^(0) = B^(0) = 1
3.  Set t = 0 and fix a tolerance eps > 0
4.  Repeat until max_k ||a_k^(t+1) - a_k^(t)|| < eps:
5.    M_ik^(t+1) = 1 if ||x_i - a_k||^2 - L ln(alpha_k)
        = min_{1<=s<=c} ( ||x_i - a_s||^2 - L ln(alpha_s) ), else M_ik^(t+1) = 0
6.    L^(t+1) = exp(-c^(t+1) / 250)
7.    alpha_k^(t+1) = (1/n) sum_i M_ik
        + (B/L) alpha_k^(t) ( ln alpha_k^(t) - sum_s alpha_s^(t) ln alpha_s^(t) )
8.    B^(t+1) = min( (1/c) sum_k exp(-eta n ||a_k^(t+1) - a_k^(t)||),
        (1 - max_k (1/n) sum_i M_ik) / max_k (1/n) sum_i M_ik )
9.    Update c^(t) to c^(t+1) by discarding the clusters with alpha_k^(t+1) <= 1/n
10.   Renormalize alpha_k and M_ik over the surviving clusters
11.   a_k = sum_i M_ik x_i / sum_i M_ik
12.   If t >= 60 and c^(t-60) - c^(t) = 0, set B^(t+1) = 0
13.   t = t + 1
```
**Algorithm 2.** Our modified pseudocode of U-K-means based on [54]. ### Modeling In the next step, the clustering outcome is used to build a recommender system model able to recommend the best drugs. Later, we filter the model's output with a knowledge-based component for safety reasons. #### Neural Network-based Matrix Factorization Matrix factorization is a popular method for recommender systems aiming at finding two rectangular matrices, called the user and item matrices, with smaller sizes than the rating matrix [56]. The dot product between these two matrices results in the rating matrix.

Table: structure of the sparse user-drug rating matrix (columns indexed 1 to 619).

To reduce the computational overhead, cope with the sparsity of the ratings, and increase accuracy, we propose a neural network-based matrix factorization technique. The first two matrices, Rating and Effectiveness, are constructed by extracting information from Druglib.com. In our model, the rating matrix \(Rating\in\mathbb{R}^{n\times m}\) is estimated as the multiplication of the two matrices \(Clusters^{n\times k}\) and \(Drugs^{m\times k}\) as (13): \[Rating\approx Clusters\cdot Drugs^{T} \tag{13}\] This model applies a neural network algorithm to estimate the users' ratings for each medicine. The clustered users and drugs and the users' and drugs' features are used in building this new model, as illustrated in Figure 4.

**Figure 4- Our proposed customized matrix factorization method.**

The input to the neural network is the clustered user and drug features: the Drug Embedding and User Embedding matrices are the input to this network, and the drug and user matrices are the network's outputs. With sparse rating matrices, the forward and backward pass calculations are accomplished just for the non-zero ratings to reduce the computation load.
The neural network layer output is calculated as: \[a^{z+1}=f^{z+1}\Big{(}\sum_{i=1}^{K}w_{i}^{z+1}\,\psi^{z+1}(n,m)\,a_{i}^{z}+b_{j}^{z+1}\Big{)},\quad i\in(1,K),\ j\in(1,MN),\ z\in(0,Z-1),\ n\in(0,N),\ m\in(0,M) \tag{14}\] In this equation, \(f^{z+1}\) is the activation function, \(w_{i}^{z+1}\) are the weights, \(b_{j}^{z+1}\) are the biases, \(Z\) is the number of layers, \(M\) is the number of drugs, \(N\) is the number of users, \(a^{Z}\) is the output of the network, \(K\) is the number of features for drugs or users, and \(MN\) represents the number of drugs in the user network and the number of users in the drug network. Finally, \(\psi^{z+1}\) is the rating existence function defined as: \[\psi^{z+1}(n,m)=\begin{cases}1&\text{if rating }(n,m)\text{ exists or }z<Z-1\\ 0&\text{if rating }(n,m)\text{ does not exist}\end{cases} \tag{15}\] Also, the backward pass calculations are as in equations (16) to (19) for the output and hidden layers, respectively. For the output layer: \[\Delta out=\big{(}R_{(n,m)}-a^{Z}\big{)}\,\psi^{Z}(n,m)\,f^{Z\prime}(a^{Z}),\quad n\in(0,N),\ m\in(0,M) \tag{16}\] \[\Delta W^{Z}=\Delta out\cdot a^{Z}\cdot\gamma^{Z} \tag{17}\] For the hidden layers: \[\Delta Hidden^{z}=f^{\prime}(a^{z})\sum_{i}\Delta out_{i}\,w_{i}^{z},\quad i\in(1,K),\ z\in(0,Z-1) \tag{18}\] \[\Delta w^{z}=\Delta Hidden^{z}\cdot a^{z}\cdot\gamma^{z},\quad z\in(0,Z-1) \tag{19}\] In these equations, \(f^{\prime}\) is the gradient of the activation function, \(R_{(n,m)}\) is the rating corresponding to the users or drugs, \(\Delta W^{Z}\) is the error correction for the output layer, \(\Delta w^{z}\) are the error corrections for the hidden layers, and \(\gamma^{z}\) is the learning rate. The weights are updated according to equation (20): \[w_{new}^{z}=w_{old}^{z}+\Delta w^{z},\quad z\in(0,Z) \tag{20}\]
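To make the masked updates concrete, here is a minimal NumPy sketch in which a plain bilinear factorization stands in for the full network of Figure 4; only the entries with \(\psi(n,m)=1\) contribute to the gradients, mirroring equations (16)-(19).

```python
import numpy as np

def masked_mf(R, k=8, lr=0.01, epochs=200, seed=0):
    """Factor the cluster-by-drug rating matrix R (np.nan = missing) as in
    equation (13), with gradient updates computed only where a rating
    exists, i.e. where the mask psi(n, m) equals one."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    mask = ~np.isnan(R)                      # psi(n, m)
    Rf = np.nan_to_num(R)
    U = 0.1 * rng.standard_normal((n, k))    # cluster embeddings
    V = 0.1 * rng.standard_normal((m, k))    # drug embeddings
    for _ in range(epochs):
        E = mask * (Rf - U @ V.T)            # error only on observed ratings
        U += lr * E @ V                      # analogue of equations (17)/(19)
        V += lr * E.T @ U
    return U, V

# U @ V.T then predicts a rating for every (cluster, drug) pair,
# including the pairs that were missing in R.
```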
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{Drug Name} & Abilify & Actemra & ... & Zometa \\ \hline
\multirow{2}{*}{Not allowed age} & Minimum & 24 & 29 & ... & 31 \\ \cline{2-6}
 & Maximum & 45 & 64 & ... & 67 \\ \hline
\multirow{4}{*}{Allowed gender} & None & 0 & 1 & ... & 0 \\ \cline{2-6}
 & Female & 0 & 0 & ... & 0 \\ \cline{2-6}
 & Male & 0 & 0 & ... & 0 \\ \cline{2-6}
 & Both & 1 & 0 & ... & 1 \\ \hline
\end{tabular}
\end{table} Table 10: User feature-based rules.

Figure 5: Knowledge-based component of our proposed approach, based on the bottom left of Figure 2.

For our proposed knowledge-based component, another adverse-events dataset was generated from Druglib.com. The structure of this dataset is presented in Table 11. Features in this dataset include age, gender, the name of the drug taken by a given patient, its adverse event, the reaction, and other drugs used by the patient. We fitted Poisson and Gaussian distributions to the patients' gender and age data in this dataset to model the adverse events of using a specific drug; these adverse events can be death, hospitalization, disability, or other life-threatening events. Since in this case we require only the average and standard deviation, using Poisson and Gaussian distributions makes it possible to compute the allowed gender for recommending a drug to a patient with far less memory than a machine learning model would need for this specific task. Assume that, on average, recommending a drug \(\gamma\) a total of \(\eta\) times to patients with \(gender=female\) produces one of the adverse events listed in Table 11; then the probability that a female patient experiences one of the adverse events when drug \(\gamma\) is recommended is calculated as in equation (21):

\[P_{n}(x)=\lambda_{Female}^{\gamma}\,e^{-\lambda_{Female}^{\gamma}},\qquad\lambda_{Female}^{\gamma}=\frac{\text{Number of Adverse Events}}{\eta} \tag{21}\]

Similarly, for a male patient this probability is calculated as in equation (22):

\[P_{n}(x)=\lambda_{Male}^{\gamma}\,e^{-\lambda_{Male}^{\gamma}},\qquad\lambda_{Male}^{\gamma}=\frac{\text{Number of Adverse Events}}{\eta} \tag{22}\]

Using the above calculations, if the probability of an adverse event for a given gender and medicine exceeds a given threshold, the medicine is removed from the list and is not recommended to the patient. In this paper, we set the \(threshold=50\%\). A normal distribution was also used for setting the rules related to patients' ages. Suppose the average and standard deviation of the ages of patients who have taken medicine \(\gamma\) and had an adverse event are \(\mu\) and \(\sigma\), respectively. Then the normal distribution function for age is given by equation (23):

\[f(x)=\frac{1}{\sqrt{2\pi}\sigma_{\gamma}}e^{-\frac{1}{2}\left(\frac{x-\mu_{\gamma}}{\sigma_{\gamma}}\right)^{2}} \tag{23}\]

and so, for patients taking medicine \(\gamma\), the age-range condition of equation (24) has to be met to minimize the adverse events:

\[X\in\left(\mu_{\gamma}-1.96\left(\frac{\sigma_{\gamma}}{\sqrt{n}}\right),\ \mu_{\gamma}+1.96\left(\frac{\sigma_{\gamma}}{\sqrt{n}}\right)\right),\qquad 1-\alpha=95\%,\ Z_{0.975}=1.96 \tag{24}\]

In this research, we used the rules related to users' features together with the medicine rules and drug interactions. In this regard, the drug-interactions dataset was used to exclude recommendations for drugs having high interactions with other drugs.

## 6 Results and Discussion

This section discusses the implementation of our proposed drug recommendation system and the newly generated datasets.
First, we describe the extracted and newly generated datasets, and then we present the results of our implemented system.

### The dataset

As discussed in the proposed method, we used the information from two drug databases, Druglib.com [7] and Drugs.com [5]. The first database, Druglib.com, is a comprehensive resource for drug information. For each drug, a variety of information such as the description, side effects, drug ratings & reviews by patients, and clinical pharmacology is provided. Drugs.com is another database of drug information, and many recommendation systems have used it to build their models. Both the original and the revised versions of Drugs.com have been used in recommender systems to evaluate the performance of different approaches. We crawled these pharmaceutical websites to construct our intended datasets with the required features in a structured way. As a result, we gathered much useful information about drugs and patients' conditions and collected it into three datasets as follows:

* The first extracted dataset is the Rating dataset, consisting of patients' features and their ratings of drugs, with 3294 samples.
* The second dataset consists of drug features, containing drug categories, side effects, and benefits.
* The last dataset is the Interaction dataset, containing interactions between drugs.

To evaluate the performance of our system, we used the most popular machine learning evaluation metrics. Accuracy, sensitivity (recall), specificity, and precision were the basic metrics that we applied to our model. We used 70 percent of the samples (2304 samples) in the dataset for training our model, 20 percent (660 samples) for validation, and 10 percent (330 samples) for testing. After obtaining the values for true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN), the different metrics can be calculated. We compared our results with the existing approaches in [27, 48, 57, 58], and [59]. We implemented the algorithms in these papers with the datasets they used. In [27], SVM and a recurrent neural network (RNN) were used to recommend a drug to a patient. In [48], the authors first considered clustering drugs according to drug information, similar to the algorithm proposed in this paper; collaborative filtering is then used to recommend a drug. Unlike our work, however, they did not consider the classification of users and their features. Finally, in [57], an improved matrix factorization is used, which filters the results using NSGA-III to improve accuracy, diversity, novelty, and recall. Table 12 presents the comparison between our work and other drug recommendation systems in terms of important machine learning metrics. The comparison results, consisting of the F2 measure, ROC, and confusion matrices of the different approaches, are depicted in Figures 6 to 8.

Figure 7: Comparison results for ROC.

Figure 8: Comparison results for the confusion matrix.

The construction of a confusion matrix for different ratings is also shown in Figure 9. The predictions are compared with the actual ratings of users and drugs, both when they are considered separately and when they are combined according to our proposed approach. One of the important components of our recommender system is the final knowledge-based stage. This component helps prevent death, hospitalization, and disability by considering drug interactions and the user's age.
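As a concrete sketch of that screening logic, the snippet below applies the gender-specific Poisson probability of equations (21)-(22) and the 95% age band of equation (24). The per-drug statistics dictionary is a hypothetical placeholder, not values estimated from our Adverse Events dataset:

```python
import math

def adverse_event_prob(n_events: int, n_recommendations: int, x: int = 1) -> float:
    """Poisson probability of x adverse events per eqs. (21)-(22); lambda = events / eta."""
    lam = n_events / n_recommendations
    return math.exp(-lam) * lam ** x / math.factorial(x)

def within_age_band(age: float, mu: float, sigma: float, n: int) -> bool:
    """95% confidence band on the mean age of affected patients, eq. (24)."""
    half = 1.96 * sigma / math.sqrt(n)
    return mu - half <= age <= mu + half

# Hypothetical statistics for one drug, as would be estimated from the Adverse Events dataset.
stats = {"events": {"female": 3, "male": 1}, "recs": {"female": 40, "male": 55},
         "mu": 52.0, "sigma": 9.5, "n": 30}
patient = {"gender": "female", "age": 37}

p = adverse_event_prob(stats["events"][patient["gender"]], stats["recs"][patient["gender"]])
# Following the text: drop the drug if p exceeds the 50% threshold, or if the
# patient's age violates the eq. (24) band that has to be met to minimize adverse events.
blocked = p > 0.5 or not within_age_band(patient["age"], stats["mu"], stats["sigma"], stats["n"])
print(f"P(adverse event) = {p:.3f}, blocked = {blocked}")
```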
The Adverse Events dataset is used in this regard to evaluate our system's performance in recognizing such cases and recommending the appropriate drugs. This dataset contains 2486 samples, of which 80% are used for rule extraction and the remaining 20% for testing.

Figure 9: Confusion matrix obtained for the proposed method.

The following parameters are considered for the evaluation:

\[Death\ Ratio=\frac{Number\ of\ Deaths}{Total\ Number\ of\ Recommendations}\]
\[Disability\ Ratio=\frac{Number\ of\ Disabilities}{Total\ Number\ of\ Recommendations}\]
\[Hospitalization\ Ratio=\frac{Number\ of\ Hospitalizations}{Total\ Number\ of\ Recommendations}\]

For comparison, the system's performance for the different adverse events was calculated once without the knowledge-based component and once with it. Table 13 and Figure 10 present the results of this comparison. The knowledge-based component is an essential part of a drug recommendation system for reducing adverse events and improving the quality of recommendations. We also considered another important metric for evaluating recommender systems: the _hit rate_.

\begin{table}
\begin{tabular}{|c|c|c|}
\hline
Adverse event & Without knowledge-based component & With knowledge-based component \\ \hline
Death rate & 44\% & 6\% \\ \hline
Hospitalization & 15\% & 2\% \\ \hline
Disability & 4\% & 0.7\% \\ \hline
\end{tabular}
\end{table} Table 13: Comparison results for the knowledge-based component.

Figure 10: Comparison results of adding the knowledge-based component to the recommendation system.

The dataset's 330 testing samples are utilized in the hit-rate evaluation. The hit rate is calculated as the ratio of the total hits in the top 10 recommended drugs returned for all users to the total number of testing samples. So, if \(\eta\) is the number of relevant predicted drugs for all users and \(N\) is the total number of testing samples, then, according to [60], the hit rate is calculated as in equation (25):

\[hit\_rate=\frac{\eta}{N} \tag{25}\]

The result of the hit-rate evaluation is presented in Figure 11. As can be seen from this figure, our proposed approach achieves a hit rate of 0.49, which is better than all other approaches. The next evaluation metric is the _cumulative hit rate_, which counts the number of hits with ratings above a given threshold and ignores predicted ratings lower than the threshold. The result of the cumulative hit rate with the threshold set to 4 is shown in Figure 12. The cumulative hit rate is calculated as in (26):

\[Cumulative\ Hit\ Rate=\frac{Number\ of\ hits\ with\ rating\ above\ threshold}{N} \tag{26}\]

Using this threshold produces a better match with the user's interest in the recommended drug.

Figure 11: Top-10 hit rate of the recommendation systems.

Our results are encouraging for the field of drug recommendation. Our approach combines the benefits of the basic recommender techniques with less computational overhead through a novel modeling approach and the use of statistical methods. It also classifies drugs and users in terms of their features, leading to high accuracy compared to state-of-the-art algorithms. However, better results could be achieved by considering the characteristics of diseases and recommending drugs based on disease features in addition to the features of patients and drugs.
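For reference, the hit rate and cumulative hit rate of equations (25) and (26) reduce to a few lines of code. The top-10 lists and held-out triples below are placeholders for the actual system outputs and test samples:

```python
def hit_rate(top10, test_samples, threshold=None):
    """Eq. (25); with `threshold` set, eq. (26): only hits rated above it are counted."""
    hits = 0
    for user, drug, rating in test_samples:            # held-out (user, drug, rating) triples
        if drug in top10.get(user, []) and (threshold is None or rating > threshold):
            hits += 1
    return hits / len(test_samples)

top10 = {"u1": ["Abilify", "Zometa"], "u2": ["Actemra"]}   # top-10 lists (shortened here)
test = [("u1", "Zometa", 5), ("u1", "Actemra", 4), ("u2", "Actemra", 3)]
print(hit_rate(top10, test))                # 0.67: plain hit rate
print(hit_rate(top10, test, threshold=4))   # 0.33: cumulative hit rate at threshold 4
```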
## 7 Conclusion

In this paper, we proposed a comprehensive drug recommender system that takes advantage of all the basic recommender system techniques, applying natural language processing, neural network-based matrix factorization, and, most importantly, knowledge-based recommendation to recommend the most appropriate drugs to patients. Compared with conventional matrix factorization, our proposed method improves accuracy, sensitivity, and hit rate by 26%, 34%, and 40%, respectively. In comparison with other machine learning approaches, we obtained improvements in accuracy, sensitivity, and hit rate of 31%, 29%, and 28% on average, respectively. Our approach can be used as an adjunct tool to recommend drugs to patients, improving the quality of prescriptions and reducing the errors made by medical practitioners.

Figure 12: Top-10 cumulative hit rate of the recommendation systems.

In the future, we will extend the knowledge and information extraction from drug databases and include all existing patient features in the user features. We also plan to consider the features of the disease in the recommendation. These features can be captured by general practitioners and can help improve the performance of the proposed drug recommender system, enabling more accurate recommendations from more relevant features. In the final output of the recommendation, we will also include the dosage and effectiveness of a drug in addition to the list of drugs. In future work, we will also extract information from other drug resources, such as the SIDER database of drug side effects. Finally, it should be noted that a physician should approve the recommended medicines for safety reasons.

## Data availability

The datasets generated during the current study are available in the [https://github.com/DatasetsLibrary/RECOMMED](https://github.com/DatasetsLibrary/RECOMMED) repository. In addition, the preprocessed datasets and source code of this study are available at [https://github.com/DatasetsLibrary/RECOMMEDTool](https://github.com/DatasetsLibrary/RECOMMEDTool).
2301.07504
Ameliorating transport system focusing on sustainability and inclusiveness through a mixed-method research (A case study in Tehran, Iran)
Numerous studies have examined issues such as sustainable transport and urban inclusion; yet there is a gap concerning disadvantaged people and the hindrances they encounter in public transportation. This study assesses the sustainability of the transport system in the context of district 8 of Tehran, Iran, focusing on the needs of disadvantaged groups and giving suggestions to ameliorate transport planning, using a mixed-method research design consisting of face-to-face focus groups with the elderly, children, women, and disabled people, a questionnaire, map analysis (network analysis using ArcGIS), and key informant interviews. The results reveal various burdens for disadvantaged people, including a lack of enthusiasm toward active modes of transport because of safety problems, cost-ineffectiveness, and the inefficacy of public transportation for some disabled individuals. The recommendations for the area regarding these problems are dividing the district into smaller sub-districts, designing a complete street, and the location allocation of a park-and-ride system.
Melika Soufiemami
2022-12-31T03:33:22Z
http://arxiv.org/abs/2301.07504v1
Ameliorating transport system focusing on sustainability and inclusiveness through a mixed-method research (A case study in Tehran, Iran)

###### Abstract

Numerous studies have examined issues such as sustainable transport and urban inclusion; yet there is a gap concerning disadvantaged people and the hindrances they encounter in public transportation. This study assesses the sustainability of the transport system in the context of district 8 of Tehran, Iran, focusing on the needs of disadvantaged groups and giving suggestions to ameliorate transport planning, using a mixed-method research design consisting of face-to-face focus groups with the elderly, children, women, and disabled people, a questionnaire, map analysis (network analysis using ArcGIS), and key informant interviews. The results reveal various burdens for disadvantaged people, including a lack of enthusiasm toward active modes of transport because of safety problems, cost-ineffectiveness, and the inefficacy of public transportation for some disabled individuals. The recommendations for the area regarding these problems are dividing the district into smaller sub-districts, designing a complete street, and the location allocation of a park-and-ride system.

## 1 Introduction

Sustainable transportation is an aspect of sustainable development whose establishment would improve, or even eliminate, several urban issues. Sustainable transport planning (STP) is a means to decrease noise and air pollution, car accidents, traffic congestion, carbon footprint, the cost of inner-city travel, and many other problems. It can also help ameliorate accessibility in urban areas, justice between different groups of dwellers, safety, and so on [1, 2, 3]. In some parts of the world, motorized transport has undermined human health and been a burden on sustainability [4, 5], while some cities are developing more active modes by following paradigms like "growth to equity and sustainability" [6]. Using urban infrastructure is a right for all citizens regardless of their gender, wealth, age, health condition, and so on, which is called "the right to the city" [7], and the right to accessibility is clearly of indispensable importance for all urban dwellers. Challenges for excluded people in terms of public transport (PT) have been mitigated over time [8], and numerous experts have focused on gender needs in STP, which is undoubtedly an important topic; however, we know little about the needs of, and hindrances faced by, disadvantaged groups of people every day. In this paper, I have adopted four groups as disadvantaged people to assess their transportation needs in the context of Tehran, the capital city of Iran. These groups include disabled individuals (usually suffering from a mental or physical incapability) [9], children, who are not aware of their urban rights and might not be able to express their needs, the elderly (people aged over 65), and women, as citizens with socio-cultural burdens, especially in an Islamic country like Iran. This process would contribute to ameliorating sustainable and inclusive transport planning, in addition to providing a greater quality of life for all people.

## 2 Literature review

A variety of articles in the current literature have focused on policy, governance, or design for one of the excluded groups, such as the elderly or disabled people. [10] examined transport and disability in order to eliminate social exclusion.
Many articles have investigated gender-sensitive inclusion in STP [11, 12, 13]. Generally, considering all of the disadvantaged groups (children, women, the elderly, and disabled people) together is recognized as a gap in inclusive transport studies, and focusing on them must be undertaken.

### Sustainability and inclusiveness

There is no clear and simplified definition of inclusive planning [14]; however, STP emphasizes three main features: economic, social, and environmental aspects [15]. In other words, it allows justice between generations without any dispute [16], reduces greenhouse gases, air and noise pollution, and poverty, and preserves economic growth [17, 18, 19]. The social aspect of STP, however, insists on inclusive access to transportation services for all [20, 18]. Inclusive transportation focuses on vulnerable and disadvantaged urban dwellers with the purpose of achieving living standards [21]. For example, 1.4% of Iranians are disabled dwellers [22], who must be given the right to basic public services [23], and if ill-equipped decisions continue in these societies, there will soon be social heterogeneity and many other challenges [24]. The result of ignoring inclusiveness may be felt among inhabitants as a lack of land services, insecurity and tension, deregulation of public space, and traffic congestion [25].

#### 2.1.1 Disabled people

Tehran, the capital of Iran, a megacity with a population of 8,693,706, has been made for motorized vehicles, while a city must meet standards for the needs of all groups [26], ranging from cyclists and the elderly to disabled individuals, students, and people in poor economic conditions. Disadvantaged people are tackling various problems, ranging from a lack of accessibility to insecurity or safety obstacles. Unfortunately, they are ignored in many cases. For instance, the number of disabled people was not counted in the latest national census of Iran in 2016.

#### 2.1.2 Women

Women in Iran face certain limitations regarding the obligatory dress code known as "Hijab," which does not allow them to enjoy some simple daily physical activities such as cycling [27, 28, 29, 30]. Thus, focusing on minorities such as women, children, the elderly, and disabled people [31, 32, 33, 34] must not be ignored. Another cross-cutting issue for women is the fear of violence and crime in urban spaces; owing to the dangers presented to their lives, their mobility might unwittingly be reduced [35, 36, 37, 38, 39, 40, 41].

#### 2.1.3 The elderly

People over 60 will number around twelve million by 2030 [42], and there seems to be a growing correlation between age and disabilities [43]. Thus, inclusive planning that emphasizes population aging is necessary [44]. Elderly people usually have a fear of falling [45], a need for bathrooms during daily commutes, and a preference for slow modes of transport such as cycling [46], and for them mobility is more important than accessibility because it can increase the level of walkability and physical activity [47, 48].

#### 2.1.4 Children

A reduction in physical activity among children is occurring [49, 50], so there must be solutions, such as active modes of transport, to this dramatic concern. However, accessibility and mobility ought to have certain features for children, including safety, security, street connectivity, and special facilities for walking and cycling.

## 3 Case study and conceptual model

The Municipality of Tehran has divided the city into 22 administrative districts.
However, in this work, I have examined district number 8, which is presented in Fig. 1. Tehran's public transport (PT) system includes a variety of PT modes such as bus, BRT, and subway, although private modes of transport (taxis, fixed-route taxis, and ride-hailing taxis) have grown in popularity among Tehran dwellers [52]. A conceptual model was built according to the literature review and the context of the case study, as shown in Fig. 2.

## 4 Data collection and methodology

The method adopted for this study is a mixed research method in which qualitative and quantitative methods are used to contribute to the robustness of the results. According to the available data, the literature review, and the context of the case study, 6 factors were chosen (low carbon transport, acceptable accessibility, cost-effectiveness and affordability, promoting cycling and walkability, usability of PT for disadvantaged people, and PT diversity). Each factor contains one or more indicators and a collecting method, which are presented in Table 1. These indicators were assessed through 180 face-to-face questionnaires, 9 focus groups including disadvantaged dwellers, key informant interviews, map analysis by means of ArcGIS, and finally the available data in the portal of the Tehran municipality ([https://Tehran.ir](https://Tehran.ir)) and the comprehensive plan of Tehran.

## 5 Analysis and results

Based on Table 1, the analyses are presented in this section in order.

### Low carbon transport

Fossil fuel consumption correlates with carbon emissions and air pollution. The AQI levels of Tehran city and district 8, as a factor related to a low carbon transport system, are compared in Table 2. It is clear that the case study area is less polluted than the other districts on average.

Figure 1: The two maps on the top show Iran and Tehran city, from left to right, and the map below shows district 8 of Tehran.

### Acceptable accessibility

Three indicators, including the number of transportation stations in the area and the approximate distances between stations and basic urban service land uses, were chosen for this factor. The basic urban land uses for disadvantaged people are primary schools, shopping centers, parks, and public transportation stations. Disadvantaged people are capable of walking for 10 minutes without being exhausted. In this part of the study, I analyzed the distances between the mentioned land uses in the case study area by means of network analysis² in ArcGIS, which is shown in Fig. 3 to Fig. 6.

Footnote 2: Network analysis (finding coverage): in this type of network analysis, drive-time areas correspond to the distance that can be reached within a specific amount of time.

Accessibility to educational centers seems to be ideal in the central part of the district, while the eastern and some western parts of the district lack accessibility to these centers. In the case of parks, shopping centers, and public transportation platforms, the network covers roughly all of the case study area.

### Cost-effectiveness and affordability

Two questions were prepared on a 3-point Likert scale to estimate cost-effectiveness from the aspect of time saving, as well as the financial affordability of monthly PT costs. 180 people participated in the questionnaire, and the results indicate that the PT system largely wastes dwellers' time: more than half of the participants were not eager to use public transportation because of its poor time-cost-effectiveness. Conversely, most of them were satisfied with the monthly cost that they pay in public transportation fees.
The results of the questionnaire are available in Fig. 7.

### Promoting cycling and walkability

Four of the indicators are assessed through the focus groups. Nine focus groups were formed, consisting of elderly people suffering from one or more disabilities, women, and children. The participants pointed out problems in terms of cycling and walkability, and the interviews were extended to the use of a bicycle or walking for short daily trips. According to an interview with the mayor, there is no bicycle lane in the area. The results of the focus groups are available in Table 3.

### Usability of PT for disadvantaged people

The number of subway stations with elevators and tactile paving for the elderly and disabled groups is illustrated in Fig. 8. According to this map, only 3 stations are usable for people with blindness, and 5 stations are suitable for disabled users. The hindrances faced by disadvantaged groups, as revealed by the focus groups, are: a lack of elevators for level changes in streets, pedestrian bridges, stations, and so on; certain limitations on cycling for women; a lack of bicycle lanes in the area; safety issues and the fear of accidents with motorized vehicles while crossing the street when walking or cycling; and level changes on streets and at stations.

### PT diversity

A variety of public and private modes of transportation are available in this district, including 8 subway stations, 23 BRT stations, and more than 29 bus stops. The variety of public transportation platforms in the case study area is depicted in Fig. 9.

Figure 2: The conceptual model of the study.

Table 1 (excerpt; the remainder of the table did not survive extraction): The factors, indicators, and collecting methods of the study.

| No. | Factor | Indicator(s) | Collecting method |
| --- | --- | --- | --- |
| 1 | Low carbon transport | Air quality index (AQI) in the study area within a year compared to Tehran city | Available data |
| | | The amount of fossil fuel consumed by private modes of transport | Key informant interview; available data |
| 2 | Acceptable accessibility | The distance between basic urban services and public transportation platforms | Map analysis |

Figure 3: The accessibility and mobility to primary schools in the area within a 10-minute walk.

Figure 4: The accessibility and mobility to shopping centers in the area within a 10-minute walk.

Figure 5: The accessibility and mobility to parks in the area within a 10-minute walk.

Figure 6: The accessibility to public transport stations in the area within a 10-minute walk.

Figure 7: Results of the questionnaire.

Figure 8: Subway stations equipped for disabled and blind individuals, and subway stations inappropriate for them.

Figure 9: The variety of PT stations, including subway, BRT, and bus stations.

Table 4: Problems revealed in the study and the given recommendations.

| Problems | Recommendations |
| --- | --- |
| Accessibility problems in some parts of the case study | Provision of land use per capita |
| Cost-ineffectiveness of the PT system | Location allocation of PT stations in needy neighborhoods of the area |
| Inexistence of bicycle lanes | Design of a complete street, in a suitable location, and a bicycle lane |
| Safety problems and fear of car accidents | Reducing the width of the street and increasing sidewalk width |
| Apathy toward active modes of travel | Location allocation of a park-and-ride system |

## 6 Recommendations

Problems in the study area vary in terms of sustainable and inclusive transport.
Based on the results achieved in Section 5, a number of recommendations are given, ranging from the provision of land use per capita (for instance, for educational centers such as primary schools) and the location allocation of PT stations for neighborhoods that are transit deserts [53], to designing a complete street with dedicated lanes for active modes of transport in a suitable location, promoting active modes such as walking by increasing sidewalk widths, and the location allocation of a park-and-ride system.

## 7 Conclusions

In this paper, the sustainability and inclusiveness of the transport system in district 8 of Tehran were explored. A mixed-method research design, including a face-to-face questionnaire, 9 in-person focus groups, key informant interviews, available-data analysis, and map analysis, was applied. The results indicate that the transport system in the case study area is low carbon compared to Tehran city; however, it does not yet meet the highest air quality standards. In terms of the inclusiveness of the transport system, it was found that disadvantaged people face hindrances while using public transport modes. These burdens range from a lack of elevators and ramps at level changes and a lack of tactile paving for blind individuals, to the fear of falling down and of car accidents while walking, and apathy toward using active modes of transport because of safety problems and specific social limitations on cycling for women. In addition, dwellers in this district were not satisfied with the cost-effectiveness of PT; conversely, they were pleased with its affordability. The results also show that active transport modes such as cycling and walking are not popular among citizens due to the lack of infrastructure such as dedicated bicycle lanes. Thus, the transport system in the study area needs to be ameliorated, which can be done through various actions such as providing basic land use per capita, allocating PT stations to transit deserts, and the location allocation and design of a park-and-ride system. To conduct this study in another context, it should be noted that although the needs of excluded people might be broadly similar, the sustainability of the transport system may differ from city to city. Hence, this study may not directly generalize to another context; however, it offers valuable results that can ameliorate the quality of life for all people, enhance air quality and social equity, and finally lead to a low carbon city.

## 8 Declaration of competing interest

The author declares that she has no competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2307.16787
The Ethics of AI Value Chains
Researchers, practitioners, and policymakers with an interest in AI ethics need more integrative approaches for studying and intervening in AI systems across many contexts and scales of activity. This paper presents AI value chains as an integrative concept that satisfies that need. To more clearly theorize AI value chains and conceptually distinguish them from supply chains, we review theories of value chains and AI value chains from the strategic management, service science, economic geography, industry, government, and applied research literature. We then conduct an integrative review of a sample of 67 sources that cover the ethical concerns implicated in AI value chains. Building upon the findings of our integrative review, we recommend three future directions that researchers, practitioners, and policymakers can take to advance more ethical practices across AI value chains. We urge AI ethics researchers and practitioners to move toward value chain perspectives that situate actors in context, account for the many types of resources involved in co-creating AI systems, and integrate a wider range of ethical concerns across contexts and scales.
Blair Attard-Frost, David Gray Widder
2023-07-31T15:55:30Z
http://arxiv.org/abs/2307.16787v3
## The Ethics of AI Value Chains

## Abstract

Recent criticisms of AI ethics principles and practices have indicated a need for new approaches to AI ethics that can account for and intervene in the design, development, use, and governance of AI systems across multiple actors, contexts, and scales of activity. This paper positions AI value chains as an integrative concept that satisfies those needs, enabling AI ethics researchers, practitioners, and policymakers to take a more comprehensive view of the ethical and practical implications of AI systems. We review and synthesize theoretical perspectives on value chains from the literature on strategic management, service science, and economic geography. We then review perspectives on AI value chains from the academic, industry, and policy literature. We connect an inventory of ethical concerns in AI to the actors and resourcing activities involved in AI value chains to demonstrate that approaching AI ethics issues as value chain issues can enable more comprehensive and integrative research and governance practices. We illustrate this by suggesting five future directions for researchers, practitioners, and policymakers to investigate and intervene in the ethical concerns associated with AI value chains.

## Keywords

Artificial intelligence, AI ethics, value chains, supply chains, governance

## 1 Introduction

Dominant AI ethics principles and practices are unable to prevent many societal and environmental harms. Researchers and practitioners have noted that AI ethics practices predominantly focus on narrow operational interventions such as algorithmic impact assessments, checklists, and audits at the expense of broader interventions in the societal and environmental contexts of AI systems (Greene, Hoffman, & Stark, 2019; Gupta, 2021; Morley et al., 2021; Resseguier & Rodrigues, 2020; Stark, Greene, & Hoffman, 2021). The principles underlying dominant AI ethics practices have also come under scrutiny: principles such as fairness, accountability, transparency, safety, and explainability often neglect to consider broader socio-material, political-economic, organizational, and ecological contexts in which AI systems are situated (Attard-Frost, De los Rios, & Walters, 2022; Hagendorff, 2020; Keyes, Hutson, & Durbin, 2019; Lauer, 2021; Xiang & Raji, 2019). In response, some have proposed alternative principles to guide practices, such as care (Gray & Witt, 2021), sustainability (van Wynsberghe, 2021), and justice (Le Bui & Noble, 2020).

In parallel with those criticisms, many have called to re-center AI ethics around new conceptual focal points. Communitarian calls suggest re-centering AI ethics around social contracts (Hausfermann & Lutge, 2021; Rahwan, 2017), community values (Ricaurte, 2022; Evans-Greenwood et al., 2020), and participatory design practices (Berditchevskaia, Maliaraki, & Peach, 2021; Birhane et al., 2022; Bondi et al., 2021). Others call to interrogate professional conduct and business practices (Attard-Frost, De los Rios, & Walters, 2022; Cox, 2022; Greene, Hoffman, & Stark, 2019), labor conditions (Miceli & Posada, 2022; Miceli, Posada, & Yang, 2022), and organizational design practices (Mantymaki et al., 2022; Schneider et al., 2022).
Others propose relational approaches that focus on material, infrastructural, and ecological relations (Bratton, 2021; Crawford, 2021; Crawford & Joler, 2018; Halpern, 2021), institutional relations (Birhane, 2021), the modularized relations in AI value chains (Widder & Nafus, 2022, 2023), and AI-mediated careerwork (Yew, 2021). Many Indigenous approaches to AI ethics also advance relational perspectives, including suggestions to decenter human relations from AI ethics and instead center relational principles and practices based on Indigenous protocols, such as kinship, sustainability, decoloniality, locality, and stewardship (Abdilla et al., 2021; Irwin & White, 2019; Lewis et al., 2018; Lewis et al., 2020; Ricaurte, 2022). Perspectives on AI ethics based on the value system of Ubuntu (practiced by some living in and originating from Africa) similarly call to center AI ethics around relational principles such as communitarianism, justice, solidarity, reciprocity, and reconciliation (Friedman, 2022; Gwagwa, Kazim, & Hilliard, 2022; Mhlambi, 2020).

In this paper, we respond to criticisms of, and alternative perspectives on, AI ethics by positioning _AI value chains_ as an integrative concept. As situated structures that enable inter-actor relations across multiple contexts and scales of action, AI value chains can provide an integrated account of phenomena observed in communitarian, organizational, and relational critiques of AI ethics. Outside of AI ethics, value chain theories and analysis methods have been applied to critically account for a wide range of resources, relationships, activities, and ethical concerns implicated in national, subnational, and transnational political economies and ecologies (see for example Bair et al., 2021; Campling & Havice, 2019; Palpacuer, 2019; Suwandi, 2019; Werner & Bair, 2019). In addition, many value chain theories observe a more context-sensitive and phenomenologically grounded understanding of "value" than theories from mainstream economics or supply chain management, focusing more directly on the situated phenomena through which diverse actors perceive, experience, and create value differently from one another (e.g., Akaka & Parry, 2019; Berthod, Helfen, & Sydow, 2018; Chandler & Vargo, 2011; Yamauchi, 2019). Therefore, we argue that through careful analysis of the value chains in which AI systems are situated, AI ethics research and practice can more comprehensively account for, integrate, and intervene in the wide range of ethical concerns implicated in the development, use, and governance of AI systems.

In the following section, we synthesize literature on value chains from three fields of study: (1) strategic management, (2) service science, engineering, management, and design (SSMED), and (3) economic geography. In Section 3, we then review the emerging body of industry, government, and applied research literature on the value chains of AI systems. In Section 4, we identify some of the major ethical concerns, actors, and activities implicated in AI value chains, and we connect our findings to relevant literature on the political economy, ecology, ethics, and governance of AI systems. We conclude in Section 5 by proposing future directions for implementing value chain ethics in the study, development, use, and governance of AI systems.
## 2 Value Chains

Value chains were first introduced by Porter's (1985) "value chain model," a business management framework that specifies five "primary activities" (inbound logistics, outbound logistics, operations, marketing & sales, and service) and four "support activities" (firm infrastructure, human resource management, technology development, and procurement) which iteratively transform resource inputs into outputs, adding more value for the firm as resources move further down the value chain through each activity. Later perspectives in the business management and economic geography literature apply the value chain concept to contexts beyond Porter's predefined "primary" and "support" activities, accounting for the role of value chains in more complex organizational systems and economic networks such as _global value chains_ (Gereffi, Humphrey, & Sturgeon, 2006; Humphrey & Schmitz, 2000; Kano, Tsang, & Yeung, 2020; Gereffi, 2018) and _global production networks_ (Coe, Dicken, & Hess, 2008; Coe & Yeung, 2019; Henderson et al., 2001). More recently, researchers have further extended those approaches to more critically account for power, labor, spatial, temporal, and organizational relations throughout the value chains and networks of digital platforms (e.g., Butollo et al., 2022; Butollo & Schneidemesser, 2022; Howson et al., 2022a, 2022b).

Alongside that literature, many researchers in the field of service science, management, engineering, and design (SSMED) understand value chains as being situated and networked within diverse social, political, economic, environmental, and phenomenological contexts. In the SSMED literature, _value networks_ are typically theorized as interactive structures that enable value to be co-created between multiple interdependent actors with varying value systems across multiple spaces, times, scales, and contexts of activity (Edvardsson, Skalen, & Tronvoll, 2015; Frost, Cheng, & Lyons, 2019; Frost & Lyons, 2017; Lusch, Vargo, & Tanniru, 2010; Lyons & Tracy, 2013; Siltaloppi & Vargo, 2014; Vargo & Lusch, 2016). Foundational to value network ontologies are _resourcing activities_: the activities through which multiple actors assemble and exchange their resources with the goal of co-creating value, a process often described in the SSMED literature as "resource integration" (Frost, Cheng, & Lyons, 2019). While some regard value _network_ ontologies as a conceptually stronger successor to value _chain_ ontologies (Basole, 2019; Buhman et al., 2005; Dyer, 2000; Nenonen & Storbacka, 2010; Normann & Ramirez, 1993), others see them as highly compatible, recognizing value chains to be important network sub-structures that enable multiple actors to situate, pattern, and integrate a subset of their resourcing activities spatially, temporally, as well as vertically (within a particular industry) and horizontally (across multiple industries) (Chen & Chiu, 2015; Lim et al., 2018; Polese et al., 2009; Wirtz & Ehret, 2019). For example, Alter's "service value chain framework" (2007; 2008) assumes that value chains enable resourcing activities to be "continuously or repeatedly" (2008, p. 76) performed within a pre-negotiated service delivery workflow. Similarly, the "data-value chain" model of Lim et al. (2018) characterizes data as a resource from which multiple networked actors may gradually co-create value through a temporal chain of data collection, data analysis, and information use activities that are situated within many service contexts.
Building upon those perspectives, we suggest that value chains enable resourcing activities to occur with three main structural properties: (1) _Situatedness_: the actors and resourcing activities that occur within value chains are situated within specific contexts. (2) _Pattern_: the resourcing activities that occur within value chains are spatially, temporally, and organizationally patterned, and thus capable of recurring with some degree of regularity. (3) _Value relations_: the resourcing activities are perceived and evaluated differently by multiple actors with aligned or misaligned values within the value chain and across a broader network of value chains.

Importantly, SSMED perspectives tend to understand value chains as having different ontological properties than _supply chains_ are typically understood to have in mainstream economics and supply chain management. According to proponents of SSMED, supply chains are organized according to a "goods-dominant logic," characterized by Vargo and Lusch (2006) as an outmoded (though still widely observed) logic of economic organization in which "tangible output and discrete transactions were central" (p. 4). In contrast, value chains are organized according to a "service-dominant logic" in which "intangibility, exchange processes, and relationships are central" (p. 4). This ontological distinction between value chains and supply chains is reflected in many influential theories and practices of value chain management (e.g., Feller, Shunk, & Callarman, 2006; Gurria, 2012; Normann & Ramirez, 1993; Porter, 1985; Rayport & Sviokla, 1995; United Nations Industrial Development Organization, 2015). More recently, the ontological distinction has been re-affirmed by Widder and Nafus (2022, 2023), who call for AI development practices to move away from the inter-actor distance and task modularity inherent to supply chains and toward the co-creative relationality of value chains.

Crucially, SSMED theories of _value_ differ greatly from those favored by mainstream economics. The "value" embedded in value chains is not positivistic, quantifiable, priceable, or objectively measurable as generally theorized in mainstream economics (see criticisms of mainstream economics from Kranjc, 2021; McCloskey, 1983; 1998; Spash, 2012). In an SSMED perspective, there is not some innate, objective value in economic structures that can be identified by assuming a god's eye view of the global value network. In that regard, SSMED epistemologies of value typically align with Haraway's (1988) theory of knowledge as partial and socially situated: there is no epistemic "god-trick" that can enable knowledge of the value of a particular activity from outside of the specific situations and partial perspectives from which it is perceived. Instead of the characteristic god-tricks of mainstream economics, SSMED perspectives understand value as socially situated and co-created preferences for action. Preferences for action intersubjectively emerge when the different values, experiences, and abilities of multiple actors interact with one another through the structure of a value network (Akaka & Parry, 2019; Berthod, Helfen, & Sydow, 2018; Chandler & Vargo, 2011; Edvardsson, Tronvoll, & Gruber, 2011; Vargo & Lusch, 2008; Yamauchi, 2019). An actor's particular social situation may therefore impact their perception of the value of a given resourcing activity as being beneficial or harmful to their interests or to the interests of other actors.
## 3 AI Value Chains A growing body of recent industry, government, and applied research literature examines _AI value chains_: the value chains involved in the development, use, and governance of AI systems. Much of the research literature on AI value chains comes from a strategic management or industrial engineering perspective, examining the role of AI systems in adding value or risk to pre-existing industrial value chains (e.g., Agrawal, Gans, & Goldfarb, 2022; Chan-Olmsted, 2019; Eling, Nuessle, & Guenther et al., 2022; Iansiti & Lakhani, 2020; Liu, Chen, & Chen, 2022; Naude, 2022; Oosthuizen et al., 2020; Staubli, 2022). However, some researchers study the unique structural, functional, ethical, and policy implications of the value chains of the AI systems themselves. Engler and Renda (2022) note that the European Union's _AI Act_ sets "obligations on all value chain participants" (p. 2) involved in the development and use of AI systems. The value chain actors and resourcing activities referenced within the AI Act (2021) include the developers and users of AI systems as well as "relevant third parties, notably the ones involved in the sale and the supply of software, software tools and components, pre-trained models and data, or providers of network services" (p. 32). However, the Act does not define "AI value chain", nor does it precisely specify the obligations that participation in such a value chain might involve. In response, Engler and Renda broadly define the AI value chain as "the organisational process through which an individual AI system is developed and then put into use" (p. 2). They then propose a typology of AI value chains, common resourcing activities involved in AI value chains, and recommendations for EU policymakers seeking to set more specific obligations for AI value chain participants. Other policy researchers have examined how accountability is distributed during the resourcing of AI value chains, and attendant ethical and policy implications (Brown, 2023; Cobbe, Veale, & Singh, 2023; Kak & West, 2023; Kuspert, Moes, & Dunlop, 2023). These studies primarily focus on the _computational resourcing activities_ involved in AI systems (e.g., the preparation and use of training and testing data, the purchasing and use of compute, the development and use of models, algorithms, code, and APIs) and propose practicable policy interventions that target those resourcing activities. Widder and Nafus (2022, 2023) combine theories from computer science and feminist science and technology studies to take a more critical approach to the ontologies and ethics of AI value chains. In examining the practices of 27 AI engineers, they describe AI value chains as "heterogenous, cross-cutting, not always linear social interactions and relations that occupy multiple social locations and cultural logics at the same time" (2022, p. 3) and advocate for "value chain thinking" (p. 1) in AI ethics practices as a means to widen the scope of developer accountability. Widder and Nafus emphasize that AI value chains are situated across social, political, and economic contexts with varying patterns of resource distribution and diverse perceptions of developer responsibility. Alongside the research literature, perspectives on AI value chains from industry and government represent another emerging body of literature. 
Similarly to the research literature, industry perspectives on AI value chains are often most interested in how AI systems might add value to pre-existing industrial value chains by increasing efficiency, effectiveness, or productivity (e.g., Appen, 2021; Fife, 2022; Harlin et al., 2023; Shaw & Arkan, 2019; TheSequence, 2022). When industry perspectives do discuss the ethics of AI value chains, they do so with ontological assumptions that are similar to those of Engler and Renda (2022): claims about "responsible AI" or "ethical" AI practices center on the _computational resources_ required to develop and use AI systems (e.g., datasets, models, compute, APIs) rather than the larger social, political, economic, and ecological contexts in which the computational resources and resourcing activities are situated. Some government perspectives take a broader, more context-sensitive view of AI value chains than industry in order to intervene in a larger set of societal and environmental impacts than industry is typically concerned with. In addition to the value chain participants and computational resourcing activities referenced in the EU AI Act (discussed above), other EU initiatives recognize that the impacts of AI value chains extend deep into existing digital and industrial infrastructures (European Commission, 2023) and the environment, with the European Commission (2018) promising support for "more energy-efficient technologies and infrastructure" and "making the AI value chain greener" (p. 9). In addition to the EU, the founding proposal for the OECD's AI Policy Observatory (2019) and some AI governance tools that they have catalogued (e.g., Ferrandis, 2022; OECD, 2023) recognize that AI value chains extend into and pose risks within a wide range of social, political, economic, technological, material, and ecological contexts. Although the EU and OECD initiatives do not explicitly acknowledge that AI systems are multicontextual phenomena, these initiatives suggest that AI value chains are entwined with digital, data, and industrial resources and value chains. The interpretive flexibility of AI value chains may present analytical obstacles to researchers, practitioners, and policymakers, but is also an opportunity to fulfill calls to expand the scope and practices of AI ethics (e.g., Attard-Frost, De los Rios, & Walters, 2022; Hagendorff, 2020; Lauer, 2021). AI value chains share much in common with recent approaches that have been used to study the resourcing activities and ethics of digital platforms. For example, Howson et al. (2022a; 2022b) develop "digital value networks" as a framework to critically analyze "the impact of digital technologies on distributive outcomes in value chains" (2022a, p. 4), and to critique the power asymmetries, exploitative labor relations, and extractive colonial practices enabled by transnational cloudwork platforms. Other recent studies of ethical concerns in AI value chains do not directly apply value chain or value network theories or analysis methods, but nonetheless reference a boundary-spanning "supply chain" or "production chain" through which AI systems are resourced or used to create resources (e.g., Crawford, 2021; Dyer-Witheford, Kjosen, & Steinhoff, 2019; Miceli & Posada, 2022). 
## 4 Ethical Implications of AI Value Chains

### Overview of Ethical Concerns, Value Chain Actors, & Resourcing Activities

Value chains can serve as theoretical and analytical bridges between many actors, activities, and contexts, enabling researchers and practitioners to better account for the wide range of ethical concerns implicated in the development, use, and governance of AI systems. In this section, we inventory ethical concerns implicated in AI value chains and their related actors and resourcing activities. There are many inventories and typologies of AI-related harms and ethical concerns (e.g., AIAAIC, 2023; AI Incident Database, 2023; Shelby et al., 2023; Stahl et al., 2022; Stahl, 2021), but Stahl et al. (2022) is especially notable for its breadth and depth of coverage. Their typology includes 4 high-level categories of ethical issues and is further subdivided into 6 types of potential AI benefits, 8 types and 30 sub-types of potential AI harms, and 5 "metaphysical issues." Those categories and sub-categories of ethical concerns enabled us to identify several value chain actors, resourcing activities, and interventions related to each of the concerns (see Figure 1). Other inventories overlap with the concerns identified by Stahl et al.: for example, Shelby et al. (2023) focus on a more granular set of harms, while the AI Incident Database (2022) catalogues and categorizes specific, real-world instances in which AI systems caused harm. We use the wide-ranging inventory of Stahl et al. to illustrate the generalizability of a value chain approach across a broad set of benefits and harms, as well as the flexibility of such an approach to also account for more specific, situated issues.

Figure 1: The four high-level ethical issues and lower-level ethical issues identified by Stahl et al. (2022). We map the implications of those ethical concerns onto a series of related value chain actors, resourcing activities, and interventions.

While policy and industry perspectives on AI value chains primarily focus on what Stahl et al. (2022) describe as "issues arising from machine learning" (e.g., Appen, 2021; Engler and Renda, 2022; Harlin et al., 2023), a larger breadth of ethical concerns, actors, and resourcing activities can be accounted for through a value chain approach. The following sub-sections demonstrate how the ontological and analytical affordances of AI value chains can account for and integrate the ethical concerns inventoried by Stahl et al. We do not aim to provide an exhaustive account of all of the actors, resourcing activities, and ethical concerns that may potentially be involved in AI value chains, but rather, to demonstrate the applicability of a value chain approach to a wide range of ethical concerns. We then advocate for five interventions in AI value chains research and governance practices in Section 5.

### AI Value Chains & Benefits of AI

Stahl et al. (2022) acknowledge that although AI ethics usually foregrounds harms of AI systems, AI systems may also present benefits that should be accounted for.
The potential benefits include: insights or efficiencies from automating the processing of large volumes of data to make predictions, decisions, or generate synthetic data outputs; improvements in economic output and reductions of environmental damage as a result of more effective and efficient production processes; contribution to United Nations Sustainable Development Goals (SDGs) as well as other international and national pursuits of socially beneficial AI adoption.

| Lower-level issues identified by Stahl et al. | Examples of related value chain actors | Examples of related resourcing activities |
| --- | --- | --- |
| Novel insights from data, efficiency improvement, economic benefits, environmental benefits, contribution to sustainable development goals, AI for Good | Industry; governments; intergovernmental bodies; civil society | Analytics platform development & use; adoption of industrial AI applications; job creation & hiring; IP creation & licensing; energy consumption optimization; policy development & evaluation |

Table 1: Lower-level ethical issues associated with "Benefits of AI" as identified by Stahl et al. (2022). We map the issues onto examples of related value chain actors and resourcing activities.

However, accounting for these potential benefits within the contexts of AI value chains enables us to identify many concomitant harms: novel insights or gains to efficiency in some parts of an AI value chain may raise new risks in others (Cobbe, Veale, & Singh, 2023; Gansky & McDonald, 2022; Widder & Nafus, 2023); contributions to SDGs or "AI for good" initiatives may only be successful relative to a narrow set of measures (Aula & Bowles, 2023; Madianou, 2021; Moore, 2019); economic prosperity or environmental benefits may be inequitably distributed across different groups, communities, or geographies. While AI systems may produce beneficial outcomes for some value chain actors, pre-existing structural injustices in the social, political, and economic contexts of AI systems and their value chains warrant an assumption that the same systems will also produce harmful outcomes for other actors, particularly those who belong to historically marginalized communities (Birhane, 2021; Hind & Seitz, 2022).

Stahl et al. (2022) outline ethical concerns related to the use of machine learning (ML) technologies and methods in AI systems. These concerns include issues related to (1) _control of data_, (2) _reliability_, and (3) _lack of transparency_, which we associate with specific actors and resourcing activities in AI value chains.
_Control of data_ in AI value chains has been widely studied in resourcing activities such as: the development and enforcement of data privacy laws and policies for machine learning technologies and methods (e.g., European Parliament, 2020; MacKinnon & King, 2022; Veale, Binns, & Edwards, 2018); informed consent from data subjects in data collection and use activities; the sale, purchase, brokerage, and ownership of training and testing data (Crain, 2018; Lamdan, 2022); gathering and using knowledge needed to identify or exploit vulnerabilities in machine learning systems through methods such as inversion attacks or injection attacks (Fredrikson, Jha, & Ristenpart, 2016; Greshake et al., 2023; Wang et al., 2022); corporate capture of funding and other resources, including data, needed to conduct machine learning research (Whittaker, 2021).

Ethical concerns related to the control of data in AI value chains can be observed in many real-world cases. For example, the company Clearview AI scraped billions of images from platforms like Facebook and YouTube to develop facial recognition and other surveillance applications of machine learning technologies that have been used by thousands of law enforcement agencies globally (Hatmaker, 2022; Office of the Privacy Commissioner of Canada, 2021; Perrigo, 2022). Clearview's collection and use of this scraped data, often without consent from the data subjects, raises ethical concerns regarding consent, ownership, financing, public procurement, and regulation of facial recognition applications in policing, as well as of the data sourcing and other resourcing activities further upstream from Clearview. Similar ethical concerns are implicated in the value chains of generative AI systems such as ChatGPT, Stable Diffusion, and Midjourney, which are trained on large volumes of data scraped from the open web, typically without consent from the creators or copyright holders (Create Don't Scrape, 2023). Consent and ownership of data throughout the value chains of generative AI applications (including developer and user data inputs as well as user-generated data outputs) are of particular importance to current legal and regulatory activities aiming to more justly create, distribute, and/or redistribute legal, financial, data, and other resources between actors in generative AI value chains (De Vynck, 2023; GitHub Copilot Litigation, 2023; Stable Diffusion Litigation, 2023; Vincent, 2023).

Activities related to the _reliability_ of machine learning technologies and methods have also been widely studied. These include: accuracy of model predictions, recommendations, decisions, data outputs, and other informational resources created through the development and use of machine learning models (Angwin et al., 2016; Bender et al., 2021; Grote & Berens, 2022; Mokander & Axente, 2023; Rankin et al., 2020); the development and implementation of ethical quality assurance practices for model training, testing, and management (Burr & Leslie, 2023; Eitel-Porter, 2021); use of cloudwork platforms and outsourcing practices in data work and model work to improve data quality and accuracy (Irani, 2015; Miceli & Posada, 2022; Miceli, Posada, & Yang, 2022; Perrigo, 2023).
Ethical concerns related to lack of transparency in machine learning technologies involved in AI value chains include: incentivization and disclosure of funding sources for AI development and AI ethics research (Ahmed, Wahed, & Thompson, 2023; Ochigame, 2019; Whittaker, 2021); documentation, disclosure, and explanation of machine learning and automated decision-making processes and outcomes (Miceli et al., 2022; Mitchell et al., 2019; Raji et al., 2020); inclusion or exclusion of stakeholder knowledges in model design, development, deployment, and application, particularly the exclusion of vulnerable data subjects, impacted groups, and marginalized communities (Birhane et al., 2022a, 2022b; Widder & Nafus, 2023); distribution and enforcement of accountability and liability for harms amongst value chain actors (Bartneck et al., 2020; Brown, 2023; Cobbe, Veale, & Singh, 2023; European Commission, 2022; Zech, 2021); possibilities for collective organizing and protest against discriminatory and harmful AI practices (e.g., ACLU, 2023; Broderick, 2023).

Table: Lower-level ethical issues associated with "living in a digital world" as identified by Stahl et al. (2022), mapped onto examples of related value chain actors and resourcing activities.

Stahl et al. (2022) outline many ethical concerns related to issues that arise in AI systems as a consequence of "living in a digital world," which we associate with specific actors and resourcing activities in AI value chains. These concerns include: (1) _economic issues_, (2) _justice_, (3) _human freedoms_, (4) _broader societal issues_, and (5) _unknown issues_. Economic issues are especially significant in AI value chains, and issues of particular note can be found in: the use of automation and biometrics in hiring, contracting, dismissal, and surveilling workers (Bales & Stone, 2020; Hickok & Maslej, 2023); labor exploitation and distributions of wealth, capital, and other financial resources, particularly in transnational and inter-class contexts (Dyer-Witheford, Kjosen, & Steinhoff, 2019; Miceli & Posada, 2022; Miceli, Posada, & Yang, 2022); open-sourcing of and access to AI-related data, code, and other software resources (Langenkamp & Yue, 2022; Masiello & Slater, 2023); distributions of data, computational, technological, and financial resources obtained through the development and/or use of AI systems, as well as distributions of political and economic power that emerge from those resource distributions (Dyer-Witheford, Kjosen, & Steinhoff, 2019; Pasquale, 2020).
Examples of specific cases involving some of these economic issues include: OpenAI's outsourcing of data labeling to workers employed by Sama AI in Kenya, many of whom were psychologically harmed and undercompensated during their employment (Perrigo, 2023); consolidation of models and datasets in an increasingly small group of companies (Ahmed, Wahed, & Thompson, 2023); Amazon's development, use, and subsequent disuse of a hiring automation tool that discriminated against women (Dastin, 2018); striking Screen Actors Guild and Writers Guild of America workers demanding their employers refrain from using their likenesses or union-protected creative materials in training datasets and from introducing generative AI applications into production processes (Broderick, 2023; Webster, 2023).

The actors and resourcing activities we associate with _justice_ in AI value chains include: the public and private funding, procurement, data preparation, design, and development of automated decision-making systems with impacts on justice, as well as subsequent impacts of automated decisions on justice and public service outcomes (Angwin et al., 2016; Eubanks, 2018; Gans-Combe, 2022; Mulligan & Bamberger, 2019); the inclusion or exclusion of knowledge and perspectives from vulnerable, marginalized, and underrepresented groups in AI education, design, development, and governance processes, particularly those who have been historically marginalized due to their race and/or gender (Birhane et al., 2022; West, Whittaker, & Crawford, 2019); redistribution of resources required to develop and use AI systems, as well as the just or unjust distributions of value co-created throughout AI system lifecycles; macro-scale social, political, and economic outcomes of widespread AI adoption (Dyer-Witheford, Kjosen, & Steinhoff, 2019; Pasquale, 2020; Solaiman et al., 2023).

The actors and resourcing activities that we associate with _human freedoms_ in AI value chains include: harmful outcomes of exploitative labor practices as well as algorithmic discrimination experienced by vulnerable groups and individuals, resulting in further loss of access to resources needed to pursue social, political, and economic opportunities (Angwin et al., 2016; Eubanks, 2018; Miceli & Posada, 2022); restrictive ownership and use of government and private sector information resulting in disproportionate levels of access to and benefit from a variety of data resources, information resources, and computational resources by researchers, companies, governments, and civil society (Ahmed, Wahed, & Thompson, 2023; Whittaker, 2021).

What Stahl et al. refer to as _broader societal issues_ is a category of ethical concerns containing a variety of large-scale impacts on potentials for physical conflict, environmental degradation, and erosion of democratic institutions.
The actors and resourcing activities that we associate with these broader societal issues include: military and police procurement and use of AI applications and contracting relationships (Hoijtink & Hardeveld, 2022; Mahoney, 2020; Mulligan & Bamberger, 2019; Taddeo et al., 2021); energy and water usage of AI models and infrastructures resulting in carbon emissions, depletion of freshwater reserves, and other global and local environmental harms (GPAI, 2021; Li et al., 2023; Luccioni & Hernandez-Garcia, 2023); mineral extraction and other harmful mining, manufacturing, transportation, and assembly processes involved in the co-creation of the material resources needed to develop and use AI systems, as well as the disposal and recycling of environmentally harmful e-waste at the end of AI hardware lifecycles (Crawford, 2021; Crawford & Joler, 2018); the creation and reinforcement of epistemic and social filter bubbles based on algorithmic profiling and manipulation of social media users (Kronke, 2019; Woolley, 2018).

What Stahl et al. refer to as _unknown issues_ is a category of ethical concerns containing a variety of complex harms whose potential consequences are difficult to predict. The actors and resourcing activities that we associate with these unknown issues include: unforeseen misuses and abuses of personal data, digital identities, and mis/disinformation in the development and/or use of AI applications by malicious actors (Brundage et al., 2018); implementation and enforcement of excessively strict or excessively permissive AI regulations (Ada Lovelace Institute, 2021; Smuha, 2021); excessive funding of AI research that prioritizes finding solutions to the wrong problems (Tiku, 2023), such as the "metaphysical issues" we discuss in greater detail in the following sub-section.

### AI Value Chains & Metaphysical Issues

Stahl et al. also describe several "metaphysical issues" that pertain to speculative ethical concerns such as machine consciousness, artificial moral agents, artificial "super-intelligence", and changes to "human nature" enabled by new AI technologies. The actors and resourcing activities we associate with these metaphysical issues include: data and knowledge being assembled to develop hypothetical conscious machines, artificial moral agents, and superintelligent agents; the consolidation of resources under the control of those hypothetical agents; uneven distribution of future technological capabilities between different groups of human actors, resulting in divergent evolutionary trajectories.

These "metaphysical issues" are speculative, and the AI value chains and ethical concerns implicated within these hypothetical activities are futurological extrapolations of comparable ethical concerns arising from present-day, real-world distributions of benefit and harm. For example, concerns related to the distribution of benefit/harm through the development and use of a speculative "autonomous" _artificial moral agent_ are comparable to present-day concerns related to _human moral agents_ distributing benefit/harm through the development and use of automated systems. Similarly, issues of resource distribution, resource consolidation, and power asymmetry arising from the development of speculative _superintelligent agents_ are comparable to issues of resource distribution, resource consolidation, and power asymmetry that exist between present-day, real-world _human agents_.
Some researchers convincingly argue for disregarding these speculative ethical issues in AI ethics and governance practices, and instead centering analysis on real, present harms (e.g., Gebru & Torres, 2023; Torres, 2023). A value chain approach builds upon such arguments by revealing that the ethical problems underlying these speculative "metaphysical issues" are futurological extensions of ethical problems that can already be observed in real-world, present-day AI value chains. Therefore, greater study can and should be given to the empirical realities of present-day AI value chains. We also add that the analytical affordances of AI value chains (such as actors, resources, and resourcing activities) are flexible enough to account for the potential benefits and harms of AI systems across multiple spatial, temporal, and organizational scales, including those benefits and harms that exist only in speculative futures that do not merit significant study.

## 5 Directions for Future Research & Governance

Future AI research and practices of AI governance ought to more comprehensively account for, integrate, and intervene in the broader range of ethical concerns, value chain actors, and resourcing activities that we outlined in the previous section. An integrative approach to accounting for and intervening in the ethics of AI value chains requires intervening in both _proximal concerns_ and _distal concerns_. _Proximal_ concerns are ethical concerns related to the resourcing activities most closely involved in the technical design, development, deployment, and use of an AI system (e.g., ethical concerns associated with the collection and preparation of training and testing data, with the development and deployment of models, and with the operation and monitoring of the system). _Distal_ concerns are ethical concerns that are further upstream or downstream in the value chains involved in the relatively proximal resourcing activities related to technical design and operation, or that are located within the system's broader socio-technical, organizational, institutional, political-economic, and/or ecological contexts. Attending to both proximal and distal concerns in AI value chains would enable more integrated practices of research and governance that span a wide range of spaces, times, contexts, and scales of activity.

There are two key opportunities for researchers to further investigate and enact an approach to AI ethics grounded in value chain theories and methods:

(1) _Conduct more empirical and action research_ into the specific ethical concerns, value chain actors, and resourcing activities we outline in Section 4 (e.g., studies of the impacts of generative AI development and use on artists and workers, or studies of the impacts of outsourcing practices on marginalized workers in AI value chains). By collecting and analyzing more quantitative and qualitative data pertaining to a variety of real-world AI value chains and their related actors and activities, researchers can provide a rich evidence base upon which other researchers, practitioners, and policymakers can build further research and governance. Additionally, by involving value chain actors directly in research activities, identifying their concerns and needs, and developing and evaluating interventions that are designed to satisfy their needs, researchers can provide detailed insights on stakeholder perspectives, best practices, and policy gaps and policy options.
(2) _Develop, apply, and evaluate the effectiveness of methods_ for systematically modeling AI value chains, identifying proximal and distal concerns in those value chains, and planning and enacting interventions in those value chains. Many frameworks for systematically modeling and analyzing value chains and value networks can be applied to micro- and macro-scale studies of AI value chains, such as the service system analysis framework of Frost, Cheng, and Lyons (2019), the data value chain framework of Lim et al. (2018), as well as the framework used by Howson et al. (2022a) to theorize and analyze digital value networks. These and similar frameworks and methods can help to theoretically and methodologically ground future research, practice, and policy interventions in AI value chains.

In addition to opportunities for researchers, there are two key opportunities for policymakers and other practitioners of AI governance to enact governance practices based on value chain ethics:

(1) _Design and implement ethical sourcing practices_ across all of the value chains involved in the resourcing of AI systems. Ethical sourcing practices are intended to support "managing all processes of supplying the firm with required materials and services from a set of suppliers in an ethical and socially responsible manner" (Kim, Colicchia, & Menachof, 2018, p. 1033). In the context of AI practices, ethical sourcing requires all actors involved in the design, development, and use of AI systems to account for both the proximal and distal impacts of their systems on individuals, vulnerable and marginalized groups, and the environment (Widder & Wong, 2023). Several frameworks for guiding ethical sourcing practices related to various ethical concerns throughout AI value chains have already been developed, such as frameworks for ethical data and model documentation, annotation, and auditing (e.g., Mitchell et al., 2019; Miceli et al., 2022; Raji et al., 2020), Fairwork's principles and practices for preventing harms to workers in AI value chains (Fairwork, 2023; GPAI, 2022), and the Global Partnership on AI's principles and practices for mitigating the harmful environmental impacts of AI systems (GPAI, 2021). Governance instruments such as industry standards, certification programs, procurement policies, and codes of conduct should also be used to support the implementation and formalization of ethical sourcing practices, and to establish shared requirements for ethical sourcing and procurement of AI systems across jurisdictions, institutions, companies, and sectors. In the context of AI governance, some existing standards such as the United States National Institute of Standards and Technology's _AI Risk Management Framework_ (2023) do not contain provisions pertaining directly to the ethical sourcing of resources across proximal and distal concerns in AI value chains, while other standards such as those developed by the ISO/IEC JTC 1/SC 42 committee on AI (2023) contain some provisions related to value chain concerns such as societal and environmental benefit and sustainability of AI systems. Future iterations of these and other AI standards can be strengthened by adding and integrating more comprehensive requirements and other provisions for ethical sourcing of AI systems and ethical value chain design with existing technical design requirements.
(2) _Design and implement legislation, regulations, and other policy instruments_ that are intended to equitably distribute benefits and responsibilities for preventing harms throughout AI value chains. Although the European Union's AI Act (2021) aims to set requirements on many value chain actors involved in "the sale and the supply of software, software tools and components, pre-trained models and data, or providers of network services," other emerging legislative and regulatory frameworks such as Canada's proposed _Digital Charter Implementation Act_ target a relatively narrow set of value chain actors and resourcing activities (Attard-Frost, 2023; Attard-Frost, Brandusescu, & Lyons, 2023). Even the requirements on value chain actors set by the European Union's AI Act primarily target a set of ethical concerns related to software, machine learning models, and data resourcing activities that are highly proximal to the technical context of AI development and use, rather than comparatively distal ethical concerns related to issues such as the sourcing and environmental impacts of computing hardware and infrastructure, equitable distribution of economic gains realized through widespread AI adoption, and labor exploitation, particularly in the context of extractive transnational AI value chains (Kak & West, 2023).

Organizational policies in the public and private sectors intended to govern internal AI development, use, and procurement processes can also be created or amended to more equitably distribute benefits and prevent harms in AI value chains. For example, the Treasury Board Secretariat of Canada has implemented a policy called the _Directive on Automated Decision-making_ (2023a) that is intended to reduce the risks and maximize the benefits of automated decision-making systems developed or procured by several Canadian federal institutions. Although the Directive is accompanied by an algorithmic impact assessment tool (2023b) that is intended to identify and mitigate a variety of risks posed by developed and procured systems (including risks to "economic interests" and the environment), neither the Directive nor the tool directly identifies most of the more distal concerns related to the economic, justice, human freedom, and broader societal issues we inventory in Section 4 as potentially harmful impacts. Future public and corporate policy initiatives and future iterations of existing AI policies can be strengthened by making greater efforts to account for and intervene in the value chain actors and resourcing activities we inventory in Section 4.

Although AI value chains are highly complex in scope and scale, context-sensitive research practices and governance practices can be judiciously applied to better account for and intervene in the ethical concerns implicated in their inter-actor relationships and resourcing activities. The ethics of AI value chains remains an emerging area of research and practice, but the opportunities for future research and governance we outline above provide a foundation for developing more integrative and effective theories and practices of AI ethics and AI governance grounded in value chain thinking.
2301.13571
Exceptional-point-assisted entanglement, squeezing, and reset in a chain of three superconducting resonators
The interplay between coherent and dissipative dynamics required in various control protocols of quantum technology has motivated studies of open-system degeneracies, referred to as exceptional points (EPs). Here, we introduce a scheme for fast quantum-state synthesis using exceptional-point engineering in a lossy chain of three superconducting resonators. We theoretically find that the rich physics of EPs can be used to identify regions in the parameter space that favor a fast and quasi-stable transfer of squeezing and entanglement, or a fast reset of the system. For weakly interacting resonators with the coupling strength $g$, the obtained quasi-stabilization time scales are identified as $1/(2\sqrt{2}g)$, and reset infidelities below $10^{-5}$ are obtained with a waiting time of roughly $6/g$ in the case of weakly squeezed resonators. Our results shed light on the role of EPs in multimode Gaussian systems and pave the way for optimized distribution of squeezing and entanglement between different nodes of a photonic network using dissipation as a resource.
Wallace S. Teixeira, Vasilii Vadimov, Timm Mörstedt, Suman Kundu, Mikko Möttönen
2023-01-31T11:48:51Z
http://arxiv.org/abs/2301.13571v2
Exceptional-point-assisted entanglement, squeezing, and reset in a chain of three superconducting resonators

###### Abstract

The interplay between coherent and dissipative dynamics required in various control protocols of quantum technology has motivated studies of open-system degeneracies, referred to as exceptional points (EPs). Here, we introduce a scheme for fast quantum-state synthesis using exceptional-point engineering in a lossy chain of three superconducting resonators. We theoretically find that the rich physics of EPs can be used to identify regions in the parameter space that favor a fast and quasi-stable transfer of squeezing and entanglement, or a fast reset of the system. For weakly interacting resonators with the coupling strength \(g\), the obtained quasi-stabilization time scales are identified as \(1/(2\sqrt{2}g)\), and reset infidelities below \(10^{-5}\) are obtained with a waiting time of roughly \(6/g\) in the case of weakly squeezed resonators. Our results shed light on the role of EPs in multimode Gaussian systems and pave the way for optimized distribution of squeezing and entanglement between different nodes of a photonic network using dissipation as a resource.

## I Introduction

Quantum mechanics has provided profoundly novel ways of information processing, communication, and metrology [1]. Although non-linearity expressed by the anharmonicity of energy levels is a key metric for physical realizations of qubits, quantum harmonic systems also have a broad range of quantum-technological applications employing, e.g., squeezing and entanglement as resources [2; 3]. The efficient use of such properties in experiments typically requires quick transitions from coherent to incoherent dynamics for different stages of the protocols, and hence dissipation engineering using in-situ tunable components plays an important role towards fast control and scalability of practical quantum systems [4]. In circuit quantum electrodynamics (cQED), for example, efforts have been made to integrate devices with in-situ-tunable dissipation to prepare specific quantum states [5; 6; 7; 8; 9; 10; 11; 12], to produce fast reset [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24], and to exploit the potential benefits of open-system degeneracies, referred to as exceptional points (EPs) [25; 26; 27; 21; 28; 29]. In contrast to Hermitian degeneracies, EPs induce the coalescence of eigenvalues and eigenvectors of the dynamical matrix governing the open-system evolution, leading to critical dynamics manifested by polynomial solutions in time [30; 31]. These features are key elements for optimized heat flow [25] and sensitive parameter estimation [30]. When EPs are dynamically encircled in the parameter space, counter-intuitive effects not observed in closed systems appear, such as the breakdown of the adiabatic approximation and topological energy transfer [32; 33; 34]. Due to their novelty for the observation of open-system phenomena and applications, EPs have also been acknowledged in other physical architectures [35; 36; 37]. However, the relationship between EPs and the emergence of non-classical and non-local features in multipartite continuous-variable (CV) quantum systems has not been fully explored [38; 39; 40; 41; 42; 43]. Quantum harmonic arrays have a practical appeal in cQED for the implementation of quantum memories [44] and for the capability to simulate many-body physics [45].
Even though the transport of quantum correlations has been extensively theoretically studied in related setups [46; 47; 48; 49], the high dimension of such systems and their dissipative features render the characterization of EPs an involved procedure [50; 51; 52]. Motivated by the above-mentioned potential use cases and issues, in this work, we introduce exceptional-point engineering for squeezing and entanglement propagation. We consider a minimal setup for the production of high-order EPs, consisting of a chain of three linearly coupled superconducting resonators with independent decay channels. To some extent, our system can be described by its first and second moments, so that it can constitute an example of a Gaussian system, i.e., a CV system represented by a Gaussian Wigner function [3]. To analytically describe the EP-related phenomena, we employ the Jordan normal form of the dynamical matrix of the second moments, allowing for investigations beyond energy flow. Interestingly, we observe that even for weakly coupled resonators, operation in the vicinity of a specific second-order EP may turn the central resonator into a fast squeezing splitter and distant-entanglement generator using only initial squeezing in a single resonator. We calculate theoretical bounds for the squeezing and entanglement of the quasi-stable states and observe their rich dependence on the initial squeezing parameter. On the other hand, operation near a different, third-order EP branch provides a substantial speedup of the decay towards the ground state. Therefore, detailed knowledge of its open-system degeneracies renders the system a versatile structure for quantum protocols requiring fast stabilization or reset of the desired properties.

This article is organized as follows. In Sec. II, we present the general theory of exceptional points in noisy Gaussian systems. In Sec. III, we provide the details of the considered setup, including the characterization of its EPs. Sections IV and V are dedicated to studies of different effects arising at or near EPs, with a focus on the quasi-stabilization and decay of non-classical Gaussian states, respectively. A discussion on the use cases and limitations of EP engineering is provided in Sec. VI. The conclusions are drawn in Sec. VII.

## II Exceptional points in noisy Gaussian systems

Our general model shown in Fig. 1(a) consists of a system of \(N\) harmonic modes and of an environment such that each system mode is interacting with their local Markovian bath. The \(j\):th mode is described by annihilation and creation operators \(\hat{a}_{j}\) and \(\hat{a}_{j}^{\dagger}\), respectively, with the canonical commutation relations \([\hat{a}_{j},\hat{a}_{k}^{\dagger}]=\delta_{jk}\). We assume that the modes are linearly coupled to one another in any desired topology yielding up to quadratic terms in their coupling Hamiltonian. In Secs. III-V, we explore a linear network consisting of three lossy superconducting resonators as shown in Fig. 1(b).
By defining the quadrature operators of the \(j\):th mode as \(\hat{q}_{j}=(\hat{a}_{j}+\hat{a}_{j}^{\dagger})/\sqrt{2}\) and \(\hat{p}_{j}=-i(\hat{a}_{j}-\hat{a}_{j}^{\dagger})/\sqrt{2}\) and their \(2N\)-dimensional vector as \(\hat{\mathbf{x}}=(\hat{q}_{1},\hat{p}_{1},...,\hat{q}_{N},\hat{p}_{N})^{\top}\), the total Hermitian Hamiltonian describing the system classically driven by amplitudes \(\mathbf{c}=(c_{1},...,c_{2N})^{\top}\) can be cast into the compact quadratic form [53] \[\hat{H}=\frac{1}{2}\mathbf{\hat{x}}^{\top}\mathbf{H}\hat{\mathbf{x}}+\mathbf{c }^{\top}\mathbf{\Omega}\mathbf{\hat{x}}, \tag{1}\] where we dropped possible constant energy offsets, introduced the \(2N\times 2N\) symmetric matrix \(\mathbf{H}\) carrying the internal and mode-mode coupling energies, and utilized the symplectic matrix \[\mathbf{\Omega}=\bigoplus_{j=1}^{N}\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right). \tag{2}\] The commutation relations between the elements of \(\mathbf{\hat{x}}\) read \([\mathbf{\hat{x}}_{j},\mathbf{\hat{x}}_{k}]=i\mathbf{\Omega}_{jk}\). Note that \(\{\hat{q}_{j}\}\) and \(\{\hat{p}_{j}\}\) play the role of generalized dimensionless position and momentum operators, such that for superconducting \(LC\) circuits they are related to flux and charge operators, respectively [54]. After tracing out the environmental degrees of freedom, the temporal evolution of the reduced density operator of the system, \(\hat{\rho}\), is given by the Lindblad master equation \(\mathrm{d}\hat{\rho}/\mathrm{d}t=-i[\hat{H},\hat{\rho}]/\hbar+\mathcal{L}_{ \downarrow}(\hat{\rho})+\mathcal{L}_{\uparrow}(\hat{\rho})\), where \[\mathcal{L}_{l}(\hat{\rho})=\frac{1}{2\hbar}\sum_{j=1}^{N}\left[2\hat{L}_{j}^ {l}\hat{\rho}(\hat{L}_{j}^{l})^{\dagger}-\left\{(\hat{L}_{j}^{l})^{\dagger} \hat{L}_{j}^{l},\hat{\rho}\right\}\right], \tag{3}\] describes the incoherent dynamics of the system associated to the jump operators \(\{\hat{L}_{j}^{l}\}\), where the labels \(l=\downarrow,\uparrow\) refer to thermal emission and absorption, respectively. We restrict to the case where such operators are local and linear combinations of the elements of \(\mathbf{\hat{x}}\), i.e., \(\hat{L}_{j}^{l}=(\mathbf{u}_{j}^{l})^{\top}\mathbf{\Omega}\mathbf{\hat{x}}\), with coefficients given by the \(2N\)-dimensional vector \(\mathbf{u}_{j}^{l}\) that has only a single or two adjacent non-zero elements. For example, this corresponds to jump operators with the form \(\hat{L}_{j}^{l}=n_{j}^{l}\hat{a}_{j}+m_{j}^{l}\hat{a}_{j}^{\dagger}\), thus encompassing \(N\) individual squeezed thermal environments [55]. The case in which both thermal excitation and bath squeezing are negligible is thoroughly investigated in Secs. III-V for \(N=3\). Under the above conditions and for an initial Gaussian state of the \(N\) oscillators, the dynamics of the system can be fully characterized by the so-called mean vector and covariance matrix (CM), the components of which are \(\langle\mathbf{\hat{x}}_{j}\rangle=\mathrm{Tr}(\mathbf{\hat{x}}_{j}\hat{\rho})\) and \(\mathbf{V}_{jk}=\frac{1}{2}\left(\langle\mathbf{\hat{x}}_{j}\mathbf{\hat{x}}_ {k}\rangle+\langle\mathbf{\hat{x}}_{k}\mathbf{\hat{x}}_{j}\rangle\right)- \langle\mathbf{\hat{x}}_{j}\rangle\langle\mathbf{\hat{x}}_{k}\rangle\), respectively. Here, we aim to solve the dynamics of the CM, since it captures all squeezing and non-local properties of the system. By differentiating \(\mathbf{V}\) with respect to time and using Eq. 
(3), we verify that the CM evolves according to the differential Lyapunov equation [56]

\[\frac{\mathrm{d}\mathbf{V}}{\mathrm{d}t}=\mathbf{\Gamma}\mathbf{V}+\mathbf{V}\mathbf{\Gamma}^{\top}+\mathbf{D}, \tag{4}\]

where we defined the \(2N\times 2N\) matrices \(\mathbf{\Gamma}=\mathbf{\Omega}(\mathbf{H}-\mathrm{Im}\,\mathbf{\Upsilon})/\hbar\), \(\mathbf{D}=\mathbf{\Omega}\,\mathrm{Re}\,\mathbf{\Upsilon}\,\mathbf{\Omega}^{\top}/\hbar\), and \(\mathbf{\Upsilon}=\sum_{l,j}[\mathbf{u}_{j}^{l}(\mathbf{u}_{j}^{l})^{\dagger}]\). The CM is a real, symmetric and positive-definite matrix. As a compact statement of the uncertainty principle, the CM must also fulfill the condition \(\mathbf{V}+i\mathbf{\Omega}/2\geq 0\) [57]. Below we focus on the scenario where \(\mathbf{\Gamma}\) and \(\mathbf{D}\) are independent of time.

Figure 1: (a) Schematic diagram of the general system considered in this paper consisting of \(N\) harmonic quantum modes linearly coupled to one another (black lines). In addition, each mode is coupled to their own Markovian environment (rounded squares). (b) Particular realization of the system explored in this work, where three superconducting resonators are capacitively coupled in a linear-chain configuration. In addition, each resonator has their own drive lines (triangles), using which the system can be prepared and measured. The decay rates of resonators R2 and R3 can be controlled by quantum-circuit refrigerators (QCRs) placed at the resonator input ports. Each QCR is comprised of a normal-metal–insulator–superconductor (NIS) junction and can remove photons incoherently from the system mediated by electron tunneling at specific bias-voltage pulses [22; 23].

Given an initial CM \(\mathbf{V}(0)\equiv\mathbf{V}_{0}\), the solution of Eq. (4) in this case is given by [58]

\[\mathbf{V}(t)=\mathrm{e}^{\boldsymbol{\Gamma}t}\left(\mathbf{V}_{0}-\mathbf{V}_{\mathrm{ss}}\right)\mathrm{e}^{\boldsymbol{\Gamma}^{\top}t}+\mathbf{V}_{\mathrm{ss}}, \tag{5}\]

where \(\mathbf{V}_{\mathrm{ss}}\) is the steady-state CM obtained as the solution of the algebraic Lyapunov equation \(\boldsymbol{\Gamma}\mathbf{V}_{\mathrm{ss}}+\mathbf{V}_{\mathrm{ss}}\boldsymbol{\Gamma}^{\top}+\mathbf{D}=0\). We observe from Eqs. (4) and (5) that \(\boldsymbol{\Gamma}\) has the role of a dynamical matrix so that all possible EPs are determined by its structure. Since the entries of \(\boldsymbol{\Gamma}\) are real numbers with units of angular frequency, its eigenvalues are the complex-conjugate pairs \(\lambda_{\mathbf{s}_{m}}^{\pm}\). Here, we define the index \(\mathbf{s}_{m}=(m,\mu_{m})\) to refer to the \(m\):th pair of the eigenvalues of \(\boldsymbol{\Gamma}\), each eigenvalue having a multiplicity \(\mu_{m}\). Observe that the maximum allowed multiplicity is thus \(\max(\mu_{m})=N\). The matrix \(\boldsymbol{\Gamma}\) admits a Jordan normal form \(\boldsymbol{\Gamma}=\mathbf{P}\mathbf{J}\mathbf{P}^{-1}\), where \(\mathbf{P}\) is a non-singular matrix and \(\mathbf{J}=\mathrm{diag}[\mathbf{J}_{\mathbf{s}_{1}}^{-}(\lambda_{\mathbf{s}_{1}}^{-}),...,\mathbf{J}_{\mathbf{s}_{p}}^{+}(\lambda_{\mathbf{s}_{p}}^{+})]\). The Jordan blocks \(\mathbf{J}_{\mathbf{s}_{m}}^{\pm}(\lambda_{\mathbf{s}_{m}}^{\pm})\) can be decomposed as \(\mu_{m}\times\mu_{m}\) matrices \(\mathbf{J}_{\mathbf{s}_{m}}^{\pm}(\lambda_{\mathbf{s}_{m}}^{\pm})=\lambda_{\mathbf{s}_{m}}^{\pm}\mathbf{I}_{\mu_{m}}+\mathbf{N}_{\mu_{m}}\), with \(\mathbf{I}_{\mu_{m}}\) being the identity matrix and \(\mathbf{N}_{\mu_{m}}\) having the elements above the diagonal filled with ones.
Naturally, the Jordan blocks for \(\mu_{m}=1\) are just the scalars \(\lambda_{\mathbf{s}_{m}}^{\pm}\). With these definitions, Eq. (5) can be rewritten as

\[\mathbf{V}(t)=\mathbf{P}\mathrm{e}^{\mathbf{J}t}\mathbf{P}^{-1}\left(\mathbf{V}_{0}-\mathbf{V}_{\mathrm{ss}}\right)\left(\mathbf{P}^{-1}\right)^{\top}\mathrm{e}^{\mathbf{J}^{\top}t}\mathbf{P}^{\top}+\mathbf{V}_{\mathrm{ss}}, \tag{6}\]

where \(\mathrm{e}^{\mathbf{J}t}=\mathrm{diag}(\mathrm{e}^{\lambda_{\mathbf{s}_{1}}^{-}t}\mathrm{e}^{\mathbf{N}_{\mu_{1}}t},...,\mathrm{e}^{\lambda_{\mathbf{s}_{p}}^{+}t}\mathrm{e}^{\mathbf{N}_{\mu_{p}}t})\). The emergence of EPs and the associated critical dynamics of the CM correspond to the cases where the dynamical matrix \(\boldsymbol{\Gamma}\) becomes non-diagonalizable, i.e., for any \(\mu_{m}>1\). In other words, degeneracies in the spectrum of \(\boldsymbol{\Gamma}\) produce nilpotent matrices \(\mathbf{N}_{\mu_{m}}t\), the exponentials of which yield polynomials in time. Hereafter, these non-Hermitian degeneracies will be referred to as EP-\(\mu_{m}\). Considering the definition of \(\boldsymbol{\Gamma}\), we remark that the term \(\boldsymbol{\Omega}\mathbf{H}\) itself does not promote critical dynamics as it gives rise to unitary evolution of the CM. The production of EPs must be accompanied by the incoherent processes caused by the local environments and attributed to the term \(\boldsymbol{\Omega}\,\mathrm{Im}\,\boldsymbol{\Upsilon}\). In summary, Eq. (6) is valid for any time-independent matrices \(\boldsymbol{\Gamma}\) and \(\mathbf{D}\) describing the evolution of a system of coupled quantum harmonic oscillators in noisy Gaussian channels yielding the steady-state CM \(\mathbf{V}_{\mathrm{ss}}\). At an EP, Eq. (6) reveals that the solution linked to the critical dynamics is an exponential function multiplied by a polynomial, which will be explored below in specific cases. Alternatively, the description of EPs for quadratic Liouvillians, such as the one related to Eq. (3), may be given in terms of annihilation and creation operators as recently developed in Ref. [59].
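For numerical work, Eqs. (4)–(6) can be evaluated directly with standard linear-algebra routines. The following minimal Python sketch, assuming NumPy and SciPy are available, propagates the CM of Eq. (5); the function name `propagate_cm` and the single-mode parameter values are illustrative choices of ours, not part of the analytical treatment.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

def propagate_cm(Gamma, D, V0, t):
    """Covariance matrix V(t) of Eq. (5) for time-independent Gamma and D."""
    # Steady state: Gamma Vss + Vss Gamma^T + D = 0;
    # solve_continuous_lyapunov(A, Q) solves A X + X A^H = Q.
    Vss = solve_continuous_lyapunov(Gamma, -D)
    E = expm(Gamma * t)
    return E @ (V0 - Vss) @ E.T + Vss

# Sanity check with a single damped mode [one K-block of Eq. (9) below],
# using illustrative values omega = 5000 and kappa = 1e-3 in units of g = 1.
omega, kappa, r = 5000.0, 1e-3, 1.0
Gamma = np.array([[-kappa / 2, omega], [-omega, -kappa / 2]])
D = (kappa / 2) * np.eye(2)  # cf. Eq. (10)
V0 = 0.5 * np.diag([np.exp(2 * r), np.exp(-2 * r)])  # squeezed vacuum
print(propagate_cm(Gamma, D, V0, t=10.0))  # relaxes towards diag(1/2, 1/2)
```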
## III Three coupled resonators under individual losses

The system and its environment considered in this work is depicted in Fig. 1(b). Three superconducting resonators, R1, R2, and R3, are capacitively coupled in a linear-chain configuration through a fixed coupling constant \(g>0\). We focus on a single electromagnetic mode for each resonator, which, including the coherent couplings, defines our system. Each mode may dissipate its energy into its independent linear bath. Nevertheless, quantum effects may emerge at low temperatures, for sufficiently high quality factors, and for non-classical initial states [54], and consequently we need to employ a quantum-mechanical model. In the single-mode and rotating-wave approximations, the Hamiltonian of the system reads

\[\hat{H}=\hbar\sum_{j=1}^{3}\omega_{j}\left(\hat{a}_{j}^{\dagger}\hat{a}_{j}+\frac{1}{2}\right)+\hbar g(\hat{a}_{1}\hat{a}_{2}^{\dagger}+\hat{a}_{2}\hat{a}_{3}^{\dagger}+\mathrm{h.c.}), \tag{7}\]

where \(\omega_{j}\) is the fundamental angular frequency of the \(j\):th resonator, \(\{\hat{a}_{j}\}\) are the corresponding ladder operators defined as in Sec. II, and h.c. refers to the Hermitian conjugate. The losses of the system are modeled here as in Eq. (3), with jump operators \(\hat{L}_{j}^{\downarrow}=\sqrt{\hbar\kappa_{j}}\hat{a}_{j}\) and decay rates \(\kappa_{j}>0\), for \(j=1,2,3\). Some of the decay rates can be adjusted experimentally through the QCRs shown in Fig. 1(b). As we show below, to produce EP-3 with degenerate resonators, we need asymmetric decay rates, a scenario which can be realized by the two independent QCRs shown in Fig. 1(b). In the following analysis, thermal excitations are neglected so that \(\hat{L}_{j}^{\uparrow}\approx 0\). By writing the ladder operators in terms of the quadrature operators as \(\hat{a}_{j}=(\hat{q}_{j}+i\hat{p}_{j})/\sqrt{2}\) and using the notation of Sec. II, the \(6\times 6\) dynamical matrix \(\boldsymbol{\Gamma}\) becomes

\[\boldsymbol{\Gamma}=\left(\begin{array}{ccc}\mathbf{K}_{1}&\mathbf{G}&\mathbf{0}_{2}\\ \mathbf{G}&\mathbf{K}_{2}&\mathbf{G}\\ \mathbf{0}_{2}&\mathbf{G}&\mathbf{K}_{3}\end{array}\right), \tag{8}\]

where \(\mathbf{0}_{2}\) is the \(2\times 2\) null matrix and

\[\mathbf{K}_{j}=\left(\begin{array}{cc}-\frac{\kappa_{j}}{2}&\omega_{j}\\ -\omega_{j}&-\frac{\kappa_{j}}{2}\end{array}\right),\quad\mathbf{G}=\left(\begin{array}{cc}0&g\\ -g&0\end{array}\right). \tag{9}\]

By denoting the single-mode CM of the vacuum state as \(\mathbf{V}_{\mathrm{vac}}^{(j)}=\mathrm{diag}\left(1,1\right)/2\), one readily obtains

\[\mathbf{D}=\bigoplus_{j=1}^{3}\kappa_{j}\mathbf{V}_{\mathrm{vac}}^{(j)},\ \mathbf{V}_{\mathrm{ss}}=\bigoplus_{j=1}^{3}\mathbf{V}_{\mathrm{vac}}^{(j)}, \tag{10}\]

the latter corresponding to the CM of any product of three coherent states. Since the jump operators here do not promote incoherent displacements, the steady state is actually the three-mode vacuum state \(|0\rangle_{1}|0\rangle_{2}|0\rangle_{3}\) as long as \(\kappa_{j}>0\).

### Characterization of exceptional points

Finding the EPs directly from the spectrum of \(\boldsymbol{\Gamma}\) may be challenging as one needs to solve a \(2N\):th degree polynomial equation, or in the studied case, a sextic equation. However, owing to the absence of counter-rotating terms in the form of \(\hat{H}\) here, the characterization of EPs can be simplified to the study of the dynamical equation for the three-dimensional vector \(\mathbf{a}=\left(\langle\hat{a}_{1}\rangle,\langle\hat{a}_{2}\rangle,\langle\hat{a}_{3}\rangle\right)^{\top}\). By moving Eq. (3) to a frame rotating with \(\omega_{1}\), one can obtain \(\dot{\mathbf{a}}=-i\mathcal{H}\mathbf{a}\), with \(\mathcal{H}\) having the role of an effective non-Hermitian Hamiltonian. Explicitly, we have

\[\mathcal{H}=\left(\begin{array}{ccc}-i\frac{\kappa_{1}}{2}&g&0\\ g&\delta_{2}-i\frac{\kappa_{2}}{2}&g\\ 0&g&\delta_{3}-i\frac{\kappa_{3}}{2}\end{array}\right), \tag{11}\]

where \(\delta_{2}=\omega_{2}-\omega_{1}\) and \(\delta_{3}=\omega_{3}-\omega_{1}\) are frequency detunings. Without loss of generality, we assume that the parameters \(g\), \(\omega_{1}\), and \(\kappa_{1}\) are fixed. Thus it is convenient to express the parameters of R2 and R3 with respect to those of R1. We proceed with this parametrization using complex-valued parameters \(\{\varepsilon_{k}\}\) such that for \(k=2,3\), we have

\[\delta_{k}(\varepsilon_{k})=\sqrt{2}g\,\mathrm{Im}(\varepsilon_{k}),\ \ \kappa_{k}(\varepsilon_{k})=\kappa_{1}+2\sqrt{2}g\,\mathrm{Re}(\varepsilon_{k}). \tag{12}\]
As detailed in Appendix A, degeneracies in the spectrum of \(\mathcal{H}\) appear provided that

\[f(\varepsilon)=\frac{1}{2}\left[\varepsilon\pm\sqrt{\frac{\varepsilon^{4}+10\varepsilon^{2}-2\pm 2\left(1+2\varepsilon^{2}\right)^{\frac{3}{2}}}{\varepsilon^{2}}}\right], \tag{13}\]

where \(\varepsilon=\varepsilon_{3}\) and \(f(\varepsilon)=\varepsilon_{2}\). Note that the complex-valued function \(f(\varepsilon)\) presents four branches indicated by the signs '\(\pm\)' as shown for a purely real \(\varepsilon\) in Fig. 2a. At the degeneracies of \(\mathcal{H}\), such a matrix has at most two distinct eigenvalues, from which the effective detunings and decay rates of the normal modes are extracted as \(\delta^{\mathrm{eff}}_{j}(\varepsilon)=\sqrt{2}g\,\mathrm{Im}[h_{j}(\varepsilon)]\) and \(\kappa^{\mathrm{eff}}_{j}(\varepsilon)=\kappa_{1}+2\sqrt{2}g\,\mathrm{Re}[h_{j}(\varepsilon)]\) (Appendix A), where

\[h_{1}(\varepsilon) =\frac{f^{3}-\varepsilon f^{2}-(\varepsilon^{2}+4)f+\varepsilon^{3}+\varepsilon/2}{f^{2}-\varepsilon f+\varepsilon^{2}-3},\]
\[h_{2}(\varepsilon) =h_{3}(\varepsilon)=\frac{1}{4}\left[\frac{2\varepsilon f^{2}+2(\varepsilon^{2}+1)f-7\varepsilon}{f^{2}-\varepsilon f+\varepsilon^{2}-3}\right], \tag{14}\]

and we write \(f=f(\varepsilon)\) for brevity. Consequently, the degenerate eigenvalues of \(\mathbf{\Gamma}\) are given by the pairs (Appendix A)

\[\lambda^{\pm}_{\mathrm{s}_{j}}(\varepsilon)=-\frac{\kappa^{\mathrm{eff}}_{j}(\varepsilon)}{2}\pm i\left[\omega_{1}+\delta^{\mathrm{eff}}_{j}(\varepsilon)\right], \tag{15}\]

which coincide at an EP-3. The rich structure of the decay rates and frequencies of the normal modes is shown in Fig. 2b for a purely real \(\varepsilon\). Without imposing further restrictions, the considered open system presents six EP-3, two of which are obtained for \(\varepsilon=2f(\varepsilon)=\pm 2\), so that all modes are degenerate, \(\kappa_{2}=\kappa_{1}\pm 2\sqrt{2}g\), and \(\kappa_{3}=\kappa_{1}\pm 4\sqrt{2}g\). These cases correspond to the square-root singularity of \(f(\varepsilon)\) and are highlighted in Fig. 2. The remaining four EP-3 are obtained with \(f(\varepsilon)=(\pm 3\sqrt{3}\pm i)/(2\sqrt{2})\) and \(\varepsilon=2i\,\mathrm{Im}[f(\varepsilon)]=\pm i/\sqrt{2}\), thus requiring equal decay rates for R1 and R3, \(\kappa_{2}=\kappa_{1}\pm 3\sqrt{3}g\), in addition to the detunings \(\delta_{2}=\pm g/2\) and \(\delta_{3}=\pm g\). The degeneracy map for such cases is shown in Fig. 6 of Appendix A for completeness. All other cases expressed through Eqs. (13) and (14) are associated with EP-2. Our numerical tests show the coalescence of eigenvectors of \(\mathcal{H}\) following the branches \(f(\varepsilon)\), indeed indicating open-system degeneracies. The Jordan decompositions of \(\mathbf{\Gamma}\) yielding polynomial-in-time features of the dynamics are shown in Appendix B for relevant EPs in this work. We emphasize that the experimental feasibility of EP engineering in the present model is strongly dependent on the physical limitations of the setup. For instance, to obtain the four instances of EP-3 with non-degenerate frequencies, one needs frequency detunings of the order of \(g/(2\pi)\), which are typically much smaller than the frequencies of the superconducting resonators themselves [54]. Hereafter, we restrict our discussion to degenerate resonators, i.e., \(\mathrm{Im}(\varepsilon)=\mathrm{Im}[f(\varepsilon)]=0\).
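This degeneracy structure is easy to probe numerically. The short Python sketch below, an illustration of ours assuming only NumPy, constructs the effective non-Hermitian Hamiltonian of Eq. (11) for degenerate frequencies and evaluates its eigenvalues at the EP-3 with \(\varepsilon=2f(\varepsilon)=2\):

```python
import numpy as np

def h_eff(g, k1, k2, k3, d2=0.0, d3=0.0):
    """Effective non-Hermitian Hamiltonian of Eq. (11) in the frame
    rotating at omega_1."""
    return np.array([[-0.5j * k1, g, 0.0],
                     [g, d2 - 0.5j * k2, g],
                     [0.0, g, d3 - 0.5j * k3]])

# EP-3 at eps = 2 f(eps) = 2, cf. Eq. (12): kappa2 = kappa1 + 2*sqrt(2)*g
# and kappa3 = kappa1 + 4*sqrt(2)*g, with all detunings zero.
g, k1 = 1.0, 1e-3
H = h_eff(g, k1, k1 + 2 * np.sqrt(2) * g, k1 + 4 * np.sqrt(2) * g)
print(np.linalg.eigvals(H))
# Analytically, the three eigenvalues coalesce at -1j*(k1/2 + sqrt(2)*g);
# numerically they split by roughly the cube root of machine epsilon, as
# is typical when diagonalizing a defective (non-diagonalizable) matrix.
```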
Hereafter, we restrict our discussion to degenerate res Figure 2: Exceptional-point engineering for a linear chain of three lossy resonators with degenerate angular frequencies \(\omega_{1}=\omega_{3}\), expressed by a purely real parameter \(\varepsilon=\varepsilon_{3}\), see Eqs. (12)–(14). (a) Decay rate (top panel) and frequency (bottom panel) offsets of resonator R2 as functions of the decay rate offset of resonator R3, expressed by the complex-valued function \(f(\varepsilon)=\varepsilon_{2}\) defined in Eq. (13). (b) Effective decay rate (top) and effective frequency (bottom) offsets of the eigenmodes of the system as functions of the decay rate offset of resonator R3, expressed by the complex-valued functions \(h_{j}(\varepsilon)\) defined in Eq. (14). All offsets are given with respect to the parameters of resonator R1. In (b), solid (dashed) curves represent the single (double) roots of the characteristic polynomial of \(\mathcal{H}\). In all cases, the labels \(++\), \(-+\), \(+-\), and \(--\) indicate the four branches of \(f(\varepsilon)\) obtained from the corresponding selection of signs in Eq. (13). The vertical dashed lines in all panels highlight the values of \(\varepsilon\) producing EP–3. The shaded area in (a) indicates the relevant region of the \(\mathrm{Re}(\varepsilon_{3})\)–\(\mathrm{Re}(\varepsilon_{2})\) parameter space for this work. onators, i.e., \(\mathrm{Im}(\varepsilon)=\mathrm{Im}[f(\varepsilon)]=0\). By also considering \(\kappa_{1}\) as the smallest decay rate, another restriction for obtaining EPs is imposed, such that both \(\mathrm{Re}(\varepsilon)\geq 0\) and \(\mathrm{Re}[f(\varepsilon)]\geq 0\). In this case, the only allowed branches of \(f(\varepsilon)\) are '\(+-\)' and '\(--\)' for \(\varepsilon\geq 2\), and '\(++\)' for \(\varepsilon\geq 0\), see the shaded region in Fig. 2a. In particular, the branch '\(++\)' at \(\varepsilon=0\) yields weakly dissipative normal modes, with one of them decaying according to \(\kappa_{1}\), see Fig. 2b and Eq. (14). This behavior suggests that a quasi-stabilization of some properties of the system can be obtained with the combination of a small \(\kappa_{1}\) and a proper choice of the EP, as explored in detail in Sec. IV. ### Single-mode squeezing and bipartite entanglement Below, we specifically investigate single-mode squeezing and bipartite entanglement for the three-resonator system. For Gaussian evolution, these quantities can be addressed directly from the specific partitions of the total CM \[\mathbf{V}=\left(\begin{array}{ccc}\mathbf{V}^{(1)}&\mathbf{C}^{(12)}& \mathbf{C}^{(13)}\\ \mathbf{C}^{(12)\top}&\mathbf{V}^{(2)}&\mathbf{C}^{(23)}\\ \mathbf{C}^{(13)\top}&\mathbf{C}^{(23)\top}&\mathbf{V}^{(3)}\end{array}\right), \tag{16}\] where \(\mathbf{V}^{(j)}\) is the reduced CM of resonator R\(j\) and \(\mathbf{C}^{(jk)}\) is the intermodal correlation matrix between resonators R\(j\) and R\(k\)[53]. 
Since all single-mode Gaussian states can be written as squeezed thermal states apart from local displacements, the components of the reduced CM of resonator R\(j\) can be cast into the form [60]

\[\mathbf{V}^{(j)}_{11} =(\bar{N}_{j}+1/2)[\cosh(2r_{j})+\sinh(2r_{j})\cos\phi_{j}],\]
\[\mathbf{V}^{(j)}_{22} =(\bar{N}_{j}+1/2)[\cosh(2r_{j})-\sinh(2r_{j})\cos\phi_{j}],\]
\[\mathbf{V}^{(j)}_{12} =(\bar{N}_{j}+1/2)\sinh(2r_{j})\sin\phi_{j}, \tag{17}\]

where \(r_{j}\) and \(\phi_{j}\) are real-valued quantities defining the squeezing parameter \(\xi_{j}=r_{j}e^{i\phi_{j}}\) and \(\bar{N}_{j}\) is the effective thermal occupation number of resonator R\(j\). As a consequence, one can extract \(r_{j}\) and \(\bar{N}_{j}\) as

\[r_{j} =\frac{1}{2}\sinh^{-1}\left[\frac{\sqrt{(\mathbf{V}^{(j)}_{11}-\mathbf{V}^{(j)}_{22})^{2}+4\mathbf{V}^{(j)2}_{12}}}{2(\bar{N}_{j}+1/2)}\right],\]
\[\bar{N}_{j} =\sqrt{\det\mathbf{V}^{(j)}}-\frac{1}{2}, \tag{18}\]

and the single-mode purity is readily given by \(\mathcal{P}_{j}=(2\bar{N}_{j}+1)^{-1}\). While bipartite entanglement can be quantified by the reduced von Neumann entropy given a pure state of the complete system [61], an entanglement measure for mixed states is not uniquely defined [62]. Here, we focus on the concept of logarithmic negativity [63], which is based on the Peres-Horodecki separability criterion [64; 65] and fulfills the conditions for an entanglement monotone [66]. Given Eq. (16) and considering the subsystems R\(j\) and R\(k\) (\(j<k\)), one can write their joint CM as

\[\mathbf{V}^{(jk)}=\left(\begin{array}{cc}\mathbf{V}^{(j)}&\mathbf{C}^{(jk)}\\ \mathbf{C}^{(jk)\top}&\mathbf{V}^{(k)}\end{array}\right). \tag{19}\]

For Gaussian states, the logarithmic negativity, \(\mathcal{E}_{jk}\), can then be computed as [63; 67]

\[\mathcal{E}_{jk}=\max[0,-\log_{2}(2\tilde{\nu}_{jk}^{-})], \tag{20}\]

where \(\tilde{\nu}_{jk}^{-}=\{\tilde{\Delta}_{jk}-[\tilde{\Delta}_{jk}^{2}-4\det\mathbf{V}^{(jk)}]^{\frac{1}{2}}\}^{\frac{1}{2}}/\sqrt{2}\) is the smallest symplectic eigenvalue of \(\tilde{\mathbf{V}}^{(jk)}\), which corresponds to the two-mode CM obtained after the Peres-Horodecki partial transposition of the associated bipartite density matrix, and \(\tilde{\Delta}_{jk}=\det\mathbf{V}^{(j)}+\det\mathbf{V}^{(k)}-2\det\mathbf{C}^{(jk)}\) [65; 67]. The inequality \(\tilde{\nu}_{jk}^{-}\geq 1/2\) is a necessary and sufficient condition for separability of bipartite Gaussian systems of two modes [65; 67].
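For reference, both diagnostics are short computations once the CM partitions are at hand. The sketch below, our own NumPy illustration with hypothetical helper names, implements Eqs. (18) and (20):

```python
import numpy as np

def single_mode_params(Vj):
    """Squeezing r_j and thermal occupation N_j from a 2x2 reduced CM, Eq. (18)."""
    Nbar = np.sqrt(np.linalg.det(Vj)) - 0.5
    off = np.sqrt((Vj[0, 0] - Vj[1, 1]) ** 2 + 4.0 * Vj[0, 1] ** 2)
    r = 0.5 * np.arcsinh(off / (2.0 * (Nbar + 0.5)))
    return r, Nbar

def log_negativity(Vjk):
    """Logarithmic negativity E_jk of Eq. (20) from the 4x4 joint CM of Eq. (19)."""
    A, B, C = Vjk[:2, :2], Vjk[2:, 2:], Vjk[:2, 2:]
    Delta = np.linalg.det(A) + np.linalg.det(B) - 2.0 * np.linalg.det(C)
    nu = np.sqrt(0.5 * (Delta - np.sqrt(Delta ** 2 - 4.0 * np.linalg.det(Vjk))))
    return max(0.0, -np.log2(2.0 * nu))
```

Applied to a \(6\times 6\) CM obtained from Eq. (5), the reduced CM \(\mathbf{V}^{(1)}\) corresponds to the upper-left \(2\times 2\) block, and the joint CM \(\mathbf{V}^{(13)}\) to the rows and columns associated with the quadratures of R1 and R3.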
## IV Quasi-stabilization of squeezing and entanglement

In this section, we study the propagation of single-mode squeezing and bipartite entanglement in the open quantum system of Fig. 1b. The initial state is chosen as \(|0\rangle_{1}|0\rangle_{2}\hat{S}_{3}(r)|0\rangle_{3}\), where \(\hat{S}_{3}(r)=\exp\Bigl[r(\hat{a}_{3}^{\dagger 2}-\hat{a}_{3}^{2})/2\Bigr]\) is the single-mode squeezing operator of R3 and \(r\geq 0\). Such a state has the CM

\[\mathbf{V}_{0}=\frac{1}{2}\mathrm{diag}\left(1,1,1,1,\mathrm{e}^{2r},\mathrm{e}^{-2r}\right), \tag{21}\]

which indicates that the variances of R3 are initially modified by the factors \(\mathrm{e}^{\pm 2r}\). We employ Eq. (5) to numerically obtain the \(6\times 6\) time-evolved CM \(\mathbf{V}(t)\) at different points of the parameter space. Here, we set \(\kappa_{1}=\kappa_{3}\) as the smallest decay rates of the system and test different \(\kappa_{2}=\kappa_{1}+2\sqrt{2}g\,\mathrm{Re}(\varepsilon_{2})\) with \(\mathrm{Re}(\varepsilon_{2})\geq 0\) and \(\mathrm{Im}(\varepsilon_{2})=0\). Within these conditions, the only allowed EP branch is '\(++\)', so that an EP-2 is produced at \(f(\varepsilon=0)=2\), see Eq. (13) and Fig. 2a.

In Figure 3a, we observe the emergence of squeezed thermal states for resonator R1 and bipartite quantum correlations expressed through the logarithmic negativity \(\mathcal{E}_{13}\), with a clear passage from underdamped to overdamped dynamics with increasing \(\kappa_{2}\). The squeezing degree of R2, along with the logarithmic negativities \(\mathcal{E}_{12}\) and \(\mathcal{E}_{23}\) (data not shown), is rapidly suppressed for large ratios \(\kappa_{2}/g\). On the other hand, the small values of \(\kappa_{j}/g\), \(j=1,3\), help to delay the decay of the system towards the three-mode vacuum state, and this quasi-stability tends to be achieved faster near the critical-damping regime produced by the EP-2. Such a behavior is not present at the EP-2 if R1 is directly connected to R3, which reduces the dimension of the system to \(N=2\). In such a case, the only two normal modes of the system decay at equal rates [25].

The maximum achieved values of \(r_{1}\), \(\bar{N}_{1}\), and \(\mathcal{E}_{13}\) as functions of the initial squeezing parameter \(r\) for the system dynamics at the EP-2 are shown in Figure 3b. Their values in the limit \(\kappa_{j}\to 0\), \(j=1,3\), and \(t\to\infty\), can be estimated directly from Eqs. (18) and (20) with the help of the Jordan decomposition of \(\mathbf{\Gamma}\) shown in Appendix B. One readily obtains \(r_{2}^{\star}=\bar{N}_{2}^{\star}=\mathcal{E}_{12}^{\star}=\mathcal{E}_{23}^{\star}=0\), whereas

\[r_{1}^{\star}=r_{3}^{\star} =\frac{1}{2}\log\left[\frac{3+\mathrm{e}^{2r}}{\sqrt{10+6\cosh(2r)}}\right],\]
\[\bar{N}_{1}^{\star}=\bar{N}_{3}^{\star} =\frac{1}{8}\left[\sqrt{10+6\cosh(2r)}-4\right],\]
\[\mathcal{E}_{13}^{\star} =\frac{1}{2}\left[1-\log_{2}(1+\mathrm{e}^{-2r})\right]. \tag{22}\]

The superscripts '\(\star\)' in Eqs. (22) indicate that such quantities are bounds for the quasi-stabilized states, shown as dashed lines in Fig. 3. Interestingly, we can generate entanglement between resonators R1 and R3 although the entanglement with resonator R2 is rapidly suppressed. From Fig. 3b and Eqs. (22), we observe that the squeezing splitting increases linearly with \(r\) for \(r\ll 1\), where thermal occupancy is insignificant. The squeezing-splitting capacity \(r_{1}^{\star}/r\) and the degree of entanglement between R1 and R3 tend to saturate to \(1/2\) in the limit \(r\to\infty\) at the expense of also thermally populating these resonators. Using the decibel scale defined by \(r=10\log_{10}(e^{2r})\) dB [68], an initial amount of squeezing \(r\approx 3\) dB is roughly converted into squeezed states with \(r_{1}^{\star}=r_{3}^{\star}\approx 0.772\) dB and purities \(\mathcal{P}_{1}^{\star}=\mathcal{P}_{3}^{\star}\approx 0.997\), with \(\mathcal{E}_{13}^{\star}\approx 0.207\). Despite producing a faster decay towards the actual steady state of the system, an increase of two orders of magnitude in \(\kappa_{1}/g\) does not provide significant differences in the maximum quantities for small \(r\).
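The bounds of Eq. (22) are elementary to tabulate; a brief sketch (illustrative code of ours, with a hypothetical function name) is:

```python
import numpy as np

def quasi_stable_bounds(r):
    """Long-time bounds of Eq. (22) in the limit kappa_1 -> 0."""
    s = np.sqrt(10.0 + 6.0 * np.cosh(2.0 * r))
    r1 = 0.5 * np.log((3.0 + np.exp(2.0 * r)) / s)
    N1 = (s - 4.0) / 8.0
    E13 = 0.5 * (1.0 - np.log2(1.0 + np.exp(-2.0 * r)))
    return r1, N1, E13

for r in (0.1, 1.0, 5.0):
    r1, N1, E13 = quasi_stable_bounds(r)
    print(f"r={r}: r1*={r1:.4f}, N1*={N1:.4f}, E13*={E13:.4f}")
# For r << 1 the splitting grows linearly in r; for r -> infinity both
# r1*/r and E13* saturate to 1/2, as discussed above.
```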
For \(\kappa_{1}\ll\kappa_{2}\), we obtain two eigenmodes with frequency detunings \(\delta_{\pm}^{\mathrm{eff}}\approx\pm\operatorname{Im}(\sqrt{\kappa_{2}^{2}-32g^{2}})/4\) and dissipation rates \(\kappa_{\pm}^{\mathrm{eff}}\approx\kappa_{2}/2\pm\operatorname{Re}(\sqrt{\kappa_{2}^{2}-32g^{2}})/2\), which coalesce at \(\kappa_{2}\approx 4\sqrt{2}g\). The frequency detuning \(\delta_{0}^{\mathrm{eff}}=0\) and dissipation rate \(\kappa_{0}^{\mathrm{eff}}=\kappa_{1}\) are preserved, thus indicating that one of the eigenmodes remains hidden from the dissipation of resonator R2. Since the speed of quasi-stabilization for the squeezing and entanglement of resonator R1 clearly depends on \(\kappa_{2}\) (Fig. 3a) and since \(\kappa_{\pm}^{\mathrm{eff}}\geq\kappa_{0}^{\mathrm{eff}}\), we conclude that the time scale for this quasi-stabilization is roughly given by \(1/\kappa_{-}^{\mathrm{eff}}\approx 2/[\kappa_{2}-\operatorname{Re}(\sqrt{\kappa_{2}^{2}-32g^{2}})]\). To arrive at a more accurate expression for the quasi-stabilization time, we first fit functions of the form \[r_{1}^{\mathrm{fit}}(t) =\frac{r_{1}^{\star}}{2}\mathrm{e}^{-y_{r_{1}}\kappa_{1}t}\left\{\mathrm{e}^{-\kappa_{-}^{\mathrm{eff}}t/2}\left[1-3\cos\bigl{(}\delta_{-}^{\mathrm{eff}}t\bigr{)}\right]+2\right\},\] \[\mathcal{E}_{13}^{\mathrm{fit}}(t) =\mathcal{E}_{13}^{\star}\mathrm{e}^{-y_{\mathcal{E}_{13}}\kappa_{1}t}\left[1-\mathrm{e}^{-\kappa_{-}^{\mathrm{eff}}t}\cos^{2}(\delta_{-}^{\mathrm{eff}}t)\right], \tag{23}\] to time traces similar to those in Fig. 3a and find \(y_{r_{1}}\approx 0.75\) and \(y_{\mathcal{E}_{13}}\approx 1.3\). Although these functions neglect the polynomial-in-time solution at the EP-2, they capture the main features of the over- and underdamped dynamics, and hence are accurate enough for the following analysis. Next, we define the quasi-stabilization time \(t_{\alpha}\) as the earliest time instant after which the quantity \(\alpha=r_{1},\mathcal{E}_{13}\) stays within an uncertainty \(\sigma_{\alpha}\) from the ideal value \(\alpha^{\star}\mathrm{e}^{-y_{\alpha}\kappa_{1}t_{\alpha}}\), where we take into account also the slow decay of the maximum attainable value owing to finite \(\kappa_{1}\). More precisely, \[t_{\alpha}=\min\{t|\alpha^{\star}\mathrm{e}^{-y_{\alpha}\kappa_{1}t}-\tilde{\alpha}(t)\leq\sigma_{\alpha}\}, \tag{24}\] where \(\tilde{\alpha}(t)\) is the lower envelope of the possibly oscillating \(\alpha(t)\). Note that by this definition, \(\tilde{\alpha}(t)=\alpha(t)\) in the critically damped and overdamped dynamics.
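The effective-mode expressions above can be checked numerically; the following minimal sketch (Python/NumPy, our own naming) evaluates the detunings and decay rates and locates the critical damping point \(\kappa_{2}=4\sqrt{2}g\approx 5.657g\) used in Fig. 3a:

```python
import numpy as np

def effective_modes(kappa2, g=1.0, kappa1=1e-3):
    """Approximate eigenmode detunings and decay rates for kappa1 << kappa2."""
    s = np.sqrt(complex(kappa2**2 - 32 * g**2))
    detunings = (s.imag / 4, -s.imag / 4, 0.0)                    # delta_+, delta_-, delta_0
    rates = (kappa2 / 2 + s.real / 2, kappa2 / 2 - s.real / 2, kappa1)
    return detunings, rates

print(4 * np.sqrt(2))            # EP-2 at kappa2/g = 5.657...
print(effective_modes(5.658))    # nearly degenerate kappa_+ and kappa_-; hidden mode at kappa1
```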
Figure 3: (a) Dynamics of the squeezing parameter \(r_{1}\) and effective thermal occupation \(\bar{N}_{1}\) of resonator R1 and the logarithmic negativity between R1 and R3, \(\mathcal{E}_{13}\), for the indicated values of the damping rate of R2, \(\kappa_{2}/g\). The shown data correspond to a crossover from underdamped to overdamped dynamics, with critical damping at \(\kappa_{2}/g=5.658\). The frequencies of the resonator modes are chosen as \(\omega_{1}/g=\omega_{2}/g=\omega_{3}/g=5000\), the other damping rates as \(\kappa_{1}/g=\kappa_{3}/g=10^{-3}\), and the initial squeezing parameter of R3 as \(r=1\). The corresponding values of \(\varepsilon_{2}\) as defined in Eq. (12) are \(\varepsilon_{2}=0.2\) (gray curves), \(\varepsilon_{2}=2.0\) (blue curves) and \(\varepsilon_{2}=4.0\) (red curves). In the chosen parameter regime, the results are essentially independent of the resonator–resonator coupling strength \(g\). (b) Maximum achieved quantities in temporal evolutions corresponding to (a) at the critical damping as functions of the initial squeezing parameter \(r\) for selected values of \(\kappa_{1}/g\). In all panels, dashed lines correspond to long-time values in the limit \(\kappa_{1}/g\to 0\), see Eq. (22).

In Fig. 4c, we show the dependence of the quasi-stabilization time \(t_{\alpha}\) on the dissipation rate \(\kappa_{2}\) for an error \(\sigma_{\alpha}=10^{-5}\) as obtained from the solutions of the temporal evolution of the system similar to those in Fig. 3a. The shortest quasi-stabilization times are obtained in the vicinity of the EP-2 owing to the peak in \(\kappa_{-}^{\text{eff}}\) illustrated in Fig. 4b. Using the lower envelopes of the fitting functions (23) in Eq. (24), one can estimate the quasi-stabilization time as \[t_{\alpha}\approx\frac{\log\left(\frac{\alpha^{*}}{\sigma_{\alpha}}\right)}{y_{\alpha}\kappa_{1}+z_{\alpha}\kappa_{-}^{\text{eff}}}, \tag{25}\] with \(z_{r_{1}}\approx 0.5\) and \(z_{\mathcal{E}_{13}}\approx 1\). Therefore, \(t_{\alpha}\) tends to scale logarithmically with the inverse of the desired uncertainty.

## V Fast reset near exceptional points

As the final application of EPs, we discuss the reset of the resonator chain to its ground state \(|0\rangle_{1}|0\rangle_{2}|0\rangle_{3}\). Typically, stronger dissipation leads to faster decay, but in our system, where the coupling between the different resonators is weak compared with the excitation frequencies of the bare resonators, the critical dynamics plays an important role. Similar features are prone to arise in a quantum register of several coupled qubits. To quantitatively study the accuracy of the reset, we define the infidelity \[\mathcal{I}_{\text{ss}}(\hat{\rho})=1-\mathcal{F}_{\text{ss}}(\hat{\rho}), \tag{26}\] where \(\mathcal{F}_{\text{ss}}(\hat{\rho})=\langle 0|_{1}\langle 0|_{2}\langle 0|_{3}\hat{\rho}|0\rangle_{1}|0\rangle_{2}|0\rangle_{3}\) is the overlap probability between an arbitrary three-mode state \(\hat{\rho}\) and the ground state. For multimode Gaussian states with null mean vector \(\langle\mathbf{\hat{x}}\rangle\), \(\mathcal{F}_{\text{ss}}\) can be directly computed from the covariance matrix \(\mathbf{V}\), which for the present case becomes [3] \[\mathcal{F}_{\text{ss}}=\frac{1}{\sqrt{\det\left(\mathbf{V}+\mathbf{V}_{\text{ss}}\right)}}, \tag{27}\] where \(\mathbf{V}_{\text{ss}}\) is given in Eq. (10). An optimized reset is achieved with the set of free parameters producing the fastest decay to the ground state, i.e., the minimal \(\mathcal{I}_{\text{ss}}\) in a given time. Figure 5 shows the reset infidelity for different parameter values and for an initial state which is obtained by waiting for a preparation time \(\tau_{\text{s}}\) at the EP-2 after squeezing the vacuum at resonator R3 by a finite \(r\). Note that if \(\tau_{\text{s}}=0\), one has the initial squeezed state with the covariance matrix given by Eq. (21), and with \(\tau_{\text{s}}=8/g\), one prepares an initial state with entanglement and squeezing split between R1 and R3, see Fig. 3a. In Fig. 5a, we show the dependence of \(\mathcal{I}_{\text{ss}}\) on the decay rates \(\kappa_{2}\) and \(\kappa_{3}\) in the region corresponding to the shaded area in Fig. 2a for the above-mentioned preparation times and immediately following reset times \(\tau_{\text{r}}\).
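A minimal sketch of Eqs. (26) and (27) follows (Python/NumPy; we assume here that the steady-state CM of Eq. (10) is the three-mode vacuum, \(\mathbf{V}_{\text{ss}}=\mathbb{1}/2\), consistent with Eq. (21) at \(r=0\); the helper name is ours):

```python
import numpy as np

def reset_infidelity(V):
    """Eqs. (26)-(27): infidelity of a zero-mean Gaussian state w.r.t. the vacuum."""
    V_ss = 0.5 * np.eye(V.shape[0])        # assumed vacuum steady-state CM (hbar = 1)
    return 1.0 - 1.0 / np.sqrt(np.linalg.det(V + V_ss))

# The vacuum itself has zero infidelity; the initial state of Eq. (21) does not.
r = 1.0
V0 = 0.5 * np.diag([1, 1, 1, 1, np.exp(2 * r), np.exp(-2 * r)])
print(reset_infidelity(0.5 * np.eye(6)), reset_infidelity(V0))
```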
Although the regions of low infidelity are relatively broad if all squeezing is concentrated in R3, so that no entanglement is present, we observe a narrowing of such regions if \(\tau_{\text{s}}=8/g\). These regions tend to cover the EP-3 and follow the real components of the '\(--\)' branch of \(f(\varepsilon_{3})\) as \(\varepsilon_{3}\) is increased. Such a feature is even more prominent for long reset times naturally leading to lower reset infidelities. Note from Fig. 2b that this branch tends to produce highly dissipative normal modes for \(\varepsilon_{3}>2\). In contrast, at least one decay rate produced by the '\(+-\)' and '\(++\)' branches is slow even with increasing \(\varepsilon_{3}\), rendering such branches less favorable for the reset. Figure 5b shows the reset infidelity \(\mathcal{I}_{\text{ss}}\) as a function of the reset times \(\tau_{\text{r}}\) at the EP-3 for different initial states. In all displayed cases, low infidelities \(\mathcal{I}_{\text{ss}}\) are indeed achieved beyond \(\tau_{\text{r}}\sim 6/g\), owing to the exponential dependence on \(\tau_{\text{r}}\). For such reset times, the distribution of squeezing and entanglement tends to have a minor relative effect on the reset performance. This is in stark contrast with the short-reset-time cases, where the decay towards the ground state tends to significantly accelerate if the initial squeezing is poorly distributed, remaining mostly in R3. We observe that the reset performance is degraded for small ratios of \(\kappa_{1}/g\) and for increasing initial squeezing parameters as displayed in Fig. 5c. In such scenarios, for a finite reset time, the infidelity tends to grow asymptotically to unity in the limit \(r\to\infty\).

Figure 4: (a) Effective frequency detunings and (b) effective decay rates of the eigenmodes of the coupled system as functions of the decay rate of resonator R2, \(\kappa_{2}\), in units of the resonator–resonator coupling strength \(g\). (c) Time \(t_{\alpha}\) to yield quasi-stable squeezing (filled circles, \(\alpha=r_{1}\)) and entanglement (filled squares, \(\alpha=\mathcal{E}_{13}\)) within an uncertainty \(\sigma_{\alpha}=10^{-5}\), see main text. The dashed lines represent corresponding results from the fit functions of Eq. (23). In all panels, the parameters are chosen as in Fig. 3a and the colored regions separate the underdamped from the overdamped dynamics, with critical damping at \(\kappa_{2}/g=5.658\), corresponding to an EP–2.

## VI Discussion

We observed that fast generation of entanglement and propagation of squeezing in a linear chain of three superconducting resonators may benefit from the detailed understanding of critical damping in the system. Here, the highly dissipative resonator R2 acts as an incoherent entanglement generator and squeezing splitter at the cost of reducing the purity of the local states through the increase of their effective temperatures. The role of critical damping towards stabilization has also been acknowledged recently in an autonomous quantum thermal machine with two qubits [69]. The stabilization of squeezed states through reservoir engineering in superconducting circuits has been recently reported in [12]. We highlight that the scheme in our paper differs from typical two-mode squeezing operations, since it arises from the combination of dissipation and only a single-mode squeezing source available at the beginning of the dynamics, thus being also distinct from conventional reservoir-engineering protocols.
On the other hand, we do not need continuous driving terms since the structure of couplings and dissipation of the system promotes a separation of time scales for the decay of the normal modes. We explicitly show that it can be beneficial to fine-tune \(\kappa_{2}\) near a particular EP-2 instead of only roughly assuming the conditions \(\kappa_{j}\ll\kappa_{2},g\), for \(j=1,3\). The results shown in Figs. 3 and 4 also suggest that concatenating similar structures can be used for fast and stable distribution of entanglement to every other node in a photonic network. Although they spoil the Gaussian features of the system [70; 71], entanglement distillation protocols [72] may be used in such cases to increase the amount of entanglement shared by the nodes. Particular low-order EPs of high-dimensional systems may be used to speed up the generation of quasi-stable states, and hence they may have potential use cases in quantum protocols, although the open-system-degeneracy map in such cases becomes more intricate. Regarding the unconditional dissipative reset of the system, the role of critical damping becomes more evident. Here, the region near the EP-3, and also the region following a particular EP-2 branch, is a reasonable choice of parameters to produce a substantial performance enhancement of the reset. Since the covariance matrices of the vacuum state and a product of coherent states are identical, such regions in the parameter space could also be used to promote unconditional fast stabilization of coherent states with a proper inclusion of driving terms in the system Hamiltonian.

Let us present typical experimental parameters of the circuit of Fig. 1b that could reproduce the findings of this work. For a resonance frequency of \(\omega/(2\pi)=5.0\) GHz, the simulated values of the coupling strength and the lowest decay rate are \(g/(2\pi)=1.0\) MHz and \(\kappa_{1}/(2\pi)=1.0\) kHz, respectively.

Figure 5: (a) Reset infidelity \(\mathcal{I}_{\text{ss}}\) of degenerate resonators as a function of the dimensionless decay rate offsets \(\text{Re}(\varepsilon_{3})\) and \(\text{Re}(\varepsilon_{2})\) for selected choices of preparation times \(\tau_{\text{s}}\) (top and bottom panels) and reset times \(\tau_{\text{r}}\) (left and right panels). During the time interval \(\tau_{\text{s}}\), the system is set at the EP–2 with \(\text{Re}(\varepsilon_{3})=0\) and \(\text{Re}(\varepsilon_{2})=2\). Solid curves on top of the contour plots show the components of the EP branches ‘++’ (blue), ‘+\(-\)’ (gray), and ‘\(--\)’ (green) in the \(\text{Re}(\varepsilon_{3})\)–\(\text{Re}(\varepsilon_{2})\) parameter space as in the shaded region of Fig. 2a, with the EP–3 indicated by dashed circles. The other parameters are \(\omega_{j}/g=5000\), \(j=1,2,3\), \(\kappa_{1}/g=10^{-3}\), and \(r=1\). (b) Reset infidelity \(\mathcal{I}_{\text{ss}}\) at the EP–3 as a function of reset times \(\tau_{\text{r}}\) (in units of \(g^{-1}\)) for different preparation times \(\tau_{\text{s}}\) and decay rate \(\kappa_{1}/g=10^{-3}\). Solid (dashed) curves show data for \(r=1.0\) (\(r=2.0\)). (c) Reset infidelity \(\mathcal{I}_{\text{ss}}\) at the EP–3 as a function of squeezing parameter \(r\) for different reset times \(\tau_{\text{r}}\) and for preparation time \(\tau_{\text{s}}=8/g\). Solid (dashed) curves show data for \(\kappa_{1}/g=10^{-3}\) (\(\kappa_{1}/g=10^{-1}\)). The remaining parameters are chosen as in (a).
Such resonance frequencies and coupling strengths have been routinely achievable in experiments for more than a decade, and the quality factor of five million implied by the lowest decay rate can be achieved with state-of-the-art fabrication techniques. The EP-2 used for stabilization is thus achieved with \(\kappa_{2}/(2\pi)\approx 5.66\) MHz and \(\kappa_{3}/(2\pi)=1.0\) kHz, while the EP-3 is reached with \(\kappa_{2}/(2\pi)\approx 2.83\) MHz and \(\kappa_{3}/(2\pi)\approx 5.66\) MHz. Even though the almost four-orders-of-magnitude tunability required to interchange between this particular EP-2 and the EP-3 may be technically challenging, the maximum achievable decay rates with the QCR are beyond the ones considered here and their demonstrated on/off ratios are close to these requirements [23].

## VII Conclusions

We presented the theory of exceptional-point-related phenomena for continuous-variable systems described entirely by their second moments, consequently capturing different non-classical features and non-locality largely neglected in previous work. For a linear chain of three lossy superconducting resonators, we analytically obtained its open-system-degeneracy map and observed that different parameter sets yielding different exceptional points can be used to identify sweet spots for the optimization of squeezing propagation, entanglement generation, and reset. More precisely, we assessed the role of critical dynamics for dissipative state synthesis by numerically simulating the temporal evolution of the covariance matrix of the system. The region of the parameter space considered in the simulations is physically motivated by recent experimental advances in dissipation-tunable devices embedded in superconducting circuits. We found that the quasi-stabilization into mixed bipartite entangled states generated from an initially squeezed resonator R3 is optimized in the vicinity of a particular low-dissipative EP-2 produced with symmetric decay rates of resonators R1 and R3 [see the '++' branch of \(f(\varepsilon)\) in Eq. (13)]. In such scenarios, we observed that the time scale for this quasi-stabilization is minimized for \(\kappa_{2}\approx 4\sqrt{2}g\) and \(\kappa_{1},\kappa_{3}\ll\kappa_{2}\). Using the Jordan decomposition of the dynamical matrix, we obtained analytical bounds for the maximum achievable quasi-stable squeezing-splitting capacity and logarithmic negativity. Remarkably, all residual squeezing of the central resonator is removed within the quasi-stabilization time scales, and consequently, the choice of EP-2 also quickly removes the entanglement of R2 with the other resonators. Furthermore, we investigated the dissipative reset of such non-classical states to the ground state. The region in the parameter space producing the lowest reset infidelities at given reset times \(\tau_{\mathrm{r}}\) requires asymmetric resonator decay rates and tends to follow a particular high-dissipative EP branch, which includes the physically attainable EP-3 [see the '\(--\)' branch of \(f(\varepsilon)\) in Eq. (13)]. In this EP-3 case, the distribution of the initial squeezing into the different resonators tends to become irrelevant for the reset performance beyond \(\tau_{\mathrm{r}}\sim 6/g\). In conclusion, this work paves the way for a deep understanding of the role of exceptional points in multimode continuous-variable systems, with potential applications in quantum technology such as in using dissipation as an ingredient for fast transfer of desired quantum properties.
For example, heat engines [73] operating with nonequilibrium reservoirs [74] and presenting quantum resources [75] arise as systems with promising near-term opportunities. Moreover, the investigation of exceptional points in such superconducting systems through involved models, see e.g. [76], is also a potential future line of research. As a final remark, we note that the role of the counter-rotating terms in the system Hamiltonian and of squeezed reservoirs on the exceptional points may also be addressed with the tools presented in Sec. II. ## Acknowledgements The authors acknowledge the Academy of Finland Centre of Excellence program (project no. 336810), European Research Council under Consolidator Grant no. 681311 (QUESS) and Advanced Grant no. 101053801 (ConceptQ).
2309.08773
Enhance audio generation controllability through representation similarity regularization
This paper presents an innovative approach to enhance control over audio generation by emphasizing the alignment between audio and text representations during model training. In the context of language model-based audio generation, the model leverages input from both textual and audio token representations to predict subsequent audio tokens. However, the current configuration lacks explicit regularization to ensure the alignment between the chosen text representation and the language model's predictions. Our proposal involves the incorporation of audio and text representation regularization, particularly during the classifier-free guidance (CFG) phase, where the text condition is excluded from cross attention during language model training. The aim of this proposed representation regularization is to minimize discrepancies in audio and text similarity compared to other samples within the same training batch. Experimental results on both music and audio generation tasks demonstrate that our proposed methods lead to improvements in objective metrics for both audio and music generation, as well as an enhancement in the human perception for audio generation.
Yangyang Shi, Gael Le Lan, Varun Nagaraja, Zhaoheng Ni, Xinhao Mei, Ernie Chang, Forrest Iandola, Yang Liu, Vikas Chandra
2023-09-15T21:32:20Z
http://arxiv.org/abs/2309.08773v1
# Enhance Audio Generation Controllability Through Representation Similarity Regularization

###### Abstract

This paper presents an innovative approach to enhance control over audio generation by emphasizing the alignment between audio and text representations during model training. In the context of language model-based audio generation, the model leverages input from both textual and audio token representations to predict subsequent audio tokens. However, the current configuration lacks explicit regularization to ensure the alignment between the chosen text representation and the language model's predictions. Our proposal involves the incorporation of audio and text representation regularization, particularly during the classifier-free guidance (CFG) phase, where the text condition is excluded from cross attention during language model training. The aim of this proposed representation regularization is to minimize discrepancies in audio and text similarity compared to other samples within the same training batch. Experimental results on both music and audio generation tasks demonstrate that our proposed methods lead to improvements in objective metrics for both audio and music generation, as well as an enhancement in the human perception for audio generation.

Yangyang Shi, Gael Le Lan, Varun Nagaraja, Zhaoheng Ni, Xinhao Mei, Ernie Chang, Forrest Iandola, Yang Liu, Vikas Chandra (Meta AI)

_Index terms_: Audio Generation, Music Generation, Representation Regularization

## 1 Introduction

Generating sound effects, music, and speech to meet specific requirements holds immense importance as a pivotal tool in content creation spanning various domains, including augmented, virtual, and mixed reality, video game development, and movie production. The advent of recent neural generative models has brought about a transformative shift in the landscape of digital content generation. Drawing inspiration from the remarkable progress in image generation [1, 2], the realm of audio generation has undergone a paradigm shift - transitioning from conventional signal processing approaches to neural generative models [3, 4, 5, 6, 7, 8, 9, 10]. Just as in the case of text-to-image generation models [1, 11], harnessing the potential of diffusion probability models [12, 13], the studies [9, 14, 15, 16, 4, 5, 17, 18] have showcased impressive capacity in the realms of speech synthesis, sound effects creation, and music generation. Alongside the diffusion-based approach, a parallel avenue has been pursued using transformer-based language models [19], which have also exhibited exceptional performance in audio generation tasks [20, 21, 22, 8, 6, 7]. In language-model-driven approaches like MusicGen [8] and AudioGen [6], raw audio is first encoded into discrete tokens via a neural audio compression model (e.g., [23, 24]). This compression model is trained end-to-end to compress and reconstruct the input audio from discrete tokens with high quality and minimal perceptual loss. The generation model then employs an autoregressive transformer-decoder language model. The language model operates on the discrete audio tokens from the first phase and is conditioned on text inputs. Text is processed into an embedding representation using a text encoder pre-trained on a large text corpus, such as T5 [25]. The text representation is used for cross-attention in the language model training.
The language model is trained with a cross-entropy loss to predict the next discrete audio token based on the previous audio tokens and the text representation. However, in the whole training process, there is no regularization to enforce that the next-token prediction fully leverages the representations from both the audio tokens and the conditioning text. As a consequence, the generated audio often is not fully aligned with the provided text prompt. For example, music generated from the description "_Highly rhythmic orchestral piece illustrating wonder and awe. Features staccato violins, cellos, basses, trombone and grand piano_" often misses one or more instruments from the description, and the sound effect generated from the condition "_the sound of a ping pong ball bounce back once from the hard wood floor_" often contains multiple bounces. This paper introduces a method aimed at improving the training of the generation model to effectively capture representations from the text conditions. This is achieved by regularizing the similarity between text and audio representations. Language model training comprises two modes: text-conditioned training and classifier-free guidance (CFG) training [26, 6]. In CFG, the text condition is omitted during language model training. We enhance the audio and text representation similarity by reducing discrepancies in audio and text similarity compared to other samples within the same training batch. Experimental results in music and sound effects generation demonstrate the effectiveness of the proposed approach, showcasing improvements in the Fréchet audio distance (FAD) computed with a VGG classifier [27], the Kullback-Leibler (KL) divergence computed with the PaSST model [28], the text-audio alignment score based on the contrastive language audio pretrained (CLAP) models [29], and human subjective evaluation for audio generation.

## 2 Related Work

This study applies the language model approach presented in works such as [20, 21, 22, 8, 6, 7], in which a compression model discretizes audio into tokens for training and then decodes these tokens to audio, and the language model learns to generate the audio tokens. However, our emphasis lies in augmenting the semantic correlation between the provided text descriptions and the generated audio. This enhancement is built upon MusicGen [8] and AudioGen [6] for language-model-driven audio generation. To model the representation similarity between text and audio, one related work is CLAP [29], which uses a contrastive loss. However, we found that using the contrastive loss of CLAP for generation model training did not improve the performance. Instead, we propose a new approach that first computes the representation similarities of the audio and the text between different samples, and then minimizes the discrepancies between the audio similarities and the text similarities. Additionally, we found that max pooling is better than average pooling for obtaining the sequence-level representation from the individual time-step outputs.

## 3 Representation Regularization

### Language model based audio generation

The language model based audio generation model is composed of several pivotal elements as shown in Fig. 1. Firstly, it employs a compression model, such as the EnCodec model [30, 23], to encode the raw audio data into a discrete multi-stream sequence of tokens \(a_{k,i}\).
Here \(i\in[1,T_{a}]\) and \(T_{a}\) is the length of the audio token sequence, while \(k\in[1,K]\), indicating the particular codebook indexed as the \(k\)-th. Additionally, the model incorporates a pre-trained text encoder, which transforms the text input into a sequence of embedding representations identified as \(v_{j}\), where \(j\in[1,T_{v}]\) and \(T_{v}\) corresponds to the length of the sequence containing text embedding representations. Lastly, there is a language model component that is a stack of Transformer layers. The language model leverages both the text embedding representation and the preceding audio tokens to generate the probability distribution for the subsequent audio token as \(p_{\theta}(a_{k,i+1}|a_{k,1},...,a_{k,i},v_{1},...,v_{T_{v}})\). To render audio generation more manageable, the generation of multi-stream audio tokens is trained in parallel, resulting in a substantial reduction in the effective sequence length during model training. The loss for the language model is the sum of the cross-entropy losses over the streams \(k\): \[L_{cond}=-\sum_{k=1}^{K}\sum_{i=1}^{T_{a}}log(p_{\theta}(a_{k,i+1}|a_{k,1},...,a_{k,i},v_{1},...,v_{T_{v}})) \tag{1}\]

### Representation regularization

However, the cross-entropy loss in the language model lacks an explicit mechanism to enforce that the audio token prediction aligns with the provided text conditions. Furthermore, the correlation between text and audio becomes even looser as the classifier-free guidance (CFG) method [26, 6, 8] is used in the training to regulate the balance between sample quality and diversity. Employing CFG involves training the language model both conditionally and unconditionally. Similar to AudioGen [6], 10\(\%\) of the training samples have their accompanying text omitted during language model training. In the unconditional case, the loss is simply \[L_{uncond}=-\sum_{k=1}^{K}\sum_{i=1}^{T_{a}}log(p_{\theta}(a_{k,i+1}|a_{k,1},...,a_{k,i})) \tag{2}\] In this work, the proposed representation regularization strengthens the correlation between the audio representation and the text representation while still maintaining the effect of the CFG method of training the language model unconditionally on text. Given a batch of training samples, a pooling method \(\mathrm{F}\) is used to obtain the text sequence representation \(T^{b}=\mathrm{F}(v_{1}^{b},...,v_{T_{v}}^{b})\) and the audio sequence representation \(A^{b}=\mathrm{F}(u_{1}^{b},...,u_{T_{a}}^{b})\) for a particular sample \(b\) in the batch. In our experiments, max pooling achieved the best results. Rather than directly mapping the text and audio representations to the same space and maximizing the similarity between audio and text as in CLAP [29], we propose to minimize discrepancies in audio and text similarity compared to other samples within the same training batch as follows: \[T^{b,\hat{b}}=\frac{T^{b}\cdot T^{\hat{b}}}{||T^{b}||\,||T^{\hat{b}}||} \tag{3}\] \[A^{b,\hat{b}}=\frac{A^{b}\cdot A^{\hat{b}}}{||A^{b}||\,||A^{\hat{b}}||} \tag{4}\] \[L_{rr}=\frac{\sum_{b\neq\hat{b}}(T^{b,\hat{b}}-A^{b,\hat{b}})^{2}}{B(B-1)} \tag{5}\] Here \(T^{b,\hat{b}}\) denotes the representation similarity between the text inputs of samples \(b\) and \(\hat{b}\), and \(A^{b,\hat{b}}\) denotes the representation similarity between the audio of samples \(b\) and \(\hat{b}\). \(B\) is the batch size.

Figure 1: Illustration of the language model training with cross-entropy loss and representation regularization.
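A minimal PyTorch sketch of Eqs. (3)-(5), together with the loss switch of Eq. (6) introduced next, may look as follows (the function names are ours, not from the released implementation; the weight \(\lambda=3.0\) is the value reported in the ablation study below):

```python
import torch
import torch.nn.functional as F

def representation_regularization(T, A):
    """Eqs. (3)-(5): match pairwise text and audio cosine similarities in a batch.

    T: (B, d_t) max-pooled text sequence representations.
    A: (B, d_a) max-pooled audio sequence representations.
    """
    sim_T = F.normalize(T, dim=-1) @ F.normalize(T, dim=-1).t()    # Eq. (3)
    sim_A = F.normalize(A, dim=-1) @ F.normalize(A, dim=-1).t()    # Eq. (4)
    B = T.size(0)
    off_diag = ~torch.eye(B, dtype=torch.bool, device=T.device)    # pairs with b != b_hat
    return ((sim_T - sim_A)[off_diag] ** 2).sum() / (B * (B - 1))  # Eq. (5)

def total_loss(l_cond, l_uncond, l_rr, cfg_step, lam=3.0):
    """Eq. (6): the regularizer is applied only on CFG (text-dropped) steps."""
    return l_uncond + lam * l_rr if cfg_step else l_cond
```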
The loss \(L_{rr}\) enforces that the text and the audio of one sample have the same similarity relations to the other samples. In this study, the proposed representation regularization is exclusively applied during the CFG phase. The complete model training loss is defined as follows: \[L=\begin{cases}L_{uncond}+\lambda L_{rr}&\text{if CFG is utilized}\\ L_{cond}&\text{if CFG is not used}\end{cases} \tag{6}\] Here, \(\lambda\) represents the weighting factor for the representation regularization. Note that representation regularization is only employed during regular training steps when CFG is in use. We also conducted experiments involving representation regularization in non-CFG scenarios; however, these experiments did not yield improvements in objective metrics. We believe the degradation may be attributed to the fact that, in the non-CFG case, the model can trivially reduce the regularization loss by copying the text representation from cross-attention as the audio representation, which hinders language model learning.

## 4 Experiments

In this work, we use two sets of experiments, covering sound effects generation and music generation, to verify the effectiveness of the proposed methods.

### Datasets

In music generation, we utilize a total of 20K hours of licensed music, which comprises an internal compilation of 10K high-quality music tracks and 390K instrument-only music tracks from ShutterStock1 and Pond52. All datasets are full-length music with a 32 kHz sampling rate, accompanied by comprehensive metadata such as textual descriptions, genre categorizations, BPM, and tags. Our evaluation uses the MusicCaps benchmark [7]. The MusicCaps benchmark comprises 5.5K samples, including a subset of 1K samples balanced across various genres. We report objective metrics on the unbalanced subset as in [8]. Footnote 1: www.shutterstock.com/music Footnote 2: www.pond5.com For sound effect model training, a dataset encompassing 4K hours of training data is employed. This dataset incorporates resources like AudioSet [31], BBC sound effects3, AudioCaps [32], Clotho v2 [33], VGG-Sound [34], FSD50K [35], and Free To Use Sounds4. All audio files are sampled at a rate of 16 kHz. Footnote 3: https://sound-effects.bbcrewind.co.uk/ Footnote 4: https://www.freetousesounds.com/all-in-one-bundle/ We adopt a preprocessing methodology akin to [6] for the textual descriptions. To begin, we utilize multi-label annotations from datasets such as AudioSet, VGG-Sound, and FSD50K. Pseudo-sentences are constructed by concatenating lists of tags linked with audio samples. Subsequently, we eliminate stop words and numbers, and lemmatize the natural language captions available in datasets including AudioCaps, Clotho v2, Free To Use Sounds, and BBC Sound Effects. Lastly, samples containing the term "speech" in their tag or caption are filtered out, given that speech predominates in the data.

### Setup

Our approach involves a non-causal five-layer EnCodec model operating at 32 kHz for monophonic music generation and at 16 kHz for sound effects generation. These EnCodec models maintain a frame rate of 50 Hz, commencing with an initial hidden size of 64, which doubles across the model's five layers. Embeddings are quantized using a residual vector quantizer (RVQ) comprising four quantizers, each featuring a codebook size of 2048. These EnCodec models are trained on the same audio data as used for the language model training. The transformer models used in this work have 300M parameters.
To enhance efficiency with long sequences, we employ memory-efficient Flash attention [36] from the xFormers package [37], improving both speed and memory utilization. For ablations, we consistently employ the sound effects generation model setup. For music generation model training, 30-second audio segments are used, randomly sampled from the complete track. In sound effects generation training, 10-second audio clips are used. Model training spans 100K steps, utilizing the AdamW optimizer [38], a batch size of 192 examples, \(\beta_{1}=0.9\), \(\beta_{2}=0.95\), a decoupled weight decay of 0.1, and gradient clipping of 1.0. A cosine learning rate schedule is employed, with a warmup of 4K steps. Furthermore, an exponential moving average is applied, characterized by a decay factor of 0.99. The model training employs mixed-precision bfloat16 with Fully Sharded Data Parallel (FSDP). We used 16 GPUs and 32 GPUs for sound effects generation and music generation training, respectively. In the sampling process for inference, we adopt top-k sampling [39], retaining the top 250 tokens and applying a temperature of 1.0.
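For reference, top-k sampling with temperature as used here can be sketched as follows (PyTorch; the helper name is ours):

```python
import torch

def sample_top_k(logits, k=250, temperature=1.0):
    """Sample next audio tokens from the k most likely entries of the logits.

    logits: (..., vocab_size) unnormalized scores from the language model.
    """
    values, indices = torch.topk(logits / temperature, k, dim=-1)
    probs = torch.softmax(values, dim=-1)
    choice = torch.multinomial(probs.reshape(-1, k), num_samples=1)
    return indices.reshape(-1, k).gather(-1, choice).reshape(logits.shape[:-1])
```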
### Ablation Study

Table 1 presents the results of the ablation study conducted on the sound effects generation model using the AudioCaps dataset. The optimal model was trained with representation regularization based on max pooling, employing a weight parameter of \(\lambda=3.0\) and allocating \(10\%\) of the training data for CFG training. In contrast, the use of average pooling-based sequence representation regularization did not demonstrate any improvement over the baseline. Furthermore, Table 1 reaffirms the significant role of CFG training in reducing both the FAD and KL scores.

### Music Generation

Table 2 gives the objective metrics on the MusicCaps data. We report the original metrics for MusicLM, Noise2Music, and the MusicGen 1.5B model without melody conditioning. Notably, the introduction of the proposed representation regularization results in enhancements across all metrics. Our 300M parameter model, which incorporates representation regularization, surpasses the performance of the MusicGen 1.5B parameter model in terms of FAD and CLAP.

### Sound Effects Generation

The sound effects generation results on AudioCaps are shown in Table 3. The trend is the same as in the music generation experiments. The representation regularization improves the model performance on FAD, KL, and CLAP. The results for AudioGen are taken from its GitHub model card5. Footnote 5: https://github.com/facebookresearch/audiocraft/blob/main/model_cards

### Human preference evaluation

Table 4 gives the subjective metrics for the sound and music generation models. Our subjective evaluation employed a blind pairwise comparison test, where evaluators were presented with two samples generated by distinct models, all based on the same text prompt. This comparison was conducted across a set of 20 text prompts, and eight human evaluators were tasked with determining their preference for the sample they believed exhibited better quality and better alignment with the provided prompt in each pair. Notably, both music and sound effects generation, when incorporating representation regularization, garnered higher user preference ratings. A possible explanation for the more significant trend in the sound effects generation is that music tends to be more abstract than sound effects. Consequently, any discrepancies in alignment with the provided text may not be as readily apparent to human evaluators.

## 5 Conclusion

This paper has introduced representation regularization to improve controllability over audio generation by prioritizing the alignment between audio and text representations during model training. The proposed method integrates the audio and text similarity regularization, particularly during the classifier-free guidance (CFG) phase, wherein the text condition is excluded from cross attention during language model training. The experimental results, conducted across various audio and music generation tasks, demonstrate that the proposed representation regularization leads to improvements in objective metrics for both audio and music generation. Moreover, these improvements translate into a noticeable enhancement in human perception regarding audio generation quality and alignment.

\begin{table} \begin{tabular}{c c c c} \hline \hline Methods & FAD(\(\downarrow\)) & KL(\(\downarrow\)) & CLAP(\(\uparrow\)) \\ \hline MusicLM [7] & 4.0 & - & - \\ Noise2Music [40] & 2.1 & - & - \\ MusicGen 1.5B [8] & 5.0 & 1.31 & 0.28 \\ \hline ours 300M w/o rr & 5.28 & 1.36 & 0.30 \\ ours 300M w/ rr & 4.83 & 1.32 & 0.31 \\ \hline \hline \end{tabular} \end{table} Table 2: Music generation using MusicCaps. ’w/ rr’ and ’w/o rr’ mean with and without representation regularization, respectively.

\begin{table} \begin{tabular}{c c c} \hline \hline Methods & music & sound effects \\ \hline ours w/o rr & 48\% & 33\% \\ ours w/ rr & 52\% & 67\% \\ \hline \hline \end{tabular} \end{table} Table 4: Human preference evaluation.
\begin{table} \begin{tabular}{c c c c} \hline \hline Methods & FAD(\(\downarrow\)) & KL(\(\downarrow\)) & CLAP(\(\uparrow\)) \\ \hline AudioGen [6] & 1.77 & 1.58 & 0.30 \\ \hline ours w/o rr & 1.52 & 1.60 & 0.30 \\ ours w/ rr & 1.43 & 1.57 & 0.31 \\ \hline \hline \end{tabular} \end{table} Table 3: Sound effects generation using AudioCaps. ’w/ rr’ and ’w/o rr’ mean with and without representation regularization, respectively.
2309.11222
Generalized Few-Shot Point Cloud Segmentation Via Geometric Words
Existing fully-supervised point cloud segmentation methods suffer in the dynamic testing environment with emerging new classes. Few-shot point cloud segmentation algorithms address this problem by learning to adapt to new classes at the sacrifice of segmentation accuracy for the base classes, which severely impedes its practicality. This largely motivates us to present the first attempt at a more practical paradigm of generalized few-shot point cloud segmentation, which requires the model to generalize to new categories with only a few support point clouds and simultaneously retain the capability to segment base classes. We propose the geometric words to represent geometric components shared between the base and novel classes, and incorporate them into a novel geometric-aware semantic representation to facilitate better generalization to the new classes without forgetting the old ones. Moreover, we introduce geometric prototypes to guide the segmentation with geometric prior knowledge. Extensive experiments on S3DIS and ScanNet consistently illustrate the superior performance of our method over baseline methods. Our code is available at: https://github.com/Pixie8888/GFS-3DSeg_GWs.
Yating Xu, Conghui Hu, Na Zhao, Gim Hee Lee
2023-09-20T11:24:33Z
http://arxiv.org/abs/2309.11222v1
# Generalized Few-Shot Point Cloud Segmentation Via Geometric Words

###### Abstract

Existing fully-supervised point cloud segmentation methods suffer in the dynamic testing environment with emerging new classes. Few-shot point cloud segmentation algorithms address this problem by learning to adapt to new classes at the sacrifice of segmentation accuracy for the base classes, which severely impedes its practicality. This largely motivates us to present the first attempt at a more practical paradigm of generalized few-shot point cloud segmentation, which requires the model to generalize to new categories with only a few support point clouds and simultaneously retain the capability to segment base classes. We propose the geometric words to represent geometric components shared between the base and novel classes, and incorporate them into a novel geometric-aware semantic representation to facilitate better generalization to the new classes without forgetting the old ones. Moreover, we introduce geometric prototypes to guide the segmentation with geometric prior knowledge. Extensive experiments on S3DIS and ScanNet consistently illustrate the superior performance of our method over baseline methods. Our code is available at: [https://github.com/Pixie8888/GFS-3DSeg_GWs](https://github.com/Pixie8888/GFS-3DSeg_GWs).

## 1 Introduction

Point cloud segmentation aims at predicting the category of each point in 3D scenes represented by point clouds, and has wide applications in autonomous driving and robotics. Although fully-supervised point cloud segmentation methods (Full-3DSeg) [19, 20, 28, 11, 9, 29] have achieved impressive performance, they require large-scale annotated training data and rely on the closed-set assumption that the class distribution of the testing point cloud remains the same as that of the training dataset. However, the closed-set assumption is unrealistic in the open world where new classes arise continuously. Full-3DSeg in this challenging open-world setting thus requires large amounts of annotated data for new classes, which are time-consuming and expensive to collect. Few-shot point cloud segmentation (FS-3DSeg) [35, 14, 8] algorithms are designed to ameliorate the lack of data for novel class adaptation. Generally, FS-3DSeg first trains a model with abundant training samples of the base classes, and then targets segmenting the new classes by learning from only a small number of samples of the corresponding new classes. By adopting episodic training [26] to mimic the testing environment and specific designs for feature extraction [35], FS-3DSeg achieves promising novel class segmentation results for the query point clouds. However, in the task of point cloud segmentation, base and novel classes often appear together in one scene (see Figure 5 for examples). An ideal segmentor in practice is expected to give each point in the scene a semantic label. As a result, the FS-3DSeg setting, which only segments points of novel classes while ignoring the base classes, suffers from limited practicality. In view of the impracticality of FS-3DSeg, we introduce the generalized few-shot point cloud segmentation (GFS-3DSeg) task. As shown in Tab. 1, given the model originally trained on base classes, the objective of GFS-3DSeg is to segment both base and novel classes using merely a limited number of labeled samples for the new classes during testing.

Figure 1: **Visualization of the geometric words (GWs) on S3DIS**. The left figure shows the original point cloud and the right figure shows the points activated by a geometric word. The activated points are colored green.
Furthermore, there is no access to the base training data during testing in GFS-3DSeg, considering the issues of data privacy and storage memory limitations. In this demanding but practical context, we expect a good generalized few-shot point cloud segmentor to effectively learn to segment novel classes with few samples and also maintain the knowledge of the base classes. A potential solution to GFS-3DSeg is to use prototype learning [25, 21] to generate class prototypes of the base and novel classes as the classifier weights, which can quickly adapt to new classes and avoids the forgetting of past knowledge caused by fine-tuning. However, effective learning of the representation and classifier for the novel classes is not trivial and remains a major challenge. In this paper, we propose **geometric words1** (GWs) as the transferable knowledge obtained from the base classes to enhance the learning of the new classes without forgetting the old ones. Although different (old and new) classes contain distinct semantic representations, they usually share similar local geometric structures, as shown in Fig. 1. Based on this observation, we first mine the representation for the local geometric structures from the pretrained features of the base classes and store them as the geometric words to facilitate learning of the new classes with few examples. We then learn a geometric-aware semantic representation based on the geometric words. Specifically, the geometric-aware semantic representation is a fusion of two features: 1) _Class-agnostic geometric feature_ obtained by the assignment of low-level features to their most similar geometric words. 2) _Class-specific semantic feature_, which is the output of a feature extractor. Intuitively, our geometric-aware semantic representation allows the encoding of the transferable geometric information across classes while preserving the semantic information for effective segmentation. Footnote 1: Analogous to visual words in bag-of-words image retrieval systems [17]. We further introduce the **geometric prototype** (GP) to supplement the original semantic prototype in the prototype learning. Particularly, the geometric prototype refers to the frequency histogram of GWs that can uniquely represent each class from the global geometric perspective even though GWs are class-agnostic. We thus leverage the GP to propose a geometric-guided classifier re-weighting module for the rectification of biased predictions originating from semantic prototypes. Specifically, we first perform minor frequency pruning on the geometric prototypes to suppress noisy responses of the geometric words for each class. Subsequently, we measure the geometric matching score between each query point and the pruned geometric prototypes, and employ these scores as geometric-guided weights. By re-weighting the semantic logits with the geometric-guided weights, our final prediction is enriched with geometric information that is transferable across classes. Such transferable information facilitates the segmentation of new classes that only have a few samples while preserving the knowledge of the base classes.
Our main contributions can be summarized as: * We are the first to study the important generalized few-shot point cloud segmentation task, which is more practical than its counterparts of the fully-supervised and few-shot settings in the dynamic testing environment. * We propose geometric words to represent diverse basic geometric structures that are shared across different classes, and a geometric-aware semantic representation that allows for generalizable knowledge encoding. * We introduce the geometric prototype to supplement the semantic prototype. We design a geometric-guided classifier re-weighting module comprising minor frequency pruning to dynamically guide the segmentation of novel classes with geometric similarity. * We conduct extensive experiments on S3DIS and ScanNet to verify the effectiveness of our method. Specifically, our method improves over the state-of-the-art FS-3DSeg method by 6 and 8 times on the 5-shot and 1-shot settings of ScanNet, respectively.

## 2 Related Work

**Generalized Few-Shot Learning.** Generalized few-shot learning (GFSL) methods commonly adopt the class prototype as the classifier weight, which unifies the recognition of both novel and base categories. To reduce the biased learning of base classes, Gidaris [6] and Ye [31] improve the classifier of the novel classes by utilizing the knowledge of base classes. Gidaris [6] propose a few-shot classification weight generator that performs attention over the classification weight vectors of the base classes. Ye [31] synthesize calibrated few-shot classifiers with a shared neural dictionary learned in the base class training stage. In this work, we study generalized few-shot point cloud segmentation, a more challenging task as it targets dense 3D point-level classification. Instead of fusing the classifier weights of novel classes with past knowledge [6, 31], we use the geometric prototype as an additional classifier weight to help calibrate the biased prediction. Moreover, we propose the geometric-aware semantic representation to facilitate the representation learning for the novel classes.

**Few-shot Semantic Segmentation.** Few-shot semantic segmentation performs segmentation of novel classes for images [12, 34, 33, 22] or point clouds [35, 8, 14] by only learning from a few support samples. The methods for few-shot image semantic segmentation can be categorized into metric-based [27, 12] and relation-based [34, 33] methods. Metric-based methods aggregate prototypes from the support set as the classifier and compute the cosine similarity with the query features. Relation-based methods concatenate the support features with the query features for dense feature comparison via a deep convolutional network. Few-shot point cloud segmentation [35] also adopts a metric-based technique by performing label propagation between the query points and the multi-prototypes of each class to infer the query point labels. Generalized few-shot image semantic segmentation (GFS-2DSeg) [25, 2] aims to segment both base and novel classes in the query images during the testing stage, which is more practical than the few-shot setting. CAPL [25] leverages the contextual cues from the support set and query images to enhance the classifier weights of the base classes. PIFS [2] fine-tunes on the support set of novel classes during the testing stage and proposes a prototype-based distillation loss to combat catastrophic forgetting of the base classes. In this paper, we study generalized few-shot point cloud semantic segmentation (GFS-3DSeg), a practical yet unexplored task.
Different from GFS-2DSeg, which assumes the annotation of the base classes is available in the novel training samples, we strictly follow the definition of GFSL in which only the annotation of the novel classes is provided in the support set. Consequently, both CAPL and PIFS fail to work properly under GFS-3DSeg since both rely on the co-occurrence of base classes in the support sets of novel classes to calibrate the imbalanced learning between base and novel classes. To solve this challenging problem, we propose to mine the representation for the basic geometric structures as the transferable knowledge to improve the representation and classifier of the novel classes.

**Geometric Primitives.** Geometric primitives are the fundamental components of 3D objects, and have been initially studied in transfer learning in 3D [3, 36]. Chowdhury [3] use microshapes as the basic geometric components to describe any 3D object in the 3D object recognition task. However, object-level annotation is not available for the query point cloud in point cloud semantic segmentation or object detection. Therefore, Zhao [36] only utilize the geometric information at the point level by enhancing the local geometric representation of each query point with a geometric memory bank. Although we also perform point-level enhancement in our geometric-aware semantic representation, different from Zhao [36], we inject geometric information into the high-level semantic representation to help the model understand the geometric structures through learning the class semantics. Moreover, we propose to use the geometric prototype as a global geometric enhancement for each class to calibrate the biased prediction during the testing stage.

## 3 Problem Formulation

In generalized few-shot point cloud semantic segmentation, a base class training dataset \(D^{b}_{\text{train}}\) and a novel class training dataset \(D^{n}_{\text{train}}\) with non-overlapping label spaces \(C^{b}\cap C^{n}=\varnothing\) are provided. The testing dataset \(D_{\text{test}}\) has a label space \(C_{\text{test}}=C^{b}\cup C^{n}\). During the training stage, the model learns the base classes \(C^{b}\) from an abundance of labeled point cloud data \(D^{b}_{\text{train}}=\left\{\left(P^{b}_{k},M^{b}_{k}\right)_{k=1}^{\left|D^{b}_{\text{train}}\right|}\right\}\), where \(\left|D^{b}_{\text{train}}\right|\) denotes the size of \(D^{b}_{\text{train}}\). Each point cloud \(P^{b}_{k}\in\mathbb{R}^{m\times d_{0}}\) contains \(m\) points with feature dimension \(d_{0}\), and \(M^{b}_{k}\) denotes the annotation of \(C^{b}\) in \(P^{b}_{k}\). During the testing stage, the model first learns \(C^{n}\) from the limited labeled data \(D^{n}_{\text{train}}=\left\{\left(P^{n,i}_{k},M^{n,i}_{k}\right)_{k=1}^{K}\right\}_{i=1}^{\left|C^{n}\right|}\), with \(K\) support point clouds per novel class \(C^{n,i}\in C^{n}\), where \(\left|C^{n}\right|\) is the number of novel classes. \(P^{n,i}_{k}\in\mathbb{R}^{m\times d_{0}}\) and \(M^{n,i}_{k}\) is the binary mask indicating the presence of \(C^{n,i}\). Note that during the testing stage, the model does not have access to \(D^{b}_{\text{train}}\). The testing dataset is \(D_{\text{test}}=\left\{\left(P^{q}_{k},M^{q}_{k}\right)_{k=1}^{\left|D_{\text{test}}\right|}\right\}\), with each query point cloud \(P^{q}_{k}\in\mathbb{R}^{m\times d_{0}}\). \(M^{q}_{k}\) represents the ground-truth annotation of \(C_{\text{test}}\) in \(P^{q}_{k}\). The goal of GFS-3DSeg is to correctly segment both \(C^{b}\) and \(C^{n}\) in the testing query point clouds.
## 4 Our Method

**Background.** We adopt prototype learning to segment both base and novel classes during the testing stage. Class prototypes are learned as the classifier weight for each class. Base class prototypes are learned in the training stage via gradient descent, and novel class prototypes are learned by aggregating the foreground features of the support set during the testing stage. We name the prototype the "semantic prototype" since it captures the semantic information of each class. The prediction of each query point is assigned the class label of the most similar semantic prototype. However, naively adopting prototype learning is insufficient to learn well on new classes due to the small support set. We thus propose the geometric-aware semantic representation and geometric-guided classifier re-weighting to help segment the new classes.

**Framework Overview.** Fig. 2 shows the overview of our framework, which consists of four main parts: a) The **geometric words (GWs)** to enhance the representation and classifier of the new classes. b) The **geometric-aware semantic representation (GSR)** based on the GWs to learn a transferable representation during the base class training stage. We first obtain the geometric feature of a point as the assignment of the GWs to the point, and then fuse the geometric feature with its corresponding semantic feature to get the final GSR. c) The **geometric prototype (GP)** to supplement the semantic prototype learned from the insufficient training samples. Specifically, a GP is the frequency histogram of the GWs assigned to the points in each class, which can uniquely describe the class from the global geometric perspective. d) The **geometric-guided classifier re-weighting (GCR)** based on GPs to provide prior knowledge of each query point belonging to the potential target classes based on the geometric similarity.

### Geometric Words

Unlike their 2D image counterparts, 3D point clouds contain complete geometric information with shared basic geometric components. Understanding these basic geometric components helps learning across old and new classes due to the shared similar local geometric structures. We thus propose geometric words as the representation of these basic geometric components, and utilize them during the training and testing stages to facilitate the learning of new classes from few-shot training point clouds. To obtain the geometric words, we pretrain the feature extractor \(E\) of attMPTI [35] on \(D^{b}_{\text{train}}\) and collect the features \(\{f_{\text{low}}\}\) of all the points belonging to \(C^{b}\). We concatenate the output features of the first three EdgeConv layers and denote the result as \(f_{\text{low}}\in\mathbb{R}^{d_{1}}\), since lower-level features contain more geometric cues. We then obtain \(H\) geometric words \(\mathcal{G}=\{g_{h}\}_{h=1}^{H}\in\mathbb{R}^{H\times d_{1}}\) by applying K-means on \(\{f_{\text{low}}\}\) to calculate the \(H\) centroids. Each \(g_{h}\) is a local aggregation of the points with similar \(f_{\text{low}}\), _i.e._, similar geometric characteristics. Fig. 1 visualizes the point-to-GW assignments by searching for points whose \(f_{\text{low}}\) is most similar to a given geometric word \(g_{h}\). As shown in Fig. 1, the horizontal planes of the chairs, tables, and sofas are all activated to the same geometric word. This suggests that our GWs are able to represent shared geometric components among different classes.
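A minimal sketch of the GW generation follows (Python; the value of \(H\) is a placeholder, and scikit-learn's K-means stands in for whatever clustering implementation was used in the original work):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_geometric_words(f_low, H=128, seed=0):
    """Cluster low-level point features of the base classes into H geometric words.

    f_low: (N, d1) concatenated EdgeConv features of all base-class points.
    Returns G: (H, d1) cluster centroids, one per geometric word.
    """
    kmeans = KMeans(n_clusters=H, n_init=10, random_state=seed).fit(f_low)
    return kmeans.cluster_centers_
```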
**Geometric-aware Semantic Representation.** Based on the GWs, we propose the geometric-aware semantic representation (GSR) to enhance the feature representation of the new classes. The GSR is a fusion of a class-agnostic Figure 2: **The overview of our proposed framework. (a) GW Generation**: shows the generation of geometric words from base class training data. **(b) Geometric-aware Semantic Representation (GSR)**: the semantic feature \(f_{\text{sem}}\) is fused with the geometric feature \(f_{\text{geo}}\) as the final representation \(f_{\text{fin}}\) for each point. **(c) GP Generation**: shows the generation of the geometric prototype \(p_{\text{geo}}^{c}\) for class \(c\). **(d) Geometric-guided Classifier Re-weighting (GCR)**: we compute the geometric matching between the geometric feature \(\hat{f}_{\text{geo}}\) and the pruned GP \(p_{\text{pgeo}}^{c}\) to find potential target classes and derive the weight \(w^{c}\) to supplement the semantic prediction \(l_{\text{sem}}^{c}\). \(l_{\text{fin}}^{c}\) is the final prediction logit of class \(c\). geometric feature \(f_{\text{geo}}\in\mathbb{R}^{H}\) and a class-specific semantic feature \(f_{\text{sem}}\in\mathbb{R}^{d_{2}}\). Specifically, the geometric feature \(f_{\text{geo}}\), which represents the geometric information of each point, is computed by the soft assignment of its feature \(f_{\text{low}}\) to the GWs as follows: \[f_{\text{geo}}=\operatorname{Softmax}\left(\left[f_{\text{low}}\cdot g_{1}, \ldots,f_{\text{low}}\cdot g_{H};\tau\right]\right), \tag{1}\] where \(\cdot\) denotes the cosine similarity between the feature and a geometric word. \(\tau\) is the temperature to sharpen the probability vector, and we empirically set it to 10. We adopt soft assignment to make it differentiable with respect to the feature extractor. We use the final output of \(E\) as the semantic feature \(f_{\text{sem}}\). Subsequently, \(f_{\text{geo}}\) and \(f_{\text{sem}}\) are concatenated and sent into a small convolution block \(E_{\text{fuse}}\) to obtain the final representation \(f_{\text{fin}}\in\mathbb{R}^{d_{3}}\) for each point as follows: \[f_{\text{fin}}=E_{\text{fuse}}\left(f_{\text{geo}}\parallel f_{\text{sem}} \right), \tag{2}\] where \(\parallel\) represents the concatenation of two vectors. During base class training, we simulate a query set and a fake novel class support set in each batch following [25] to enhance the model's adaptability to unseen environments. The optimization objective is to minimize the cross-entropy loss computed with the prototypes assembled from \(\left\{p_{\text{sem}}^{c}\mid c\in C^{b}\right\}\) and the fake novel prototypes from the simulated support set. We refer readers to [25] for a more comprehensive understanding of the training strategy. ### Geometric Prototype Although GWs are class-agnostic, their combinations are able to represent different classes in a geometric way. We visualize the frequency of GWs assigned to the points of different classes in Fig. 3(b), (c) and (d). The horizontal axis represents the index of the GWs and the vertical axis represents the normalized frequency ratio. The histogram conveys the global structure of each class via the frequency ratios of the GWs, and different classes have different histograms.
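Before formalizing the prototype built from these histograms, here is a small PyTorch sketch of Eqs. (1)-(2); the tensor shapes, the per-point MLP stand-in for \(E_{\text{fuse}}\), and multiplying the cosine similarities by \(\tau\) (one common reading of the temperature-sharpening notation) are our assumptions.

```python
import torch
import torch.nn.functional as F

def geometric_feature(f_low, G, tau=10.0):
    """Eq. (1): f_low: (N, d1) point features, G: (H, d1) geometric words."""
    sim = F.cosine_similarity(f_low.unsqueeze(1), G.unsqueeze(0), dim=-1)
    return F.softmax(tau * sim, dim=-1)       # (N, H) sharpened soft assignment

class FuseBlock(torch.nn.Module):
    """Stand-in for the small convolution block E_fuse of Eq. (2); a
    per-point MLP (equivalent to a 1x1 convolution) is assumed here."""
    def __init__(self, H=200, d2=192, d3=128):
        super().__init__()
        self.mlp = torch.nn.Sequential(torch.nn.Linear(H + d2, d3),
                                       torch.nn.ReLU())

    def forward(self, f_geo, f_sem):
        return self.mlp(torch.cat([f_geo, f_sem], dim=-1))   # f_fin, (N, d3)

# Example with random stand-ins (d1 = d2 = 192, H = 200, d3 = 128):
f_low, G = torch.randn(1024, 192), torch.randn(200, 192)
f_sem = torch.randn(1024, 192)
f_fin = FuseBlock()(geometric_feature(f_low, G), f_sem)
```

The geometric prototype introduced next aggregates exactly these per-point assignments into a class-level histogram.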
The histogram thus uniquely represents its corresponding class, and we refer to it as the geometric prototype \(p_{\text{geo}}^{c}\in\mathbb{R}^{H}\): \[p_{\text{geo}}^{c}=\frac{\sum_{i=1}^{N^{c}}\left[\hat{f}_{\text{geo}}\right]^ {c,i}}{N^{c}}, \tag{3}\] where \(N^{c}\) denotes the number of points belonging to class \(c\) in the training dataset \(D_{\text{train}}^{b}\) or \(D_{\text{train}}^{n}\), and \(\hat{f}_{\text{geo}}\in\mathbb{R}^{H}\) is the hard assignment in the form of a one-hot vector. We augment the semantic prototype \(p_{\text{sem}}^{c}\) with the geometric prototype \(p_{\text{geo}}^{c}\), as the semantic prototype primarily encodes semantic information and, as a result, becomes insufficient in representing the new classes due to limited training samples. **Geometric-guided Classifier Re-weighting.** Based on the GP, we propose the geometric-guided classifier re-weighting module to help the prediction of the novel classes. As shown in Fig. 3, the geometric word corresponding to a point on the window frame is activated in the GP of window and of the geometrically similar class door, but suppressed in that of beam, which does not have the frame structure. This implies that comparing the geometric feature of the query point with the GP can be employed as a hint for segmentation. Therefore, we compute a geometric matching score \(s^{c}\) based on the cosine similarity between a query point's geometric feature and the GP as: \[s^{c}=\mathds{1}\left[p_{\text{geo}}^{c}\cdot\hat{f}_{\text{geo}}\right]= \begin{cases}1&p_{\text{geo}}^{c}\cdot\hat{f}_{\text{geo}}>0\\ 0&\text{otherwise}\end{cases}, \tag{4}\] where \(\mathds{1}[\cdot]\) is an indicator function, and \(s^{c}=1\) indicates that the query point has the same geometric structure as class \(c\). However, the geometric matching may be negatively influenced by noisy GWs in \(p_{\text{geo}}^{c}\) introduced by the scene context. As shown in Fig. 4, although points on the wall (in red) are on a vertical plane, they are still assigned to the same GW as the points on the horizontal table plane due to adjacency. To suppress these noisy GWs in \(p_{\text{geo}}^{c}\) and improve the accuracy of the geometric matching, we propose minor frequency pruning as shown in Algorithm 1. Our motivation is that the GWs representing the typical geometric structure of a class usually contribute large frequencies to the Figure 3: Motivation for geometric-guided classifier re-weighting. For each histogram, the horizontal axis represents the index of the GWs and the vertical axis represents the normalized frequency ratio within the class or point. (a) Visualization of the geometric feature \(\hat{f}_{\text{geo}}\) of the yellow query point on the window frame. (b), (c) and (d) show the geometric prototypes of window, door and beam, respectively. The red bar denotes the GW with the same index. histogram, while the GWs introduced by the scene context have relatively low frequencies. Consequently, we remove the GWs corresponding to lower frequencies to preserve only the representative geometric structures. We denote the pruned geometric prototype as \(p^{c}_{\text{pgeo}}\). The frequency limit \(\alpha\) in Algorithm 1 denotes the fraction of the total frequency to keep in the original \(p^{c}_{\text{geo}}\), and \(p^{c,j}_{\text{geo}}\) denotes the \(j\)-th entry of \(p^{c}_{\text{geo}}\).
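A NumPy sketch of Eq. (3) and of one plausible reading of Algorithm 1 (keep the highest-frequency GWs until a fraction \(\alpha\) of the total mass is covered, zero out the rest) is given below; the demo arrays are random stand-ins.

```python
import numpy as np

def geometric_prototype(f_geo_hard):
    """f_geo_hard: (N_c, H) one-hot hard assignments of the points of one
    class; returns the normalized frequency histogram of Eq. (3)."""
    return f_geo_hard.mean(axis=0)

def prune_minor_frequencies(p_geo, alpha=0.9):
    """Sketch of Algorithm 1: retain high-frequency geometric words whose
    cumulative mass stays within the frequency limit alpha."""
    order = np.argsort(p_geo)[::-1]            # GWs by descending frequency
    keep = np.cumsum(p_geo[order]) <= alpha
    keep[0] = True                             # always keep the top GW
    p_pgeo = np.zeros_like(p_geo)
    p_pgeo[order[keep]] = p_geo[order[keep]]   # zero out minor, noisy GWs
    return p_pgeo

# Tiny demo with random stand-in assignments (H = 200 geometric words):
f_hard = np.eye(200)[np.random.randint(0, 200, size=5000)]
p_pgeo = prune_minor_frequencies(geometric_prototype(f_hard), alpha=0.9)
```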
The computation of the matching score is then updated as: \[s^{c}=\mathds{1}\left[p^{c}_{\text{pgeo}}\cdot\hat{f}_{\text{geo}}\right]. \tag{5}\] To highlight the geometrically matched classes in the prediction, we set a weight \(w^{c}\) on the prediction of each class according to the matching score as follows: \[w^{c}:=\begin{cases}\beta&s^{c}=1\\ 1&s^{c}=0\end{cases}. \tag{6}\] We set \(\beta>1\) to highlight the potential target classes of the query point. We then re-weight the semantic classification logit \(l^{c}_{\text{sem}}\) with \(w^{c}\) to compute the final prediction logit \(l^{c}_{\text{fin}}\) as follows: \[l^{c}_{\text{fin}}=w^{c}\times l^{c}_{\text{sem}},\qquad l^{c}_{\text{sem}}=p ^{c}_{\text{sem}}\cdot f_{\text{fin}}. \tag{7}\] The logit \(l^{c}_{\text{fin}}\) considers both semantic and geometric similarity when segmenting new classes, which is more reliable than using the semantic prediction logit \(l^{c}_{\text{sem}}\) alone, as shown in Tab. 4. Finally, we predict the label \(y\) for each query point as follows: \[y=\operatorname{argmax}\left(\operatorname{Softmax}\left(\left[l^{1}_{\text{ fin}},...,l^{|C_{\text{test}}|}_{\text{fin}}\right];\tau\right)\right). \tag{8}\] ## 5 Experiments ### Datasets and Setup **Datasets.** We evaluate on two datasets: 1) S3DIS [1] consists of 272 point clouds from six areas with annotations for 13 semantic classes. We use area 6 as the testing dataset \(D_{\text{test}}\), and leverage the other five areas to construct the training datasets for base and novel classes. 2) ScanNet [4] consists of 1,513 point clouds with annotations for 20 semantic classes. We use 1,201 point clouds to construct the training datasets \(D^{b}_{\text{train}}\) and \(D^{n}_{\text{train}}\) and the remaining 312 point clouds to construct \(D_{\text{test}}\). For both datasets, we choose the 6 classes with the fewest labeled points in the corresponding dataset as the novel classes \(C^{n}\) and the remaining classes as the base classes \(C^{b}\). The motivation is to simulate the real-world scenario, where novel classes occur infrequently and it is hard to collect sufficient training data. Consequently, the novel classes for S3DIS are table, window, column, beam, board and sofa. The novel classes for ScanNet are sink, toilet, bathtub, shower curtain, picture and counter. We follow the data pre-processing of [35] to divide each point cloud into blocks with a size of \(1\) meter \(\times 1\) meter on the \(xy\) plane. From each block, we sample \(m=2,048\) points as input. The dimension \(d_{0}\) of the input feature is 9, comprising XYZ, RGB, and the XYZ coordinates normalized to the block. **Evaluation Metrics.** We evaluate the performance of the model using mean intersection-over-union (mIoU). We use Figure 4: Illustration of minor frequency pruning. The top left figure shows the green points on the table are activated to the GW representing the horizontal plane, while the red points on the vertical plane of the wall are wrongly activated to the same GW due to scene context. The right figure shows the proposed minor frequency pruning to suppress the activation to the wrong GWs introduced by scene context. mIoU-B, mIoU-N and mIoU-A to denote the mIoU on the base classes, novel classes, and all the classes, respectively. In addition, we use the harmonic mean of mIoU-B and mIoU-N to better describe the overall performance on base and novel classes, _i.e._, HM \(=\frac{2\times\text{mIoU-B}\times\text{mIoU-N}}{\text{mIoU-B}+\text{mIoU-N}}\). In comparison to mIoU-A, HM is not biased towards the base classes [31].
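Pulling Eqs. (4)-(8) together, here is a minimal NumPy sketch of the re-weighted prediction for a single query point; the array shapes and the plain-dot-product reading of the paper's "\(\cdot\)" are our assumptions (for a one-hot \(\hat{f}_{\text{geo}}\) and non-negative histograms, dot product and cosine similarity agree in sign). The harmonic-mean metric used above is included for completeness.

```python
import numpy as np

def gcr_predict(f_fin, f_geo_hard, P_sem, P_pgeo, beta=1.2):
    """One query point. f_fin: (d3,), f_geo_hard: (H,) one-hot;
    P_sem: (C, d3) semantic prototypes, P_pgeo: (C, H) pruned GPs."""
    l_sem = P_sem @ f_fin                     # semantic logits, Eq. (7)
    s = (P_pgeo @ f_geo_hard) > 0             # matching score, Eqs. (4)-(5)
    w = np.where(s, beta, 1.0)                # class weights, Eq. (6)
    l_fin = w * l_sem                         # re-weighted logits, Eq. (7)
    return int(np.argmax(l_fin))              # Eq. (8); softmax is monotone

def harmonic_mean(miou_b, miou_n):
    """HM of base/novel mIoU, as used in the evaluation protocol."""
    return 2 * miou_b * miou_n / (miou_b + miou_n)
```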
### Implementation details We adopt the feature extractor of [35] as \(E\) and pre-train it on \(D_{\text{train}}^{b}\) for 100 epochs. We then perform K-means on the collection of the base class features \(\{f_{\text{low}}\}\). We set \(H\) to 200 for S3DIS and 180 for ScanNet. During base class training, we perform geometric-aware semantic representation learning and learn semantic prototypes for the base classes \(\left\{p_{\text{sem}}^{c}\mid c\in C^{b}\right\}\). \(d_{1}\) and \(d_{2}\) are both 192 following attMPTI [35], and \(d_{3}\) is set to 128. We set the batch size to 32 and train for 150 epochs. We use the Adam optimizer with an initial learning rate of 0.01, decayed by a factor of 0.5 every 50 epochs. We load the pre-trained weights of the first three EdgeConv layers and set their learning rate to 0.001. We compute the geometric prototypes for the base classes \(\left\{p_{\text{pgeo}}^{c}\mid c\in C^{b}\right\}\) after training completes. During the testing stage, we first obtain the semantic prototypes \(\left\{p_{\text{sem}}^{c}\mid c\in C^{n}\right\}\) and geometric prototypes \(\left\{p_{\text{pgeo}}^{c}\mid c\in C^{n}\right\}\) for the novel classes by averaging \(f_{\text{fin}}\) and \(\hat{f}_{\text{geo}}\) (followed by minor frequency pruning) of the foreground points in the support set, respectively. We then predict the class labels for each query point via the proposed GCR. \(\alpha\) is set to 0.9 and 0.95 for S3DIS and ScanNet, respectively. \(\beta\) is set to 1.2. ### Baselines We design three baselines for comparison with our method. 1) **attMPTI** [35] is the state-of-the-art FS-3DSeg method. We follow the original implementation in [35] and episodically train attMPTI on the base class dataset. Upon finishing training, we collect multi-prototypes for the base classes. During the testing stage, we first generate multi-prototypes for the novel classes from \(D_{\text{train}}^{n}\). We then estimate the query label by performing label propagation among query points and the prototypes of base and novel classes. 2) **PIFS** [2] is the state-of-the-art method for GFS-2DSeg that fine-tunes on \(D_{\text{train}}^{n}\) to learn novel classes. We apply their proposed prototype-based distillation loss to only the scores of novel classes since we do not provide the annotation of base classes in \(D_{\text{train}}^{n}\). 3) **CAPL** [25] is the state-of-the-art method in GFS-2DSeg that performs prototype learning to learn novel classes. We remove the SCE module of CAPL since the annotations of base classes in \(D_{\text{train}}^{n}\) are not available. All the baselines use the same feature extractor as ours for a fair comparison. In addition, we design an oracle setting, **Fully Supervised**, where the model is trained on the fully annotated dataset of base and novel classes using the same feature extractor as ours and a small segmentation head. ### Comparison with Baselines Tab. 2 and Tab. 3 show the results of GFS-3DSeg on S3DIS and ScanNet, respectively. We conduct experiments in two settings with the number of support point clouds \(K=\{1,5\}\) on each dataset. We randomly generate 5 sets of \(D_{\text{train}}^{n}\) using different seeds for each setting and calculate the averaged results over all 5 sets to obtain more reliable results. The segmentation accuracy on the novel classes clearly increases with the number of shots.
Compared with all the baselines, our method is able to utilize the limited number of training samples from \(D_{\text{train}}^{n}\) in a more effective way and achieves much better performance in terms \begin{table} \begin{tabular}{c|c c c c|c c c c} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{5-shot} & \multicolumn{4}{c}{1-shot} \\ \cline{2-9} & mIoU-B & mIoU-N & mIoU-A & HM & mIoU-B & mIoU-N & mIoU-A & HM \\ \hline Fully Supervised & 76.51 & 58.69 & 68.29 & 66.42 & 76.51 & 58.69 & 68.29 & 66.42 \\ \hline attMPTI [35] & 34.90 & 16.08 & 26.21 & 21.99 & 21.89 & 11.39 & 17.05 & 14.95 \\ PIFS [2] & 56.99 & 19.66 & 39.76 & 29.23 & 57.85 & 14.59 & 37.88 & 23.31 \\ CAPL [25] & 73.56 & 35.18 & 55.85 & 47.51 & 72.80 & 23.87 & 50.22 & 35.67 \\ Ours & **73.61** & **43.26** & **59.60** & **54.42** & **74.10** & **29.66** & **53.58** & **41.92** \\ \hline \end{tabular} \end{table} Table 2: Results on **S3DIS** under 5-shot and 1-shot settings. \begin{table} \begin{tabular}{c|c c c c|c c c c} \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{5-shot} & \multicolumn{4}{c}{1-shot} \\ \cline{2-9} & mIoU-B & mIoU-N & mIoU-A & HM & mIoU-B & mIoU-N & mIoU-A & HM \\ \hline Fully Supervised & 43.12 & 37.04 & 41.34 & 39.85 & 43.12 & 37.04 & 41.34 & 39.85 \\ \hline attMPTI [35] & 16.31 & 3.12 & 12.35 & 5.21 & 12.97 & 1.62 & 9.57 & 2.88 \\ PIFS [2] & 35.14 & 3.21 & 25.56 & 5.88 & 35.80 & 2.54 & 25.82 & 4.75 \\ CAPL [25] & 38.22 & 14.39 & 31.07 & 20.88 & 38.70 & 10.59 & 30.27 & 16.53 \\ Ours & **40.18** & **18.58** & **33.70** & **25.39** & **40.06** & **14.78** & **32.47** & **21.55** \\ \hline \end{tabular} \end{table} Table 3: Results on **ScanNet** under 5-shot and 1-shot settings. of the HM and mIoU-N. attMPTI [35] fails to perform well on GFS-3DSeg since it only focuses on establishing the decision boundary for the classes appearing in each episode. When all the classes are included in the evaluation, the original decision boundary collapses. PIFS [2] also performs poorly on GFS-3DSeg. The large intra-class variances of 3D objects make novel class adaptation difficult for the 2D fine-tuning method. Moreover, their fine-tuning method leads to severe catastrophic forgetting of the base classes due to the absence of base class training data. Our method is based on CAPL [25]. Compared to CAPL, which mainly utilizes context information to enhance the semantic prototypes of the base classes, we focus on enhancing the learning of the new classes by leveraging the transferable knowledge of GWs. As a result, our method shows clear superiority over CAPL on the novel classes. This can also be verified in Fig. 5, where our model is able to segment the target new classes (board and table in the first row, and beam in the second row) more precisely. **Effectiveness of the minor frequency pruning.** In Tab. 6, we analyze the effect of the minor frequency pruning in the geometric prototype. The GP without pruning (\(\alpha=1\)) shows the worst performance compared to the pruned GPs. \(\alpha=0.9\) gives the best performance, so we adopt \(\alpha=0.9\) in our final model. **Analysis of the weight \(\beta\).** We present the ablation study of the weight \(\beta\) in Tab. 7. \(\beta=1\) does not highlight any potential target classes predicted by geometric matching, which is equivalent to the model without GCR. Setting a moderate weight \(>1\) helps improve the segmentation performance, so we choose \(\beta=1.2\) for model testing. **Using a SOTA feature extractor.** Tab.
8 shows the results of replacing the DGCNN-based feature extractor [35] with Point Transformer v2 (PTv2) under the 5-shot setting of S3DIS. Our method outperforms the strongest baseline, CAPL, by a large margin on novel classes and in overall performance. This verifies that our method works successfully with a state-of-the-art point cloud feature extractor. We notice that the mIoU-N in Tab. 8 is lower than that of Tab. 2. One possible reason is that the feature extractor in [35] is specially designed to be able to quickly learn new classes from a small support set. **Standard deviation and per-class IoU.** Tab. 10 shows the standard deviation results over the 5 testing sets of the 5-shot setting of S3DIS. Our model shows variation similar to CAPL's. Tab. 9 shows the per-class IoU for the 5-shot setting of S3DIS. Our method largely outperforms CAPL for all new classes while maintaining on-par performance on base classes. ## 6 Conclusion In this paper, we present the unexplored yet important task of generalized few-shot point cloud segmentation. We address the challenge of facilitating new class segmentation with limited training samples by utilizing geometric words (GWs), transferable knowledge mined from the base classes. We propose the geometric-aware semantic representation to learn a generalizable representation in which geometric features described through GWs are fused with the semantic representation. We further propose the geometric prototype (GP) to supplement the semantic prototype in the testing stage. Extensive experiments on two benchmark datasets demonstrate the superiority of our method. **Acknowledgement.** This research work is fully done at the National University of Singapore and is supported by the Agency for Science, Technology and Research (A*STAR) under its MTC Programmatic Funds (Grant No. M23L7b0021). ## Appendix A Experimental Results on the 3-shot Setting To further validate the effectiveness of our model, we compare our method with the baselines under the 3-shot setting on S3DIS and ScanNet in Tab. A1 and Tab. A2, respectively. The results consistently illustrate that our model outperforms all baselines by a large margin on novel class segmentation, and achieves the best overall performance. ### Ablation Study Tab. C3 shows the ablation study on ScanNet. Both the geometric-aware semantic representation (GSR) and the geometric-guided classifier re-weighting (GCR) are beneficial to novel class generalization, and our full model with both GSR and GCR performs best in overall segmentation accuracy. ### Qualitative Results The qualitative results in Fig. C2 demonstrate that our model can segment novel classes (Picture in the first row, Toilet and Sink in the second row) more precisely than CAPL [25]. Meanwhile, we still maintain good segmentation performance on base classes. ### Visualization of Geometric Words Fig. C3 visualizes the geometric words (GWs) on ScanNet. Each row shows two point clouds activated with respect to the same geometric word in different scenes. In the first row, the edges of the sofa, table, bathtub, and toilet are all activated by the same GW. In the second row, the stick-like legs of chairs and tables are activated. This suggests that the GWs are able to represent shared geometric components **across different scenes and different classes**. Interestingly, we also find that GWs are height-aware: the activated parts in the third and fourth rows, corresponding to two GWs, represent vertical planes of different heights.
2309.05257
FusionFormer: A Multi-sensory Fusion in Bird's-Eye-View and Temporal Consistent Transformer for 3D Object Detection
Multi-sensor modal fusion has demonstrated strong advantages in 3D object detection tasks. However, existing methods that fuse multi-modal features require transforming features into the bird's eye view space and may lose certain information on the Z-axis, thus leading to inferior performance. To this end, we propose a novel end-to-end multi-modal fusion transformer-based framework, dubbed FusionFormer, that incorporates deformable attention and residual structures within the fusion encoding module. Specifically, by developing a uniform sampling strategy, our method can easily sample from 2D image and 3D voxel features simultaneously, thus exploiting flexible adaptability and avoiding explicit transformation to the bird's eye view space during the feature concatenation process. We further implement a residual structure in our feature encoder to ensure the model's robustness in case an input modality is missing. Through extensive experiments on a popular autonomous driving benchmark dataset, nuScenes, our method achieves state-of-the-art single-model performance of 72.6% mAP and 75.1% NDS in the 3D object detection task without test-time augmentation.
Chunyong Hu, Hang Zheng, Kun Li, Jianyun Xu, Weibo Mao, Maochun Luo, Lingxuan Wang, Mingxia Chen, Qihao Peng, Kaixuan Liu, Yiru Zhao, Peihan Hao, Minzhe Liu, Kaicheng Yu
2023-09-11T06:27:25Z
http://arxiv.org/abs/2309.05257v3
FusionFormer: A Multi-sensory Fusion in Bird's-Eye-View and Temporal Consistent Transformer for 3D Object Detection ###### Abstract Multi-sensor modal fusion has demonstrated strong advantages in 3D object detection tasks. However, existing methods that fuse multi-modal features require transforming features into the bird's eye view space and may lose certain information on the Z-axis, thus leading to inferior performance. To this end, we propose a novel end-to-end multi-modal fusion transformer-based framework, dubbed FusionFormer, that incorporates deformable attention and residual structures within the fusion encoding module. Specifically, by developing a uniform sampling strategy, our method can easily sample from 2D image and 3D voxel features simultaneously, thus exploiting flexible adaptability and avoiding explicit transformation to the bird's eye view space during the feature concatenation process. We further implement a residual structure in our feature encoder to ensure the model's robustness in case an input modality is missing. Through extensive experiments on a popular autonomous driving benchmark dataset, nuScenes, our method achieves state-of-the-art single-model performance of \(72.6\%\) mAP and \(75.1\%\) NDS in the 3D object detection task without test-time augmentation. ## 1 Introduction Autonomous driving technologies typically rely on multiple sensors for safety considerations, such as LiDAR (Chen et al., 2023; Yin et al., 2021; Wang et al., 2020; Lang et al., 2019), cameras (Wang et al., 2021; Wang et al., 2022), and radar (Meyer and Kuschk, 2019; Meyer et al., 2021). These sensors possess distinct characteristics. For example, LiDAR can provide accurate yet sparse point clouds with 3D information, while images have dense features but lack such depth information. To enhance performance, multi-modal fusion can be used to integrate the strengths of these sensors. By combining information from multiple sensors, autonomous driving systems can achieve better accuracy and robustness, making them more reliable for real-world applications. Fusing multi-modality features via simple concatenation in bird's eye view (BEV) space has become a de facto standard to achieve state-of-the-art performance. As shown in Figure 1, current fusion frameworks fuse features from the LiDAR point cloud and images in BEV space via simple concatenation (Liu et al., 2023; Liang et al., 2022) or a transformer architecture (Yan et al., 2023). However, we conjecture that these approaches have two limitations. In order to fuse information at the BEV level, we must first transform the 2D image features into 3D via a geometric view transformation (Philion and Fidler, 2020). This process requires a monocular depth estimation module, which is an ill-posed problem and can generate inaccurate feature alignment. We believe that a superior approach is to exploit features from the sparse point cloud to assist this process. Concurrently, Yan et al. (2023) proposes a transformer that leverages positional encoding to encode image features, which can be viewed as an alternative approach to alleviate this issue. However, all the aforementioned methods explicitly transform the point voxel features into BEV space before the fusion module by compressing the features along the Z-axis into vectors. This may hinder the performance of downstream tasks that involve height information, such as 3D object detection, where one needs to predict the height of the bounding box. 
To tackle the above problems, we propose a novel multi-modal fusion framework for 3D object detection, dubbed FusionFormer. As shown in Figure 1 (c), FusionFormer can generate fused BEV features by sequentially fusing LiDAR and image features with deformable attention (Zhu et al., 2020), which inherently samples features at the reference points corresponding to the BEV queries. By developing a uniform sampling strategy, our FusionFormer can easily sample from 2D image and 3D voxel features at the same time; it thus exhibits flexible adaptability across different modality inputs and avoids explicit transformation and the need for monocular depth estimation. As a result, multi-modal features can be input in their original forms, avoiding the information loss incurred when transforming them into BEV features. During the fusion encoding process, the point cloud features can serve as depth references for the view transform of image features, while the dense semantic features from images reciprocally complement the sparsity of point cloud features, leading to the generation of more accurate and dense fused BEV features. Notably, the multi-modal fusion encoder incorporates residual structures, ensuring the model's robustness in the presence of missing point cloud or image features. We also propose a plug-and-play temporal fusion module along with our FusionFormer to support temporal fusion of BEV features from previous frames. In addition, to verify the effectiveness and flexibility of our approach, we replace the features obtained from LiDAR point clouds with voxel features obtained from image-only monocular depth estimation, constructing a FusionFormer that uses only the camera modality. In summary, we present the following contributions in this paper: Figure 1: **Comparison between state-of-the-art methods and our FusionFormer.** **(a)** In BEVFusion-based methods, the camera features and point features are transformed into BEV space and fused with concatenation. **(b)** In CMT, the point voxel features are first compressed into BEV features and then encoded with the same positional encoding as the image features. Then, each object query is passed into a transformer decoder to generate the prediction result. **(c)** In FusionFormer, the fusion of multi-modal features is achieved by sequentially interacting BEV queries with the original point cloud voxel features and image features. This interaction leverages the depth references provided by point cloud features for the view transformation of image features, while the image features complement the sparsity of point cloud features. As a result, more accurate and dense fused BEV representations are obtained. Additionally, FusionFormer incorporates a temporal fusion encoding module, enabling the fusion of BEV features from historical frames. * We observe that state-of-the-art multi-modality frameworks explicitly compress the voxel features into BEV space before fusing them with image features, which might lead to inferior performance, and we propose a novel transformer-based framework with a uniform sampling strategy to address this issue. * We also demonstrate that our method is flexible and can be transformed into a camera-only 3D object detector by replacing the LiDAR features with image features processed through monocular depth estimation. * Our method achieves state-of-the-art single-model performance of \(72.6\%\) mAP and \(75.1\%\) NDS in the 3D object detection task on the nuScenes dataset without test-time augmentation. 
## 2 Related Work Visual-centric 3D Object Detection. In recent years, camera-based 3D object detection has gained increasing attention in the field of autonomous driving. Early approaches relied on predicting the 3D parameters of objects based on the results of 2D object detection (Park et al., 2021; Wang et al., 2021). Recently, BEV-based 3D object detection has become a hot research topic (Xie et al., 2022). Compared to previous methods, BEV-based 3D object detection can directly output 3D object detection results around the vehicle using multi-view camera images, without requiring post-processing of detection results in overlapping regions. Inspired by LSS (Philion and Fidler, 2020), recent works like BEVDet (Huang et al., 2021) and BEVDepth (Li et al., 2023) have used bin-based depth prediction to transform multi-view camera features into BEV space. PETR (Liu et al., 2022) realizes a camera-based BEV method with a transformer by adding 3D position encoding. DETR3D (Wang et al., 2022) and BEVFormer (Li et al., 2022) use deformable attention so that each query in BEV space interacts, during the transformer process, with local features around its projected position, achieving the transformation from multi-view camera space to BEV space. LiDAR-centric 3D Object Detection. LiDAR-based 3D object detection methods can be categorized into different types based on the representation form of the point cloud features. Point-wise methods extract features directly from individual points and output 3D object detection results end-to-end (Qi et al., 2018; Paigwar et al., 2019). BEV-based methods, on the other hand, construct intermediate feature forms before transforming them into BEV space (Yin et al., 2021). For instance, VoxelNet (Zhou and Tuzel, 2018) voxelizes the raw point cloud and applies sparse 3D convolutions to obtain voxel features. These features are subsequently compressed along the Z dimension to obtain BEV features. In contrast, PointPillars (Lang et al., 2019) projects the point cloud into multiple pillars and pools the points within each pillar to extract features for BEV-based detection. Temporal-aware 3D Object Detection. Temporal fusion has emerged as a hot research topic in the field of 3D object detection for its ability to enhance detection stability and the perception of target motion. BEVFormer (Li et al., 2022) uses spatiotemporal attention to fuse the historical BEV features of the previous frame with current image features. BEVDet4D (Huang and Huang, 2022) employs concatenation to fuse temporally aligned BEV features from adjacent frames. SOLOFusion (Park et al., 2022) further leverages this approach to achieve long-term temporal fusion. Some methods perform temporal information fusion directly on the original feature sequences based on queries. For instance, PETRv2 (Liu et al., 2022) employs global attention and temporal position encoding to fuse temporal information, while Sparse4D (Lin et al., 2022) models the relationship between multiple frames based on sparse attention. Additionally, StreamPETR (Wang et al., 2023) introduces a method for long-term fusion by leveraging object queries from past frames. Multi-modal 3D Object Detection. Fusing multi-sensory features has become a de facto standard in 3D perception tasks. BEVFusion-based methods (Liu et al., 2023; Liang et al., 2022; Cai et al., 2023) obtain image BEV features using a view transform (Philion and Fidler, 2020; Li et al., 2022) and concatenate them with LiDAR BEV features via simple concatenation. 
However, such a simple strategy may fail to fully exploit the complementary information between multi-modal features. Another line of approaches constructs transformer-based architectures (Bai et al., 2022; Wang et al., 2023; Yang et al., 2022) to perform interaction between image and point cloud features. These methods rely simultaneously on both image and point cloud features, which poses robustness challenges when one modality is missing. Concurrently, Yan et al. (2023) proposes a method, dubbed CMT, which adopts 3D position encoding to achieve end-to-end multimodal fusion-based 3D object detection using a transformer. Nonetheless, the aforementioned fusion methods rely on compressing point cloud voxel features into BEV representations, which can result in the loss of height information. To tackle this, UVTR (Li et al., 2022b) introduced knowledge transfer to perform voxel-level multi-modal fusion by directly combining LiDAR voxel features with image voxel features obtained through LSS. However, this approach did not yield notable improvements in performance. Unlike these approaches, FusionFormer demonstrates enhanced adaptability to the input format of multimodal features, allowing direct utilization of point cloud features in voxel form. Moreover, by incorporating deformable attention and residual structures within the fusion encoding module, FusionFormer can achieve both multimodal feature complementarity and robustness in handling missing modal data. ## 3 Method Here we present our method in detail. Figure 2 (a) illustrates our proposed FusionFormer for multimodal temporal fusion. By utilizing a fusion encoder based on deformable attention (Lin et al., 2022), LiDAR and image features are transformed into fused BEV features. Compared to previous approaches such as BEVFusion (Liu et al., 2023; Liang et al., 2022), FusionFormer can adapt to different feature representations of different modalities without requiring pre-transformation into BEV space. The image branch can retain its original 2D feature representation, while the point cloud branch can be represented as BEV features or voxel features. Detailed information regarding the image branch and point cloud branch can be found in Section A.1 of the appendix. The temporal fusion module utilizes deformable attention to fuse temporally aligned BEV features from the current and previous frames. The fused multimodal temporal BEV features are then input into the detection head to obtain 3D object detection results. ### Multi-modal Fusion Encoder As illustrated in Figure 2 (b), the fusion encoding module consists of 6 layers, each incorporating self-attention, points cross-attention, and images cross-attention. In accordance with the standard transformer architecture, the BEV queries are subjected to self-attention following initialization. Subsequently, points cross-attention is executed to facilitate the integration of LiDAR features, which is further enhanced through images cross-attention to fuse image features. The encoding layer outputs the updated queries, after they are processed through a feed-forward network, as input to the next layer. After 6 layers of fusion encoding, the final multimodal fusion BEV features are obtained. Figure 2: **(a) The framework of the FusionFormer. The LiDAR point cloud and multi-view images are processed separately in their respective backbone networks to extract voxel features and image features. 
These features are then inputted into a multi-modal fusion encoder (MMFE) to generate the fused BEV features. The fused BEV features of the current frame, along with the BEV features from historical frames, are jointly fed into a temporal fusion encoder (TFE) to obtain the multi-modal temporal fused BEV features. Finally, the features are utilized in the detection head to produce the final 3D object detection results. (b) The architecture of the Multi-modal Fusion Encoder (MMFE). The BEV queries are initialized and subsequently subjected to self-attention. They are then sequentially utilized for cross-attention with the point cloud voxel features and image features. The resulting BEV queries, updated through a feed-forward network, are propagated as inputs to the subsequent encoder layers. Following multiple layers of fusion encoding, the ultimate fused BEV feature is obtained.** BEV Queries.We partition the BEV space within the surrounding region of interest (ROI) range around the vehicle's center into a grid of \(H\times W\) cells. Correspondingly, we define a set of learnable parameters \(Q\) to serve as the queries for the BEV space. Each \(q\) corresponds to a cell in the BEV space. Prior to inputting Q into the fusion encoder, the BEV queries are subjected to position encoding based on their corresponding BEV spatial coordinates (Li et al., 2022c). Self-Attention.To reduce computational resource usage, we implemented the self-attention based on deformable attention. Each BEV query interacts only with its corresponding queries within the ROI range. This process is achieved through feature sampling at the 2D reference points for each query as illustrated below: \[SA(Q_{p})=DefAttn(Q_{p},p,Q) \tag{1}\] where \(Q_{p}\) represents the BEV query at point \(p=(x,y)\). Points Cross-Attention.The points cross-attention layer is also implemented based on deformable attention, but the specific manner in which points cross-attention is implemented varies depending on the form of the LiDAR points features. For the case where BEV features are used as input, we implement the points cross-attention layer as follows: \[PCA_{2D}(Q_{p},B_{pts})=DefAttn(Q_{p},P_{2D},B_{pts}) \tag{2}\] where \(B_{pts}\) represents the BEV features output by the LiDAR branch, and \(P_{2D}=(x_{2D},y_{2D})\) represents the 2D projection of the coordinate \(p=(x,y)\) onto the point cloud BEV space. For the case where voxel features are used as input, the points cross-attention layer is implemented as follows: \[PCA_{3D}(Q_{p},V_{pts})=\sum_{i=1}^{N_{ref}}DefAttn(Q_{p},P_{3D}(p,i),V_{pts}) \tag{3}\] where \(V_{pts}\) represents the voxel features output by the LiDAR branch. To obtain the 3D reference points, we first expand the grid cell corresponding to each BEV query with a height dimension, similar to the pillar representation (Lang et al., 2019). Then, from each pillar corresponding to a query, we sample a fixed number of \(N_{ref}\) reference points, which are projected onto the point cloud voxel space using the projection equation \(P_{3D}\). Specifically, for each query located at \(p=(x,y)\), a set of height anchors \(\{z_{i}\}_{i=1}^{N_{ref}}\) are defined along its \(Z\)-axis. Consequently, for each BEV query \(Q_{p}\), a corresponding set of 3D reference points \((x,y,z_{i})_{i=1}^{N_{ref}}\) is obtained. And the projection equation is as follow: \[P_{3D}(p,i)=(x_{pts},y_{pts},z_{pts}) \tag{4}\] where \(P_{3D}(p,i)\) is the projection of the i-th 3D reference point of BEV query \(Q_{p}\) in the LiDAR space. 
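To make the sampling geometry concrete, the following PyTorch sketch builds the 3D reference points behind Eqs. (3)-(4) and projects them into a camera frame as used by the images cross-attention described next. The BEV range, number of height anchors, and the `lidar2img` matrix are illustrative assumptions, not values taken from the paper.

```python
import torch

H = W = 200                                   # BEV query grid (as in Sec. 4.1)
N_ref = 4                                     # assumed number of height anchors
z_anchors = torch.linspace(-3.0, 2.0, N_ref)  # assumed height anchors along Z

xs = torch.linspace(-51.2, 51.2, W)           # assumed BEV range in meters
ys = torch.linspace(-51.2, 51.2, H)
gx, gy = torch.meshgrid(xs, ys, indexing="xy")
# Each BEV query p = (x, y) is lifted to N_ref 3D reference points (x, y, z_i).
ref3d = torch.stack([gx[..., None].expand(H, W, N_ref),
                     gy[..., None].expand(H, W, N_ref),
                     z_anchors.expand(H, W, N_ref)], dim=-1)  # (H, W, N_ref, 3)

def project_to_camera(ref3d, lidar2img):
    """lidar2img: (4, 4) projection matrix; returns pixel coords and a mask
    marking points that land in front of the camera (the V_hit test)."""
    pts = torch.cat([ref3d, torch.ones_like(ref3d[..., :1])], dim=-1)
    cam = pts @ lidar2img.T                   # homogeneous projection
    depth = cam[..., 2:3].clamp(min=1e-5)     # guard the perspective divide
    uv = cam[..., :2] / depth
    hit = cam[..., 2] > 0                     # invalid points are masked out
    return uv, hit

uv, hit = project_to_camera(ref3d, torch.eye(4))   # identity matrix as a demo
```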
Images Cross-Attention. The implementation of the images cross-attention is similar to the points cross-attention with voxel features as input. Since the images come from multiple views, the 3D reference points of each query can only be projected onto a subset of the camera views. Following BEVFormer (Li et al., 2022c), we denote the views onto which a point can be projected as \(V_{hit}\). Therefore, the images cross-attention process can be expressed as: \[ICA(Q_{p},F)=\frac{1}{V_{hit}}\sum_{i=1}^{N_{ref}}\sum_{j=1}^{V_{hit}}DefAttn( Q_{p},P(p,i,j),F_{j}) \tag{5}\] where \(j\) is the index of the camera view, \(F_{j}\) represents the image features of the \(j\)-th camera, and \(P(p,i,j)\) represents the projection point of the \(i\)-th 3D reference point \((x,y,z_{i})\) of query \(Q_{p}\) in the image coordinate system of the \(j\)-th camera. ### Temporal Fusion Encoder As shown in Figure 3, the temporal fusion encoder (TFE) consists of three layers, each comprising BEV temporal-attention and feedforward networks. At the first layer, the queries are initialized with the BEV features of the current frame and then updated through temporal-attention using historical BEV features. The resulting queries are passed through a feedforward network and serve as input to the next layer. After three layers of fusion encoding, the final temporal fusion BEV features are obtained. The temporal-attention process can be expressed as: \[TCA(Q_{p},B)=\sum_{i=0}^{T}DefAttn(Q_{p},P,B_{t-i}) \tag{6}\] where \(B_{t-i}\) represents the BEV feature at time \(t-i\). ### Fusion with Depth Prediction The flexibility of FusionFormer enables us to approximate the point cloud branch in scenarios where only camera images are available by adding an image-based monocular depth prediction branch. As illustrated in Figure 4, we propose a depth prediction network to generate interval-based depth predictions from input image features. 3D convolution is utilized to encode the depth prediction results as voxel features in each camera frustum. Depth cross-attention is then employed to fuse the depth features. The process of depth cross-attention is defined as follows: \[DCA(Q_{p},D)=\frac{1}{V_{hit}}\sum_{i=1}^{N_{ref}}\sum_{j=1}^{V_{hit}}DefAttn(Q _{p},P(p,i,j),D_{j}) \tag{7}\] where \(D_{j}\) denotes the encoded depth prediction features of the \(j\)-th camera, and \(P(p,i,j)\) represents the projection point of the \(i\)-th 3D reference point \((x,y,z_{i})\) of query \(Q_{p}\) onto the frustum coordinate system of the \(j\)-th camera. ## 4 Experiments This section presents the performance of our proposed FusionFormer on the task of 3D object detection, along with several ablation studies that analyze the benefits of each module in our framework. Figure 4: **Fusion with depth prediction.** After being processed by the backbone network, the multi-view image features are split into two branches. One branch utilizes a feature pyramid network (FPN) to extract multi-scale image features. The other branch employs a monocular depth prediction network to estimate depth and utilizes 3D convolution to encode the depth predictions. The multi-scale image features and the depth embedding are jointly input into the encoder to obtain the BEV features. Figure 3: **Temporal Fusion Encoder (TFE).** The initial set of BEV queries is formed by utilizing the BEV features of the current frame. These queries are then subjected to cross-attention with historical BEV features, including the current frame. 
The resulting queries are updated through a feed-forward network and serve as inputs for the subsequent layer. Through multiple layers of temporal fusion encoding, the final output is obtained, representing the temporally fused BEV feature. ### Experimental Setups Datasets and metrics. We conducted experiments on the nuScenes dataset (Caesar et al., 2020) to evaluate the performance of our proposed method for 3D object detection in autonomous driving. The nuScenes dataset consists of 1.4 million 3D detection boxes from 10 different categories, with each frame of data containing 6 surround-view camera images and LiDAR point cloud data. We employ the nuScenes detection metrics NDS and mAP as evaluation metrics for our experiments. Implementation details. We conducted algorithmic experiments using the open-source project MMDetection3D (Contributors, 2020) based on PyTorch. Specifically, we selected VoVNet-99 (Lee & Park, 2020) as the backbone for the image branch, generating multi-scale image features through FPN (Lin et al., 2017). The input image size was set to \(1600\times 640\). For the LiDAR point cloud branch, VoxelNet (Zhou & Tuzel, 2018) was used as the backbone. The input LiDAR point cloud was voxelized with a size of \(0.075m\). The size of the BEV queries was set to \(200\times 200\). During the training process, we initialized the image branch backbone with weights pre-trained on FCOS3D (Wang et al., 2021). The point cloud branch did not require pre-trained weights and was directly trained end-to-end with the model. We present a 3D detection head based on Deformable DETR (Zhu et al., 2020) that outputs 3D detection boxes and velocity predictions directly from BEV features without the need for non-maximum suppression. To address the unstable matching problem encountered in DETR-like detection heads and accelerate training convergence, we applied the query denoising strategy (Li et al., 2022) during the training process. The model was trained for 24 epochs with the class-balanced grouping and sampling (CBGS) strategy (Zhu et al., 2019). \begin{table} \begin{tabular}{l|c|c c|c c c c c} \hline Methods & Modality & NDS\(\uparrow\) & mAP\(\uparrow\) & mATE\(\downarrow\) & mASE\(\downarrow\) & mAOE\(\downarrow\) & mAVE\(\downarrow\) & mAAE\(\downarrow\) \\ \hline PointPainting(Vora et al.) & CL & 61.0 & 54.1 & 38.0 & 26.0 & 54.1 & 29.3 & 13.1 \\ PointAugmenting(Wang et al.) & CL & 71.1 & 66.8 & 25.3 & **23.5** & 35.4 & 26.6 & 12.3 \\ MVP(Chen et al.) & CL & 70.5 & 66.4 & 26.3 & 23.8 & 32.1 & 31.3 & 13.4 \\ FusionPainting(Xu et al.) & CL & 71.6 & 68.1 & 25.6 & 23.6 & 34.6 & 27.4 & 13.2 \\ TransFusion(Bai et al.) & CL & 71.7 & 68.9 & 25.9 & 24.3 & 35.9 & 28.8 & 12.7 \\ BEVFusion(Liu et al.) & CL & 72.9 & 70.2 & 26.1 & 23.9 & 32.9 & 26.0 & 13.4 \\ BEVFusion(Liang et al.) & CL & 73.3 & 71.3 & **25.0** & 24.0 & 35.9 & 25.4 & 13.2 \\ UVTR(Li et al.) & CL & 71.1 & 67.1 & 30.6 & 24.5 & 35.1 & **22.5** & 12.4 \\ CMT(Yan et al.) & CL & 74.1 & 72.0 & 27.9 & **23.5** & 30.8 & 25.9 & 11.2 \\ DeepInteraction(Yang et al.) & CL & 73.4 & 70.8 & 25.7 & 24.0 & 32.5 & 24.5 & 12.8 \\ BEVFusion4D(Cai et al.) & CLT & 74.7 & **73.3** & - & - & - & - & - \\ \hline FusionFormer & CLT & **75.1** & 72.6 & 26.7 & 23.6 & **28.6** & **22.5** & **10.5** \\ \hline \end{tabular} \end{table} Table 1: Performance comparison on the nuScenes test set. “L” is LiDAR. “C” is camera. “T” is temporal. The results are evaluated using a single model without any test-time-augmentation or ensembling techniques. 
\begin{table} \begin{tabular}{l|c|c|c|c c} \hline Methods & Image Backbone & LiDAR Backbone & Modality & mAP\(\uparrow\) & NDS\(\uparrow\) \\ \hline TransFusion(Bai et al.) & DLA34 & voxel0075 & CL & 67.5 & 71.3 \\ BEVFusion(Liu et al.) & Swin-T & voxel0075 & CL & 68.5 & 71.4 \\ BEVFusion(Liang et al.) & Swin-T & voxel0075 & CL & 67.9 & 71.0 \\ UVTR(Li et al.) & R101 & voxel0075 & CL & 65.4 & 70.2 \\ CMT(Yan et al.) & VoV-99 & voxel0075 & CL & 70.3 & 72.9 \\ DeepInteraction(Yang et al.) & R50 & voxel0075 & CL & 69.9 & 72.6 \\ BEVFusion4D-S(Cai et al.) & Swin-T & voxel0075 & CL & 70.9 & 72.9 \\ BEVFusion4D(Cai et al.) & Swin-T & voxel0075 & CLT & **72.0** & 73.5 \\ \hline FusionFormer-S & VoV-99 & voxel0075 & CL & 70.0 & 73.2 \\ FusionFormer & VoV-99 & voxel0075 & CLT & 71.4 & **74.1** \\ \hline \end{tabular} \end{table} Table 2: Performance comparison on the nuScenes val set. “L” is LiDAR. “C” is camera. “T” is temporal. The “-S” indicates that the model only utilizes single-frame BEV features without incorporating temporal fusion techniques. The results are evaluated using a single model without any test-time-augmentation or ensembling techniques. ### Comparison with State-of-the-Art Methods As shown in Table 1, FusionFormer achieves \(75.1\%\) NDS and \(72.6\%\) mAP on the nuScenes test dataset for 3D object detection, outperforming state-of-the-art methods. We used a single model fused with 8 frames of historical BEV features without any test-time-augmentation or ensembling techniques. We also compared the performance of FusionFormer with other methods on the nuScenes val dataset, as shown in Table 2. Our proposed FusionFormer achieves state-of-the-art performance in both single-frame and temporal-fusion settings, with NDS scores of \(73.2\%\) and \(74.1\%\), respectively. Several detection results of FusionFormer on the nuScenes test set are shown in Figure 5. ### Camera-Based 3D Detection Fused with Depth Prediction As shown in Table 3, FusionFormer achieves \(53.3\%\) NDS and \(43.9\%\) mAP on the nuScenes val dataset with only camera images as input, by fusing with the depth prediction results. Compared with the baseline BEVFormer, the NDS and mAP increased by \(1.6\%\) and \(2.3\%\), respectively. In particular, we found that after introducing the depth prediction branch, the BEV features output by the encoder can converge better. This may be because the depth information carried by the depth prediction branch allows the model to focus more accurately on the target location. As shown in Figure 6 (a), compared to BEVFormer, the BEV features obtained through FusionFormer-Depth are noticeably more focused on the target location. ### Robustness Study During the training process, we incorporated modality masking (Yan et al., 2023; Yu et al., 2023) to enhance the model's robustness to missing modality data. As demonstrated in Table 4, our model can produce desirable results even in scenarios where image or point cloud data is missing, showcasing its strong robustness. These findings highlight the potential of our approach for addressing challenges in multi-modal learning and its promise for practical real-world applications. ### Ablation Study In this section, we investigate the influence of each module on the performance of our proposed multi-modal fusion model for 3D detection. 
We adopted ResNet-50 (He et al., 2016) as the backbone for the image branch, with an input resolution of \(800{\times}320\) for the image and a voxel size of \(0.1m\) for the point cloud branch, outputting \(150{\times}150\) BEV features. It is noteworthy that all the experiments presented in this section were based on a single frame without incorporating temporal \begin{table} \begin{tabular}{l c c c} \hline Method & Modality & mAP\(\uparrow\) & NDS\(\uparrow\) \\ \hline FusionFormer & C & 34.3 & 45.5 \\ FusionFormer & L & 62.5 & 68.6 \\ FusionFormer & CL & 71.4 & 74.1 \\ \hline \end{tabular} \end{table} Table 4: Robustness performance on the nuScenes val set. “L” is LiDAR. “C” is camera. Figure 5: **Qualitative detection results in the nuScenes test set. Bounding boxes with different colors represent Cars(*), Pedestrians(*), Bus(*) and Truck(*).** \begin{table} \begin{tabular}{l c c} \hline Method & mAP\(\uparrow\) & NDS\(\uparrow\) \\ \hline BEVFormer(Li et al.) & 41.6 & 51.7 \\ FusionFormer-Depth & **43.9** & **53.3** \\ \hline \end{tabular} \end{table} Table 3: Results of camera-based 3D detection fused with depth prediction. fusion techniques. The models were trained for 24 epochs without utilizing the CBGS (Zhu et al., 2019) strategy. LiDAR Features. In order to evaluate the impact of fusing voxel features from the point cloud, we conducted experiments comparing the model's performance with LiDAR features using BEV and voxel representations. Table 5 presents the results of all models. In contrast to inputting LiDAR features in BEV form, the voxel input format leads to superior model performance. Notably, the prediction errors for object center location and orientation are significantly reduced. This may be attributed to the preservation of more object structural information along the Z-axis in the voxel format, resulting in more accurate detection outcomes. Modality Fusion. We conducted a comparative analysis of our proposed modality fusion method against other fusion methods to evaluate their performance. For the fusion methods of addition and concatenation, the image BEV features were obtained through BEVFormer (Li et al., 2022c). The experimental results are presented in Table 6. As shown in Figure 6 (b), compared to other fusion methods, the fused BEV features obtained through FusionFormer exhibit a stronger response to the targets. Specifically, the distant cars labeled in the image are excluded from the ground truth (GT) annotations due to the limited points captured by LiDAR. Consequently, conventional multimodal fusion methods, such as simple addition and concatenation, fail to effectively incorporate these distant objects. In contrast, our proposed method, FusionFormer, enables enhanced fusion of multimodal features. It leverages the complementary information from image data to detect distant objects even in scenarios with sparse point cloud data. ## 5 Conclusion In this paper, we propose a novel transformer-based framework with a uniform sampling strategy that overcomes the limitations of existing multi-modality frameworks. Our approach eliminates the need for compressing voxel features into BEV space before fusion with image features, resulting in superior performance. We demonstrate the versatility of our method by transforming it into a camera-only 3D object detector, utilizing image features obtained through monocular depth estimation instead of LiDAR features. 
Our method achieves state-of-the-art performance in the 3D object detection task on the nuScenes dataset. In the future, we will explore applications of FusionFormer to other tasks, such as map segmentation.
2309.15050
The $η/η' \rightarrow π^+ π^- γ$ Decays within BHLS$_2$ and the Muon HVP
The departure of the latest FNAL experimental average for the muon anomalous magnetic moment $a_\mu=(g_\mu-2)/2$ measurements having increased from $4.2 \sigma$ to $5.0 \sigma$, with respect to the White Paper consensus, it may indicate a hint for new physics. As the most delicate piece of $a_\mu$ is its leading order HVP part $a_\mu^{HVP-LO}$, methods to ascertain its theoretical value are crucial to interpret this discrepancy. We propose to examine the dipion spectra from the $\eta/\eta' \rightarrow \pi^+ \pi^- \gamma$ decays in the Hidden Local Symmetry (HLS) context using its BHLS$_2$ broken variant. We thus have at our disposal a framework where the close relationship of the dipion spectra from the $\eta/\eta'$ and $\tau$ decays and of the $e^+e^- \to \pi^+\pi^-$ annihilation can be simultaneously considered. A special focus is put to the high statistic dipion spectra from the $\eta$ decay collected by the KLOE/KLOE2 Collaboration and $\eta'$ decay collected by the BESIII Collaboration, and it is shown that the BHLS$_2$ framework provides a fair account of their dipion spectra. More precisely, it is first proven that a single Omn\`es representation real polynomial is requested, common to both the $\eta$ and $\eta'$ dipion spectra. Moreover, it is shown that fits involving the $\eta/\eta'/\tau$ dipion spectra, and excluding the $e^+e^- \to \pi^+\pi^-$ annihilation data, allow for a prediction of the pion vector form factor data $F_\pi(s)$ which fairly agree with the usual dipion spectra collected in the $e^+e^- \to \pi^+\pi^-$ annihilation channel. Even if more precise $\eta/\eta'/\tau$ dipion spectra would help to be fully conclusive, this confirms the Dispersive Approach results for $a_\mu^{HVP-LO}$ and points towards a common non experiment-dependent origin to this tension with the now well accepted LQCD result.
Maurice Benayoun, Luigi DelBuono, Fred Jegerlehner
2023-09-26T16:27:07Z
http://arxiv.org/abs/2309.15050v2
# BHLS\({}_{2}\) and \(\pi^{+}\pi^{-}\) Final State Interaction : ###### Abstract The departure of the latest FNAL experimental average for the muon anomalous magnetic moment \(a_{\mu}=(g_{\mu}-2)/2\) from the White Paper (WP) consensus [3] having increased from \(4.2\sigma\)[1] to \(5.0\sigma\)[2], it may indicate a hint of new physics. As the most delicate piece of \(a_{\mu}\) is its leading order HVP part \(a_{\mu}^{HVP-LO}\), methods to ascertain its theoretical value are crucial to appropriately interpret this departure from the measurement. We therefore propose to examine closely the dipion spectra from the \(\eta/\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\gamma\) decays in the Hidden Local Symmetry (HLS) context using its BHLS\({}_{2}\) broken variant. We thus have at our disposal a framework where the close relationship of the dipion spectra from the \(\eta/\eta^{\prime}\) and \(\tau\) decays and of the \(e^{+}e^{-}\rightarrow\pi^{+}\pi^{-}\) annihilation can be simultaneously considered. A special focus is put on the high-statistics dipion spectra from the \(\eta\) decay collected by the KLOE/KLOE2 Collaboration and from the \(\eta^{\prime}\) decay collected by the BESIII Collaboration. It is shown that, once the Final State Interaction (FSI) effects are accounted for, the BHLS\({}_{2}\) framework provides a fair account of their dipion spectra. More precisely, it is first proven that a single FSI polynomial is required, common to both the \(\eta\) and \(\eta^{\prime}\) dipion spectra. Moreover, it is shown that fits involving the \(\eta/\eta^{\prime}/\tau\) dipion spectra, and excluding the \(e^{+}e^{-}\rightarrow\pi^{+}\pi^{-}\) annihilation data, allow for a prediction of the pion form factor \(F_{\pi}(s)\) which fairly agrees with the usual dipion spectra collected in the \(e^{+}e^{-}\rightarrow\pi^{+}\pi^{-}\) annihilation channel. Even if more precise \(\eta/\eta^{\prime}/\tau\) dipion spectra would help to reach a fully conclusive statement, this may already be considered as supporting the Dispersive Approach results for \(a_{\mu}^{HVP-LO}\). 
\(\dagger\) _Maurice Benayoun passed away on September 15th, 2023._

###### Contents

* 1 Preamble : Various Aspects of the Dispersive Approach to the Muon HVP
* 2 Introduction
* 3 The Kroll Conditions and VPP Lagrangian Pieces
* 4 The \(\eta/\eta^{\prime}\to\pi^{-}\pi^{+}\gamma\) Decays in the BHLS\({}_{2}\) Framework
* 5 The \(\eta\to\pi^{+}\pi^{-}\gamma\) Amplitude within BHLS\({}_{2}\)
* 6 The \(\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) Amplitude within BHLS\({}_{2}\)
* 7 BHLS\({}_{2}\) and the WZW Box Anomalies
* 8 \(\eta/\eta^{\prime}\) Radiative Decays : The BHLS\({}_{2}\) Dipion Mass Spectra
* 9 Final State Interaction (FSI) in the \(\eta/\eta^{\prime}\) Radiative Decays
* 10 Fits of the \(\eta/\eta^{\prime}\) Radiative Decay Spectra within BHLS\({}_{2}\)
  * 10.1 Available Dipion Spectra from the \(\eta/\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) Decays
  * 10.2 \(\eta/\eta^{\prime}\) Experimental Spectra : Fits in Isolation
  * 10.3 The \(\eta/\eta^{\prime}\) Experimental Spectra : Analysis within the BHLS\({}_{2}\) Context
  * 10.4 Final State Interaction : BHLS\({}_{2}\) Fit Results versus Others
  * 10.5 The \(T^{R2}(\eta/\eta^{\prime})\) Terms in BHLS\({}_{2}\) : The Role of \(\rho^{\pm}\) Exchanges
  * 10.6 Dealing with the Absolute Scale of the \(\eta/\eta^{\prime}\) Dipion Spectra
* 11 \(\eta/\eta^{\prime}\) Decays : The Muon Anomalous Magnetic Moment
  * 11.1 Accuracy of the FSI Parametrization
  * 11.2 The \(\eta/\eta^{\prime}\) Spectra and HVP Estimates
  * 11.3 \(\eta/\eta^{\prime}\) Based Evaluations of the HVP
* 12 Concluding Remarks
* A Brief Outline of the HLS/BHLS\({}_{2}\) Approach
  * A.1 The Unbroken Non-Anomalous HLS Lagrangian
  * A.2 Breaking the HLS Lagrangian I : The BKY Mechanism
  * A.3 Breaking the HLS Lagrangian II : The Covariant Derivative (CD) Breaking
  * A.4 Breaking the HLS Lagrangian III : Dynamical Vector Meson Mixing
  * A.5 The Kinetic Breaking and the [\(\pi^{0},\ \eta,\ \eta^{\prime}\)] System
* B Erratum : The VPP/APP Interaction Pieces in BHLS\({}_{2}\)
* C \(A_{\pm}\) Solutions : The \(AAP\) and \(VVP\) Lagrangians
  * C.1 The \(AAP\) Lagrangian
  * C.2 The \(VVP\) Lagrangian
    * C.2.1 The \(VV\pi\) Lagrangians
    * C.2.2 The \(VV\eta\) Lagrangian
    * C.2.3 The \(VV\eta^{\prime}\) Lagrangian
* D \(A_{\pm}\) Solutions : The \(APPP\) and \(VPPP\) Lagrangians
  * D.1 The \(APPP\) Lagrangian
  * D.2 The \(VPPP\) Lagrangian
* E Brief Analysis of the BHLS\({}_{2}\) Parameters Values

## 1 Preamble : Various Aspects of the Dispersive Approach to the Muon HVP

The muon anomalous magnetic moment \(a_{\mu}\equiv(g_{\mu}-2)/2\), and in particular its hadronic vacuum polarization (HVP) contribution, plays a central role in precision physics: in the Standard Model prediction of \(a_{\mu}\) itself but, as importantly, in a precise calculation of the running electromagnetic fine structure constant \(\alpha_{em}(s)\) and of the electroweak mixing parameter \(\sin^{2}\theta_{W}(s)\). Thereby, accurate predictions suffer from the non-perturbative contributions of low-lying hadron physics, which are uneasy to address precisely from first principles. Recently [2], the Muon \(g-2\) FNAL experiment has re-estimated the previous average of their run 1 data sample [1] and the latest BNL measurement [4] by also considering their run 2 and 3 data samples; this increases the statistics by a factor of \(\simeq 4\). Moreover, the Muon \(g-2\) FNAL Collaboration achieved an improvement by about a factor of 2 of their systematic uncertainty.
The derived updated average :

\[a_{\mu}^{exp.}=116592059(22)\times 10^{-11}\ (0.19\,{\rm ppm})\]

increases the deviation from the White Paper (WP) Standard Model consensus [3] from \(4.2\,\sigma\) [1] to \(5.0\,\sigma\) [2]. The difference \(\delta_{a}=a_{\mu}^{exp.}-a_{\mu}^{th.}\) is now \(\delta_{a}=24.4\pm 4.5\) in units of \(10^{-10}\), dominated by the uncertainty agreed upon by the WP theory consensus [3]. This departure from theoretical expectations deserves, of course, to be explored as, indeed, the overall pattern reflected by the various model/theoretical approaches is unclear, even contradictory.

The WP Standard Model consensus for \(a_{\mu}^{th.}\) resorts to a data-driven dispersion relation (DR) approach, where the experimental low-energy hadron production cross-sections provide the non-perturbative input to calculate the HVP effects. Fortunately, the problem can be restricted to a precise knowledge of the process \(e^{+}e^{-}\rightarrow\gamma^{*}\rightarrow\) hadrons and, for what concerns the muon \(g-2\), the \(e^{+}e^{-}\rightarrow\pi^{+}\pi^{-}\) channel provides the dominant contribution to the model uncertainty. Regarding its non-perturbative hadronic content, the standard DR based evaluation of the HVP consists in deriving the contribution of _each_ \(e^{+}e^{-}\rightarrow\gamma^{*}\rightarrow\) hadrons annihilation channel by combining the different spectra collected by the different experiments in the considered hadronic channel, by means of algorithms of various levels of sophistication. The full hadronic HVP value is then defined, for what concerns its non-perturbative content, by the sum of these different contributions. The WP Standard Model consensus [3] is based on a combination of two such evaluations [5, 6]. Although the main challenge is then, seemingly, the simple \(\pi\pi\)-production process, the experimental challenge is highly complex, depending on a precise understanding of the detectors and, on the theory side, on the radiative corrections required to disentangle hadronic effects from electromagnetic contamination. Unfortunately, the data samples provided by the different experiments do not exhibit a satisfactory consistency, and some are even in strong contradiction [7] with the others. Using the \(\tau\to\pi^{-}\pi^{0}\nu_{\tau}\) decay information, first proposed by [8], has been considered to discriminate among the \(\pi^{+}\pi^{-}\) spectra, but did not lead to convincing enough conclusions.

It is widely considered that all low-energy hadronic processes derive from QCD even though, in the non-perturbative low-energy regime, tools to make valid predictions of real-time hadronic cross sections are missing. Nevertheless, as hadron physics is accepted to derive from QCD, it follows that _the various specific hadronic decay processes are highly correlated to each other_. It is thus motivated to address these correlations, especially in order to constrain the non-perturbative sector of the \(e^{+}e^{-}\to\gamma^{*}\to\) hadrons annihilations. Although we lack methods to predict a process like \(e^{+}e^{-}\to\pi^{+}\pi^{-}\), we know that QCD implies well-defined symmetry patterns like approximate chiral symmetry, and gives rise to Chiral Perturbation Theory (ChPT), a systematic expansion about the chiral symmetry point. It allows one to work out reliable predictions from first principles for the low energy tail of the QCD hadron spectrum (up to about the \(\eta\) meson mass).
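To make the data-driven strategy just outlined concrete, the sketch below evaluates the standard leading-order HVP dispersion integral, \(a_{\mu}^{HVP-LO}=\frac{\alpha^{2}}{3\pi^{2}}\int_{4m_{\pi}^{2}}^{\infty}\frac{ds}{s}K(s)R(s)\), with the textbook kernel \(K(s)=\int_{0}^{1}dx\,x^{2}(1-x)/[x^{2}+(1-x)s/m_{\mu}^{2}]\); the \(R(s)\) used here is a crude \(\rho\)-like placeholder, not any of the measured spectra or data combinations discussed in this paper.

```python
import numpy as np
from scipy.integrate import quad

alpha, m_mu, m_pi = 1/137.035999, 0.1056584, 0.1395702  # GeV units

def K(s):
    # Leading-order HVP kernel K(s)
    val, _ = quad(lambda x: x**2*(1 - x)/(x**2 + (1 - x)*s/m_mu**2), 0.0, 1.0)
    return val

def R_toy(s):
    # Placeholder for R(s) = sigma(e+e- -> hadrons)/sigma_pointlike: a rho-like bump
    m_rho, g_rho = 0.775, 0.149
    return 7.0*(m_rho*g_rho)**2/((s - m_rho**2)**2 + (m_rho*g_rho)**2)

def a_mu_hvp_lo(s_max=4.0):
    val, _ = quad(lambda s: K(s)/s*R_toy(s), 4*m_pi**2, s_max, limit=200)
    return alpha**2/(3*np.pi**2)*val

print(a_mu_hvp_lo())  # O(5e-8) for this toy R(s); real analyses integrate measured data
```

The point of the exercise is only to show where the experimental input enters: everything except \(R(s)\) is fixed kinematics, so the whole precision issue resides in the measured cross sections.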
With this in mind, an attempt to consider the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) annihilation not only in relation with the \(\tau^{\pm}\to\pi^{\pm}\pi^{0}\nu_{\tau}\) decay _but also with other related spectra_ is important; it motivates a unified modeling1 by a version of the Resonance Lagrangian approach (RLA) - we adopted the Hidden Local Symmetry (HLS) version [9, 10] - needed to extend Chiral Perturbation Theory towards higher energies so as to cover the \(\rho\), \(\omega\) and \(\phi\) energy range2. To practically succeed in such a program, the original HLS model - see for instance [13] for a review - has been supplied with appropriate symmetry breaking mechanisms of various levels of sophistication to derive the earlier versions of the BHLS model [14, 15, 16], and then the more refined BHLS\({}_{2}\) version [17], updated in [18].

Footnote 1: Considering individual channels in isolation, as usually done, does not help much to uncover inconsistencies between different experimental data sets, sometimes involving different final states.

Footnote 2: A precise evaluation of the photon HVP implies a precise account of the energy range \(\sqrt{s}\equiv[2m_{\pi},1.05\ {\rm GeV}]\), the largest contribution of the non-perturbative region which extends up to \(\simeq 2\) GeV as experimentally observed [11, 12].

One thus achieved a simultaneous consistent fit of the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) data from CMD-2 [19], SND [19], KLOE [13, 14, 15], BaBar [16, 17], BESIII [67, 68] and CLEO-c [20] and of the \(\tau\to\pi^{-}\pi^{0}\nu_{\tau}\) decay spectral functions collected by ALEPH [21], CLEO [22] and Belle [23] (see [14, 15, 17, 18]). This updated BHLS\({}_{2}\) fairly recovers the known properties of the \([\pi^{0},\eta,\eta^{\prime}]\) system thanks to its kinetic breaking mechanism [18]. Besides keeping the neutral vector current conserved, this breaking mechanism also generates a violation of the charged vector current conservation and a departure of \(F_{\pi}^{\tau}(s=0)\) from 1 by a few per mil. Such an option finds support in Belle's own fit results reported in Table VII of [23]; additional \(\tau\) spectra are needed to conclude - see the discussion in Section 3 of [18] - as such a breaking mechanism might affect \(\tau\) based predictions for the muon HVP.

Beside the \(\pi^{+}\pi^{-}\) annihilation channel and the \(\tau\rightarrow\pi^{-}\pi^{0}\nu_{\tau}\) decay spectra, BHLS\({}_{2}\) [17, 18] also successfully addressed the \(\pi^{+}\pi^{-}\pi^{0}\), \((\pi^{0}/\eta)\gamma\) and \(K\overline{K}\) final states in the fully correlated way represented by a single Lagrangian. A few additional radiative partial decay widths are also considered, noticeably those for \(\pi^{0}/\eta/\eta^{\prime}\rightarrow\gamma\gamma\), and some more \(VP\gamma\) radiative decays. In view of the significant inconsistencies of the data samples collected by some experiments, the global fit approach has two advantages: first, more data are expected to reduce the uncertainties of the HVP evaluations and, second, it provides consistency checks of each \(e^{+}e^{-}\rightarrow\gamma^{*}\rightarrow\) hadrons data set versus the other samples collected in the same annihilation channel _or in another one_. In the present work, we go a step further by also involving the \(\eta/\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\gamma\) decay modes in order to obtain additional dipion spectra from experiments with systematics quite different from those encountered in \(e^{+}e^{-}\) annihilations.
As will be seen below, these decays allow for a new test of the self-consistency of the DR based estimates of \(a_{\mu}\) : Indeed, the \(\eta/\eta^{\prime}\) decay spectra can provide a DR evaluation of \(a_{\mu}(\pi^{+}\pi^{-},\sqrt{s}<1~{}{\rm GeV})\) which can be fruitfully compared with those derived from directly integrating the \(e^{+}e^{-}\rightarrow\pi^{+}\pi^{-}\) annihilation data. One may expect that the \(\eta/\eta^{\prime}\) dipion spectra benefit from systematics largely independent of those in the \(e^{+}e^{-}\) annihilation.

Beside the DR approach, which gave rise to several evaluations of the muon HVP \(a_{\mu}\) listed in the White Paper [3], the challenging Lattice QCD (LQCD) approach has been used by several groups and produced results with relatively poor precision at the time of the White Paper. They were not used to define the so-called WP Standard Model consensus reported in [3] which, based on some DR estimates, provided the leading order (HVP-LO) consensus \(a_{\mu}^{\rm LO}[{\rm th.}]=693.1(4.0)\times 10^{-10}\). Using the LQCD approach, the BMW Collaboration, which first got [24, 3] \(a_{\mu}^{\rm LO}=(711.1\pm 7.5\pm 17.4)\times 10^{-10}\), later on improved their calculation and got \(a_{\mu}^{\rm LO}=(707.5\pm 5.5)\times 10^{-10}\) [25], at clear variance with the WP consensus just recalled. This evaluation finds support from the new evaluations by other LQCD groups : \(a_{\mu}^{\rm LO}=(720.0\pm 12.4_{\rm stat}\pm 9.9_{\rm syst})\times 10^{-10}\) (Mainz/CLS 19) and \(a_{\mu}^{\rm LO}=(715.4\pm 16.3_{\rm stat}\pm 9.2_{\rm syst})\times 10^{-10}\) (RBC/UKQCD18) [26, 78]. The lattice calculation of \(a_{\mu}^{\rm LO}\) thus brings the SM prediction of \(a_{\mu}\) into an acceptable agreement with the experiment but generates a significant disagreement between the LQCD results and the different data-driven dispersive results; this now looks well established. It moves the former puzzle from data versus predictions to a puzzle between Lattice QCD and the DR approaches, which deserves clarification.

## 2 Introduction

In this article, we focus on the traditional way to estimate the contribution of the non-perturbative energy region to the photon HVP, which relies on dispersive methods using as basic ingredients the \(e^{+}e^{-}\) annihilation cross sections to all the possible exclusive hadronic final states collected up to \(\sqrt{s}\simeq 2\) GeV. The successive broken variants of the HLS model, especially BHLS\({}_{2}\) [17, 18], provide a well-adapted framework to address the most relevant \(e^{+}e^{-}\) annihilations to hadronic channels in the crucial part of the low energy region (\(\sqrt{s}\leq 1.05\) GeV), namely the \(e^{+}e^{-}\) annihilations to the \(\pi^{+}\pi^{-}/K\overline{K}/\pi^{+}\pi^{-}\pi^{0}/\pi^{0}\gamma/\eta\gamma\) final states; these already provide more than 80% of the muon HVP when integrated up to the \(\phi\) meson mass. A BHLS\({}_{2}\) based computer code was used for this analysis; it considered the large number of available data samples (several dozens, amounting to more than 1400 data points), so that, practically, the whole set of available data samples has been exhausted. They have been listed, analyzed and discussed in full detail previously, especially in the recent articles [17, 18], where a large number of previous references can be found3.
This computer code faithfully takes into account the whole uncertainty information provided together with these data samples; therefore, achieving satisfactory global fit probabilities means having simultaneously a satisfactory model, a satisfactory handling of the data samples collected in several physics channels and, also, a satisfactory treatment of their reported uncertainty information.

Footnote 3: The CMD-3 Collaboration has recently published a high statistics measurement of the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) cross section [7] which deserves a specific analysis beyond the scope of the present work, focused on a quite different topic; nevertheless, the information provided by the CMD-3 Collaboration in their article regarding the consistency of their spectrum with the previously collected data samples may indicate that, as it is, their measurement is not consistent with any subset of the relevant existing data samples and thus could hardly be accommodated within a global framework like HLS; so, it should not impact the conclusions of the present work.

In this perspective, given data samples exhibiting contradictory aspects compared to most of the others may lead to either discarding them or, when meaningful, motivating several solutions which avoid mixing up contradictory spectra; this has led us in our previous studies [17, 18] to provide different HVP evaluations based on some of the reported KLOE dipion samples - namely [27, 28, 29] - on the one hand, and separately on their Babar analogs [30, 31] on the other hand. Regarding the various dipion spectra, the so-called KLOE08 data sample [32] and the recently published SND spectrum [33], found by dedicated studies to be in strong contradiction with the bulk of the other considered data samples, have been discarded. Comparing our own evaluations with those based on Dispersion Relations collected in [3], one does not observe any loss in precision with respect to any of the various reported values of the muon \(g-2\); however, differences between central values can be observed, clearly related to the contradictory properties of some data samples, especially KLOE [27, 28] versus Babar [30, 31], reported since a long time [15, 16, 34].

As noted above, the contribution of the listed HLS channels to the HVP is large; it is also worth mentioning that their contribution to the HVP uncertainty is almost negligible compared to that of the rest of the non-perturbative region. Moreover, as the HLS approach implies tight connections between the various annihilation channels, it allows performing stringent consistency checks on the different data samples involving the same physics channel or, also, the other channels addressed by the HLS Lagrangian. It is worthwhile pointing out this important property, specific to global models like BHLS\({}_{2}\), and also stressing that, by far, most of the available data samples fulfill this drastic constraint.

On the other hand, as indicated in the previous Section, the updated BHLS\({}_{2}\) variant [18] of the broken HLS model [17] allows one to fairly address the physics of the [\(\pi^{0},~{}\eta,~{}\eta^{\prime}\)] system within the HLS corpus. Indeed, beside the \(e^{+}e^{-}\rightarrow(\pi^{0}/\eta)\gamma\) annihilations, the PS decays to \(\gamma\gamma\) and the \(VP\gamma\) couplings, the pseudoscalar meson (PS) mixing properties in the octet-singlet [35, 36, 37] and quark flavor [38, 39, 40] basis parametrizations have been analyzed, leading to a satisfactory comparison with expectations.
Among the other processes involving the properties of the [\(\pi^{0},~{}\eta,~{}\eta^{\prime}\)] system, the \(\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\gamma\) decay spectrum deserves special attention. The measurements of this decay process started long ago - as early as 1975 [41] - and several experiments have collected samples of limited statistics [42, 43, 44, 45, 46, 47, 48, 49], motivated by a reported 20 MeV mass shift of the \(\rho\) peak compared to its observed value in the \(e^{+}e^{-}\rightarrow\pi^{+}\pi^{-}\) annihilation. This effect was soon attributed to an interference between the \(\eta^{\prime}\rightarrow\rho\gamma\) (\(\rho\rightarrow\pi^{+}\pi^{-}\)) resonant amplitude and the Wess-Zumino-Witten (WZW) anomalous \(\eta^{\prime}\pi^{+}\pi^{-}\gamma\) contact term [50, 51]; this so-called box anomaly was expected to occur alongside the triangle anomaly responsible for the two-photon decays of the \(\pi^{0},~{}\eta\) and \(\eta^{\prime}\) mesons. A basic HLS approach including this anomalous interaction term beside the dominant \(\eta^{\prime}\rho^{0}\gamma\) coupling [52] confirmed this guess. However, the dipion \(\eta^{\prime}\) spectrum from the BESIII Collaboration [53], published much later, modified the picture thanks to its large statistics (970,000 events) : it led to the conclusion that supplementing the (\(\rho^{0},\omega\)) resonance contributions by only a contact term is insufficient to reach a satisfactory description of the dipion spectrum.

On the other hand, the dipion spectrum observed in the parent \(\eta\rightarrow\pi^{+}\pi^{-}\gamma\) decay has undergone far fewer measurements. Beside former spectra4 from Layter _et al._ [54] and Gormley _et al._ [55], WASA-at-COSY reported a 14,000 event spectrum [56], whereas the KLOE/KLOE2 Collaboration collected a 205,000 event spectrum [57].

Footnote 4: The numerical content of these spectra can only be derived from the paper figures.

As the dipion spectra reported from the recent measurements of the \(\eta/\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\gamma\) decays carry high statistics, it becomes relevant to re-examine whether (and how) they fit within the recently defined BHLS\({}_{2}\) framework of the HLS model, especially thanks to its kinetic breaking (see Appendix A.5) which has already allowed for a satisfactory description of the [\(\pi^{0},~{}\eta,~{}\eta^{\prime}\)] system properties [18]. Moreover, even if the physics of the \(\eta/\eta^{\prime}\) mesons is interesting _per se_, a better understanding of their properties is important, given their important role in the Light-by-Light (LbL) contribution to the muon anomalous magnetic moment.

The layout of the paper is as follows. Section 3 aims at recalling the Kroll conditions [58], which reduce the number of free parameters of the kinetic breaking mechanism from 3 to 1; it also recalls and corrects Lagrangian pieces relevant for the present study. Section 4 is intended to identify the Lagrangian pieces contributing to the considered \(\eta\) and \(\eta^{\prime}\) radiative decays and displays the involved diagrams; the BHLS\({}_{2}\) amplitudes for these are constructed in Section 5 for the \(\eta\to\pi^{+}\pi^{-}\gamma\) decay and in Section 6 for the \(\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) one. The relation between the anomalous HLS amplitudes and their Wess-Zumino-Witten (WZW) [50, 51] analogs is given in Section 7.
The derivation of the dipion mass spectrum in the \(\eta/\eta^{\prime}\) radiative decays is done in Section 8 and the role of the final state interaction mechanism (FSI) in the \(\eta/\eta^{\prime}\) radiative decays is thoroughly examined in Section 9. Section 10 is the central part of the present study. Subsection 10.1 presents exhaustively the available \(\eta/\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) data samples; for this purpose, it is important to note that all the available spectra carry an arbitrary absolute normalization and that accounting for the \(\eta/\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) partial widths implies using also an external piece of (PDG [59]) information. A detailed study of the FSI polynomial degrees is the subject of Subsection 10.2, which reports on the fits performed separately with the \(\eta\) and \(\eta^{\prime}\) spectra to find the appropriate degrees of the requested FSI polynomials. This permits performing the fits of the dipion spectra reported in Subsection 10.3, where it is proved that a unique FSI polynomial can satisfactorily account for both the \(\eta\) and \(\eta^{\prime}\) dipion spectra simultaneously. Subsection 10.4 is devoted to comparing our FSI polynomial results to those reported in the literature. The role of intermediate \(\rho^{\pm}\) exchanges is emphasized in Subsection 10.5. The global BHLS\({}_{2}\) fits performed to simultaneously describe the dipion spectrum lineshapes examined in the previous Subsections and the PDG information for the partial widths \(\Gamma(\eta/\eta^{\prime}\to\pi^{+}\pi^{-}\gamma)\) are worked out in Subsection 10.6. Finally, Section 11 examines the issues relative to the connection between the \(\eta/\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) decays and the hadronic contribution to the muon anomalous magnetic moment \(a_{\mu}\). Section 12 summarizes the conclusions reached by the present study.

In order to ease the reading of the paper, the main pieces of information regarding the HLS model are briefly recalled in Appendix A.1, whereas its symmetry breaking mechanisms are briefly summarized in Appendices A.2 to A.5. An Erratum to the previous broken version of BHLS\({}_{2}\) is the subject of Appendix B. It has also been found appropriate to give the most relevant parts of the non-anomalous and anomalous BHLS\({}_{2}\) pieces under the Kroll Conditions - recalled just below - in Appendices C and D. A brief numerical analysis of some parameter values returned by the fits of the \(\eta/\eta^{\prime}\) dipion spectra is the subject of Appendix E.

## 3 The Kroll Conditions and VPP Lagrangian Pieces

In the FKS approach [38, 39, 40] to the [\(\pi^{0},\ \eta,\ \eta^{\prime}\)] system, it has been found appropriate to impose the Kroll conditions [58] on the axial current matrix elements. Applied to the BHLS\({}_{2}\) axial currents, these conditions :

\[<0|J^{a}_{\mu}|\eta_{b}(p)>=ip_{\mu}f_{a}\delta_{ab}\ \,\ \ |\eta_{b}(p)>=|b\overline{b}(p)>\ \,\ \ \ J^{a}_{\mu}=\overline{a}\gamma_{\mu}\gamma_{5}a\ \,\ \ \ \{a,b=u,d,s\}\ , \tag{1}\]

lead to two non-trivial relations [18] - referred to below as the \(A_{\pm}\) solutions - among the \(\lambda_{i}\) parameters of the generalized 't Hooft term [60, 37] (see Appendix B); one gets :

\[{\rm Solutions}\;\;A_{\pm}\;\Longleftrightarrow\;\;\lambda_{0}=\sqrt{2}\,\lambda_{8}=\pm\sqrt{\frac{3}{2}}\,\lambda_{3}\;, \tag{2}\]

which reduces the actual parameter freedom of the kinetic breaking from three to only one.
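For clarity, Equation (2) can be solved for the two dependent parameters in terms of the single remaining one, chosen here as \(\lambda_{0}\) :

\[\lambda_{8}=\frac{\lambda_{0}}{\sqrt{2}}\;\;,\qquad\lambda_{3}=\pm\sqrt{\frac{2}{3}}\,\lambda_{0}\;\;,\]

so that all three \(\lambda_{i}\)'s of the kinetic breaking are driven by \(\lambda_{0}\) alone, up to the sign choice distinguishing the \(A_{+}\) and \(A_{-}\) solutions.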
One should note that the Kroll Conditions tightly couple the breaking of the original U(3) symmetry of the BHLS\({}_{2}\) Lagrangian down to SU(3)\(\times\)U(1) with a particular Isospin breaking piece (via \(\lambda_{3}\neq 0\)); they also lead to \(F_{\pi}^{\tau}(s=0)=1-\lambda_{3}^{2}/2\). The \(\pm 1\) factor in Equation (2) is propagated below as \(d_{\pm}\); so, \(A_{+}\) corresponds to \(d_{+}\) and \(A_{-}\) to \(d_{-}\). The non-anomalous pieces \({\cal L}_{\eta^{\prime}\pi^{\pm}}\) and \({\cal L}_{\eta\pi^{\pm}}\) of the BHLS\({}_{2}\) Lagrangian acquire simplified expressions compared to [18] :

\[\left\{\begin{array}{ll}{\cal L}_{\pi^{0}\pi^{\pm}}=&\frac{iag}{2}(1+\Sigma_{V})(1-\frac{\lambda_{0}^{2}}{3})\left[\,\rho^{-}\cdot\pi^{+}\stackrel{{\leftrightarrow}}{{\partial}}\pi^{0}-\rho^{+}\cdot\pi^{-}\stackrel{{\leftrightarrow}}{{\partial}}\pi^{0}\right]\\ \\ {\cal L}_{\eta\pi^{\pm}}=&-\frac{iag}{2}\left[1+\Sigma_{V}\right]\left[\epsilon-\frac{A_{\pm}}{2}\sin\delta_{P}\right]\left[\,\rho^{-}\cdot\pi^{+}\stackrel{{\leftrightarrow}}{{\partial}}\eta-\rho^{+}\cdot\pi^{-}\stackrel{{\leftrightarrow}}{{\partial}}\eta\right]\\ \\ {\cal L}_{\eta^{\prime}\pi^{\pm}}=&-\frac{iag}{2}\left[1+\Sigma_{V}\right]\left[\epsilon^{\prime}+\frac{A_{\pm}}{2}\cos\delta_{P}\right]\left[\,\rho^{-}\cdot\pi^{+}\stackrel{{\leftrightarrow}}{{\partial}}\eta^{\prime}-\rho^{+}\cdot\pi^{-}\stackrel{{\leftrightarrow}}{{\partial}}\eta^{\prime}\right]\end{array}\right. \tag{3}\]

where :

\[A_{\pm}=\Delta_{A}+d_{\pm}\lambda_{0}^{2}\;, \tag{4}\]

exhibiting the BKY parameter \(\Delta_{A}\); the angle \(\delta_{P}\) is defined by :

\[\cos\delta_{P}=\frac{1}{\sqrt{3}}\left[\sin\theta_{P}+\sqrt{2}\cos\theta_{P}\right]\;\;,\qquad\sin\delta_{P}=-\frac{1}{\sqrt{3}}\left[\cos\theta_{P}-\sqrt{2}\sin\theta_{P}\right] \tag{5}\]

in terms of \(\theta_{P}\), the third mixing angle [61], which is one of the BHLS\({}_{2}\) fit parameters. It has been shown in [18] that the BKY parameter \(\Sigma_{V}\) can be dropped without any loss of generality. One should note that, while \({\cal L}_{\pi^{0}\pi^{\pm}}\) is leading order, both \({\cal L}_{\eta\pi^{\pm}}\) and \({\cal L}_{\eta^{\prime}\pi^{\pm}}\) are manifestly \({\cal O}(\delta)\), _i.e._ first order in breakings. Finally, it is worthwhile to recall that terms of order \({\cal O}(\delta^{2})\) or higher in amplitudes are discarded.

## 4 The \(\eta/\eta^{\prime}\to\pi^{-}\pi^{+}\gamma\) Decays in the BHLS\({}_{2}\) Framework

The amplitudes for the \(\eta/\eta^{\prime}\to\pi^{-}\pi^{+}\gamma\) decays _a priori_ involve the \(APPP\), \(VPPP\) and \(AVP\) sectors of the full BHLS\({}_{2}\) Lagrangian [17, 18]. The interaction terms involved are displayed in Appendices C and D in terms of the _physical_ pseudoscalar fields and _ideal_ vector fields, which should be replaced by their physical partners following the method developed in [17]. The \(V-\gamma\) transition couplings can be found in Appendix A of [17] and the relevant non-anomalous VPP couplings have been displayed, for convenience, in Section 3 just above.

The classes of diagrams _a priori_ involved in the \(\eta/\eta^{\prime}\) decays to \(\pi^{-}\pi^{+}\gamma\) are displayed in Figure 1. Namely, diagram (a1) illustrates the \(APPP\) interaction, whereas diagram (a2) sketches the \(VPPP\) contributions with \(V-\gamma\) transitions (\(V=\rho^{0},\ \omega,\ \phi\)) provided by the non-anomalous BHLS\({}_{2}\) Lagrangian (Appendix A of [17]). These two kinds of diagrams are generally named box anomaly terms.
Diagram (b1) sketches the diagram class involving \(VVP\) couplings; these diagrams provide the major contribution to the \(\eta/\eta^{\prime}\) dipion spectra. As one assumes \(c_{3}=c_{4}\), following former works [15], all contributions involving \(AVP\) couplings, such as those depicted in diagrams (b2) and (c1), identically vanish. Finally, the (c2) diagram class illustrates the diagrams reflecting the two possible choices for the \(\pi^{\pm}\pi^{\mp}\) pair, each involving an intermediate \(\rho^{\pm}\) exchange. In the following, for the \(\eta\) and \(\eta^{\prime}\) decays, the non-resonant (a1) and (a2) contributions are gathered into the \(T^{NR}\) partial amplitude, whereas the (b1) and (c2) resonant contributions are given by the \(T^{R1}\) and \(T^{R2}\) terms, respectively.

Figure 1: The classes of tree diagrams. \(P\) stands for either of \(\eta\) and \(\eta^{\prime}\); in diagrams (a) and (b), the double lines stand for the neutral vector mesons (subject to mixing); in diagrams (c), the intermediate vector meson is \(\rho^{\pm}\) whereas the external one is neutral. The pions are charged. The vanishing of the \(AVP\) couplings (see text) implies that diagrams (b2) and (c1) do not contribute to the decay amplitudes.

## 5 The \(\eta\to\pi^{+}\pi^{-}\gamma\) Amplitude within BHLS\({}_{2}\)

As three kinds of diagrams contribute, the full amplitude \(T(\eta)\) for the \(\eta\to\pi^{+}\pi^{-}\gamma\) decay is written :

\[T(\eta)=T^{NR}(\eta)+T^{R1}(\eta)+T^{R2}(\eta) \tag{6}\]

and the three pieces include the common tensor object :

\[F=\epsilon^{\mu\nu\alpha\beta}\varepsilon_{\mu}(\gamma,q)q_{\nu}p_{\alpha}^{-}p_{\beta}^{+} \tag{7}\]

typical of the anomalous Lagrangian piece expressions; \(F\) exhibits the obvious momentum notations. This factor is understood in the \(T(\eta/\eta^{\prime})\) amplitude expressions here and below to lighten the writing; it is restored in the final expressions involving the differential decay widths. As already stated, the first term in the expansion (6) gathers the non-resonant (\(APPP/VPPP\)) contributions, whereas the second and third terms collect the resonant contributions of different structure generated via the \(VVP\) Lagrangian and commented on in the Section just above.

The \(T^{NR}(\eta)\) term can be written (\(A_{\pm}=\Delta_{A}+d_{\pm}\lambda_{0}^{2}\)) :

\[T^{NR}(\eta)=-\frac{ie}{4\pi^{2}f_{\pi}^{3}}\left[1-\frac{3c_{3}}{2}\right]g_{\eta\pi^{+}\pi^{-}\gamma}\ \ \ {\rm with}\ \ g_{\eta\pi^{+}\pi^{-}\gamma}=\epsilon+\left\{1-\frac{A_{\pm}}{2}-\frac{3\lambda_{0}^{2}}{4}\right\}\sin\delta_{P}. \tag{8}\]

It is worthwhile noting that **i/** the dependency upon \(c_{1}-c_{2}\) drops out when summing up the \(APPP\) and \(VPPP\) contributions, and **ii/** if one cancels out the symmetry breaking contributions, \(T^{NR}(\eta)\) remains non-zero and corresponds to the Wess-Zumino-Witten (WZW) term [50, 51]. On the other hand, the \(T^{R1}(\eta)\) contribution to the \(T(\eta)\) amplitude can be written (\(m^{2}=ag^{2}f_{\pi}^{2}\)) :

\[\left\{\begin{array}{l}T^{R1}(\eta)=c_{3}\,\frac{iem^{2}}{8\pi^{2}f_{\pi}^{3}}\left[\frac{T_{\rho}^{0}(\eta)}{D_{\rho}(s)}+\frac{T_{\omega}^{0}(\eta)}{D_{\omega}(s)}+\frac{T_{\phi}^{0}(\eta)}{D_{\phi}(s)}\right]\\ T_{\rho}^{0}(\eta)=\epsilon+\frac{2\beta(s)}{z_{A}}\cos\delta_{P}+3\left[1-\frac{3\lambda_{0}^{2}}{4}-\frac{A_{\pm}}{6}+\frac{\alpha(s)}{3}+2\xi_{3}\right]\sin\delta_{P}\\ T_{\phi}^{0}(\eta)=-\left[\frac{2\beta(s)}{z_{A}}\right]\cos\delta_{P}\\ T_{\omega}^{0}(\eta)=-\alpha(s)\sin\delta_{P}\end{array}\right.
\tag{9}\]

where \(D_{\rho}(s)\), \(D_{\omega}(s)\) and \(D_{\phi}(s)\) are the inverse vector meson propagators; they are parametrized as defined in Section 9 of [17]. Equation (9) displays the dependency upon the angles \(\alpha(s)\) and \(\beta(s)\) defining the dynamical vector meson mixing (see Appendix A.4) and upon the parameter defined by the kinetic breaking mechanism (see Appendix A.5), once the Kroll conditions [58] are applied. It is worth remarking that \(\rho^{0}\) provides the only resonant contribution which survives when the symmetry breaking terms are turned off. Moreover, the \(\omega\) and \(\phi\) contributions are outside the phase space actually available in the \(\eta\) decay.

\(T^{R2}(\eta)\), the second resonant contribution, is produced by the _non-anomalous_ \(\rho^{\pm}\eta\pi^{\mp}\) coupling, purely generated by our breaking procedures (see Equations (3)), and by the \(\omega\rho^{\pm}\pi^{\mp}\) term of the \(VV\eta\) Lagrangian piece (see Appendix C.2.2). Setting :

\[s_{\pm 0}=(p_{\pm}+q)^{2}\ \ \,\ \ q={\rm photon\ momentum}\,\]

it reads :

\[\left\{\begin{array}{l}T^{R2}(\eta)=c_{3}\ \frac{iem^{2}}{8\pi^{2}f_{\pi}^{3}}\ T_{\rho}^{\pm}(\eta)\left[\frac{1}{D_{\pm}(s_{+0})}+\frac{1}{D_{\pm}(s_{-0})}\right]\\ T_{\rho}^{\pm}(\eta)=\epsilon-\frac{A_{\pm}}{2}\sin\delta_{P}.\end{array}\right. \tag{10}\]

The \(D_{\pm}(s_{\pm 0})\)'s denote the inverse \(\rho^{\pm}\) propagators; the \(T^{R2}\) contribution, a pure product of symmetry breakings, cancels out when all symmetries are restored. Finally, the three amplitude pieces just defined depend on the HLS parameter \(c_{3}\). At the chiral point

\[s=s_{+0}=s_{-0}=0\,\]

the vector meson inverse propagators fulfill [17] \(D_{V}(0)=-m_{V}^{2}\) with :

\[m_{\rho_{\pm}}^{2}=m^{2}\ \,\ \ \ m_{\rho_{0}}^{2}=m^{2}(1+\xi_{3})^{2}\ \,\ \ m_{\omega}^{2}=m^{2}(1+\xi_{0})^{2}\ \,\ \ m_{\phi}^{2}=m^{2}z_{V}(1+\xi_{0})^{2}\ , \tag{11}\]

where \(m^{2}=ag^{2}f_{\pi}^{2}\), the conditions \(\alpha(0)=\beta(0)=0\) being exactly fulfilled.

## 6 The \(\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) Amplitude within BHLS\({}_{2}\)

The decay process \(\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) undergoes a treatment quite similar to that performed for the \(\eta\to\pi^{+}\pi^{-}\gamma\) decay in the preceding Section; one will thus avoid duplicating for the \(\eta^{\prime}\) amplitude the comments already stated for the \(\eta\) amplitude. The three different kinds of contributions to the \(\eta^{\prime}\) decay amplitude are :

\[T(\eta^{\prime})=T^{NR}(\eta^{\prime})+T^{R1}(\eta^{\prime})+T^{R2}(\eta^{\prime})\ . \tag{12}\]

The first term, which gathers the \(APPP\) and \(VPPP\) contributions to the full amplitude \(T(\eta^{\prime})\), is given by :

\[T^{NR}(\eta^{\prime})=-\frac{ie}{4\pi^{2}f_{\pi}^{3}}\left[1-\frac{3c_{3}}{2}\right]g_{\eta^{\prime}\pi^{+}\pi^{-}\gamma}\ \ \ {\rm with}\ \ \ g_{\eta^{\prime}\pi^{+}\pi^{-}\gamma}=\epsilon^{\prime}-\left\{1-\frac{A_{\pm}}{2}-\frac{3\lambda_{0}^{2}}{4}\right\}\cos\delta_{P} \tag{13}\]

and does not depend on \(c_{1}-c_{2}\).
On the other hand, the contributions gathered in \(T^{R1}(\eta^{\prime})\) are given by :

\[\left\{\begin{array}{l}T^{R1}(\eta^{\prime})=c_{3}\frac{iem^{2}}{8\pi^{2}f_{\pi}^{3}}\left[\frac{T_{\rho}^{0}(\eta^{\prime})}{D_{\rho}(s)}+\frac{T_{\omega}^{0}(\eta^{\prime})}{D_{\omega}(s)}+\frac{T_{\phi}^{0}(\eta^{\prime})}{D_{\phi}(s)}\right]\\ T_{\rho}^{0}(\eta^{\prime})=\epsilon^{\prime}+\frac{2\beta(s)}{z_{A}}\sin\delta_{P}-3\left[1-\frac{3\lambda_{0}^{2}}{4}-\frac{A_{\pm}}{6}+\frac{\alpha(s)}{3}+2\xi_{3}\right]\cos\delta_{P}\\ T_{\phi}^{0}(\eta^{\prime})=-\left[\frac{2\beta(s)}{z_{A}}\right]\sin\delta_{P}\\ T_{\omega}^{0}(\eta^{\prime})=+\alpha(s)\cos\delta_{P}\end{array}\right. \tag{14}\]

where, as for the \(\eta\) decay, only the \(\rho^{0}\) term is \({\cal O}(\delta^{0})={\cal O}(1)\) in breakings. Finally :

\[\left\{\begin{array}{l}T^{R2}(\eta^{\prime})=c_{3}\;\frac{iem^{2}}{8\pi^{2}f_{\pi}^{3}}\;T_{\rho}^{\pm}(\eta^{\prime})\left[\frac{1}{D_{\pm}(s_{+0})}+\frac{1}{D_{\pm}(s_{-0})}\right]\\ T_{\rho}^{\pm}(\eta^{\prime})=\epsilon^{\prime}+\frac{A_{\pm}}{2}\cos\delta_{P}.\end{array}\right. \tag{15}\]

which is purely \({\cal O}(\delta)\). The \(\omega\) contribution in the \(\eta^{\prime}\) decay should be visible in high-statistics data samples (like [53]) and is worth comparing with its lineshape in the \(e^{+}e^{-}\rightarrow\pi^{+}\pi^{-}\) annihilation. Regarding the \(\phi\) contribution, it is somewhat outside the allowed phase space - by \(\simeq 60\) MeV. Finally, the influence of higher vector mesons, especially the first radial excitation \(\rho^{\prime}\), is outside the HLS scope; global fit properties may reveal their actual influence w.r.t. the broken HLS context.

## 7 BHLS\({}_{2}\) and the WZW Box Anomalies

Traditionally, the amplitudes associated with the box anomalies are derived from the Wess-Zumino-Witten (WZW) Lagrangian [50, 51] :

\[{\cal L}_{WZW}=-i\frac{N_{c}e}{3\pi^{2}f_{\pi}^{3}}\;\epsilon^{\mu\nu\alpha\beta}A_{\mu}{\rm Tr}\left[Q\partial_{\nu}P\partial_{\alpha}P\partial_{\beta}P\right]\;, \tag{16}\]

where \(P\) is the bare pseudoscalar meson \(U(3)\) matrix. This Lagrangian differs from the anomalous \(APPP\) Lagrangian piece of the HLS model (see Equation (86)) by the factor

\[\left[1-\frac{3}{4}(c_{1}-c_{2}+c_{4})\right]\;.\]

The BHLS\({}_{2}\) \(\eta/\eta^{\prime}\) decay amplitudes just defined are expected to coincide with their WZW analogs at the chiral point, where the dependencies of the decay amplitudes upon the HLS \(c_{i}\)'s should cancel out. Their expressions at the chiral point (\(s=s_{+0}=s_{-0}=0\)) are given by5 :

Footnote 5: The coupling \(\pi^{0}\pi^{+}\pi^{-}\gamma\) is involved in the \(e^{+}e^{-}\rightarrow\pi^{0}\pi^{+}\pi^{-}\) annihilation [17, 18].

\[\left\{\begin{array}{l}T(\eta)=-\frac{ie}{4\pi^{2}f_{\pi}^{3}}\left[\epsilon+\left\{1-\frac{A_{\pm}}{2}-\frac{3\lambda_{0}^{2}}{4}\right\}\sin\delta_{P}\right]\;,\\ T(\eta^{\prime})=-\frac{ie}{4\pi^{2}f_{\pi}^{3}}\left[\epsilon^{\prime}-\left\{1-\frac{A_{\pm}}{2}-\frac{3\lambda_{0}^{2}}{4}\right\}\cos\delta_{P}\right]\;,\\ T(\pi^{0})=+\frac{ie}{4\pi^{2}f_{\pi}^{3}}\left[\left\{1-\frac{A_{\pm}}{2}-\frac{\lambda_{0}^{2}}{3}\right\}-\epsilon\sin\delta_{P}+\epsilon^{\prime}\cos\delta_{P}\right]\;,\end{array}\right. \tag{17}\]

and coincide with those which can be directly derived from the WZW Lagrangian (16) after applying the breaking procedures recalled in the Appendices.
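This cancellation can be checked explicitly. The short sketch below (the symbol names are ours, not the paper's code) verifies, in the fully unbroken limit (\(\epsilon=\lambda_{0}=\Delta_{A}=\xi_{3}=0\) and \(\alpha(0)=\beta(0)=0\)), that the \(c_{3}\) dependence drops out of \(T^{NR}(\eta)+T^{R1}(\eta)\) at the chiral point, reproducing the \(\eta\) entry of Equation (17):

```python
import sympy as sp

c3, e, f_pi, sdP, m2, D_rho = sp.symbols('c3 e f_pi sin_deltaP m2 D_rho')

# Unbroken limit of Eq. (8): epsilon = lambda_0 = A_pm = 0
T_NR = -sp.I*e/(4*sp.pi**2*f_pi**3) * (1 - sp.Rational(3, 2)*c3) * sdP
# Unbroken limit of Eq. (9): only the rho^0 piece survives, T_rho^0 -> 3*sin(delta_P)
T_R1 = c3 * sp.I*e*m2/(8*sp.pi**2*f_pi**3) * 3*sdP / D_rho

# Chiral point: D_rho(0) = -m^2 (Eq. (11), unbroken masses)
total = sp.simplify((T_NR + T_R1).subs(D_rho, -m2))
print(total)  # -I*e*sin_deltaP/(4*pi**2*f_pi**3): c3 has dropped out, cf. Eq. (17)
```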
## 8 \(\eta/\eta^{\prime}\) Radiative Decays : The BHLS\({}_{2}\) Dipion Mass Spectra

The amplitudes \(T(\eta)\) and \(T(\eta^{\prime})\) allowing to describe - within the full EBHLS\({}_{2}\) framework [17, 18] - the dipion mass spectra observed in the \(\eta/\eta^{\prime}\) radiative decays have been derived in Sections 5 and 6, respectively; both should be multiplied by the function6 \(F(s,s_{0+})\) (see Equation (7)). The differential decay widths can be written :

Footnote 6: The notations \(\epsilon(\gamma,q)\) for the photon polarization vector, \(p^{\pm}\) and \(q\) for the pion and photon momenta are generally understood.

\[\frac{d^{2}\Gamma_{X}}{dsds_{0+}}=\frac{1}{(2\pi)^{3}}\frac{1}{32M_{X}^{3}}|T_{X}\;F(s,s_{0+})|^{2}\;\;\;,\;X=\eta,\eta^{\prime} \tag{18}\]

in terms of \(s\) and \(s_{0+}\), the (\(\pi^{+}\pi^{-}\)) and (\(\pi^{+}\gamma\)) pair invariant masses squared, respectively, of the \(\eta/\eta^{\prime}\) decay products. The accessible invariant mass spectra being functions of \(s\) only, this expression should be integrated over \(s_{0+}\) :

\[\frac{d\Gamma_{X}}{ds}=\frac{1}{(2\pi)^{3}}\frac{1}{32M_{X}^{3}}\int_{s_{min}}^{s_{max}}|T_{X}\;F(s,s_{0+})|^{2}ds_{0+}\;\;\;,\;X=\eta,\eta^{\prime} \tag{19}\]

where :

\[s_{min/max}=\frac{M_{X}^{2}+2m_{\pi}^{2}-s}{2}\mp p_{\pi}\frac{M_{X}^{2}-s}{\sqrt{s}}\;\;\;{\rm and}\;\;p_{\pi}=\frac{\sqrt{s-4m_{\pi}^{2}}}{2}\;\;. \tag{20}\]

Both amplitudes \(T(\eta)\) and \(T(\eta^{\prime})\), generically referred to as \(T_{X}\), can be written :

\[T_{X}(s,s_{0+})=R_{X}(s)+C_{X}G(s,s_{0+})\;\;\;{\rm with}\;\;G(s,s_{0+})=\frac{1}{D_{\rho}(s_{0-})}+\frac{1}{D_{\rho}(s_{0+})}\;\;, \tag{21}\]

having defined \(s_{0\pm}=(q+p^{\pm})^{2}\), related by :

\[s_{0-}-m_{\pi}^{2}=(M_{X}^{2}-s)-(s_{0+}-m_{\pi}^{2})\;\;.\]

\(R_{X}(s)\) collects the contributions previously named \(T^{NR}(X)\) and \(T^{R1}(X)\) and is (by far) the dominant term, whereas7 \(T^{R2}(X)=C_{X}G(s,s_{0+})\) is only \({\cal O}(\delta)\) in breakings.

Footnote 7: \(C_{X}\) can be read off the relevant expressions for \(T^{R2}(X)\) given in Sections 5 and 6.

On the other hand, the \([F(s,s_{0+})]^{2}\) factor in Equation (19) is :

\[[F(s,s_{0+})]^{2}=\frac{s}{4}(s_{0+}-m_{\pi}^{2})(s_{0-}-m_{\pi}^{2})-\frac{m_{\pi}^{2}}{4}(M_{X}^{2}-s)^{2} \tag{22}\]

and can be expressed solely in terms of \(s\) and \(s_{0+}\) to perform the integration shown in Equation (19). This leads to _predefine_ within the fitting code the following integrals :

\[I_{1}(s)=\int_{s_{min}}^{s_{max}}|F(s,s_{0+})|^{2}ds_{0+}\;\;,\qquad I_{2}(s)=\int_{s_{min}}^{s_{max}}|F(s,s_{0+})|^{2}|G(s,s_{0+})|^{2}ds_{0+}\]
\[I_{3}(s)=\int_{s_{min}}^{s_{max}}|F(s,s_{0+})|^{2}{\bf Re}\left[G(s,s_{0+})\right]ds_{0+}\;,\qquad I_{4}(s)=\int_{s_{min}}^{s_{max}}|F(s,s_{0+})|^{2}{\bf Im}\left[G(s,s_{0+})\right]ds_{0+} \tag{23}\]

Actually, \(I_{1}(s)\) can be integrated in closed form :

\[I_{1}(s)=\frac{(M_{X}^{2}-s)^{3}}{3}\frac{p_{\pi}^{3}}{\sqrt{s}} \tag{24}\]

with \(p_{\pi}\) given in Equation (20). The three other functions should be integrated numerically, within the iterative procedure context already running to address the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\pi^{0}\) annihilation data within the BHLS [15] or BHLS\({}_{2}\) [17, 18] frameworks.
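As a sanity check of the closed form (24), the short sketch below (an illustration of ours, not the fitting code) integrates \(|F(s,s_{0+})|^{2}\) of Equation (22) numerically between the boundaries (20) and compares the result with Equation (24); the masses are PDG values in GeV and the test point \(s\) is an arbitrary choice inside the \(\eta\) phase space.

```python
import numpy as np
from scipy.integrate import quad

m_pi, M_eta = 0.1395702, 0.547862  # GeV

def F2(s, s0p, MX):
    # |F(s, s0+)|^2 of Eq. (22); s0- is eliminated via s0- = MX^2 + 2 m_pi^2 - s - s0+
    s0m = MX**2 + 2*m_pi**2 - s - s0p
    return 0.25*s*(s0p - m_pi**2)*(s0m - m_pi**2) - 0.25*m_pi**2*(MX**2 - s)**2

def I1_numeric(s, MX):
    p_pi = 0.5*np.sqrt(s - 4*m_pi**2)
    s_min = 0.5*(MX**2 + 2*m_pi**2 - s) - p_pi*(MX**2 - s)/np.sqrt(s)
    s_max = 0.5*(MX**2 + 2*m_pi**2 - s) + p_pi*(MX**2 - s)/np.sqrt(s)
    val, _ = quad(lambda s0p: F2(s, s0p, MX), s_min, s_max)
    return val

def I1_closed(s, MX):
    p_pi = 0.5*np.sqrt(s - 4*m_pi**2)
    return (MX**2 - s)**3/3.0 * p_pi**3/np.sqrt(s)

s = 0.16  # GeV^2, inside [4 m_pi^2, M_eta^2]
print(I1_numeric(s, M_eta), I1_closed(s, M_eta))  # the two values coincide
```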
Assembling these integrals, one then gets :

\[\frac{d\Gamma_{X}}{ds}=\frac{1}{(2\pi)^{3}}\frac{1}{32M_{X}^{3}}\left[|R_{X}(s)|^{2}\;I_{1}(s)+C_{X}^{2}I_{2}(s)+2C_{X}\left(\mathbf{Re}\left[R_{X}(s)\right]\;I_{3}(s)+\mathbf{Im}\left[R_{X}(s)\right]\;I_{4}(s)\right)\right] \tag{25}\]

In the BHLS\({}_{2}\) approach, only leading order terms \({\cal O}(\delta)\) in the breaking parameters (such as the \(C_{X}\) term) are addressed, and terms of order \({\cal O}(\delta^{2})\) - like the \(C_{X}^{2}\) contribution - can be neglected. The \(I_{1}(s)\) term in Equation (25) can be rewritten, for subsequent use in the text :

\[\frac{d\widetilde{\Gamma}_{X}}{ds}=\Gamma_{0}(s)|R_{X}(s)|^{2}\;,\;\;\;{\rm with}\;\;\Gamma_{0}(s)=\frac{s(M_{X}^{2}-s)^{3}[\sigma_{\pi}(s)]^{3}}{3\cdot 2^{11}\pi^{3}M_{X}^{3}}\;\;\;\;{\rm and}\;\;\sigma_{\pi}(s)=\sqrt{1-\frac{4m_{\pi}^{2}}{s}}\;\;. \tag{26}\]

## 9 Final State Interaction (FSI) in the \(\eta/\eta^{\prime}\) Radiative Decays

The study in [62], also referred to hereafter as SHKMW, has placed a valuable emphasis on the connection between the pion vector form factor \(F_{\pi}(s)\) - as it comes out of the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) annihilation process - and the dipion spectra from the \(\eta/\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) radiative decays. Further works have followed - see, for instance, [63, 64, 65, 66] for further references - generally motivated by a better understanding of the \(\eta\) and \(\eta^{\prime}\) meson properties regarding their contributions to the light-by-light (LbL) fraction of the muon anomalous magnetic moment \(a_{\mu}\).

**i)** It is worthwhile to briefly outline how this connection is established [62]. The pion vector form factor \(F_{\pi}(s)\) and the \(P\)-wave \(\pi^{+}\pi^{-}\) scattering amplitude \(T_{\pi\pi}(s)\) are related by :

\[{\rm Im}\left[F_{\pi}(s)\right]=\sigma_{\pi}(s)\left[T_{\pi\pi}(s)\right]^{*}F_{\pi}(s)\Theta(s-4m_{\pi}^{2})\;, \tag{27}\]

valid over the energy region where the \(\pi^{+}\pi^{-}\) scattering is _elastic_; \(\sigma_{\pi}(s)\) has been defined just above. Therefore, in this energy region, the pion vector form factor \(F_{\pi}(s)\) and the elastic scattering amplitude \(T_{\pi\pi}(s)\) should carry equal phases (Watson's theorem). The Heaviside function indicates that \(F_{\pi}(s)\) is real below the \(2\pi\) threshold; the first _significant_ inelastic channel being \(\omega\pi\), the validity range of Equation (27) practically extends up to \(\simeq 922\) MeV, much above the \(\eta\) mass and only slightly below the \(\eta^{\prime}\) mass (by 36 MeV). Stated otherwise, the phase equality property holds over almost the whole HLS energy range of validity (\(\sqrt{s}\leq 1.05\) GeV).

On the other hand, assuming the \(\pi^{+}\pi^{-}\) scattering to be elastic for all \(s\geq 4m_{\pi}^{2}\), the \(P\)-wave amplitude \(T_{\pi\pi}(s)\) writes :

\[T_{\pi\pi}(s)=\frac{\sin\delta_{11}(s)e^{i\delta_{11}(s)}}{\sigma_{\pi}(s)} \tag{28}\]

in terms of the \(P\)-wave phase shift \(\delta_{11}(s)\), and the solution to Equation (27) can be expressed in terms of the Omnès function \(\Omega(s)\) by :

\[F_{\pi}(s)=K(s)\Omega(s)\ \,\ {\rm where}\ \ \ \Omega(s)=\exp\left(\frac{s}{\pi}\int_{4m_{\pi}^{2}}^{\infty}\frac{dz}{z}\frac{\delta_{11}(z)}{z-s-i\epsilon}\right)\, \tag{29}\]

\(K(s)\) being some appropriate real-analytic function, required to be free of singularities over the physical region \(s\geq 4m_{\pi}^{2}\).
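For illustration, the sketch below evaluates the Omnès integral of Equation (29) on the cut, using the identity \(\Omega(s+i\epsilon)=\exp\left[\frac{s}{\pi}{\rm PV}\!\int_{4m_{\pi}^{2}}^{\infty}\frac{dz}{z}\frac{\delta_{11}(z)}{z-s}\right]e^{i\delta_{11}(s)}\); the \(\rho\)-like toy phase shift and the hard cutoff replacing the asymptotic tail are assumptions of ours, not the BHLS\({}_{2}\) phase discussed below.

```python
import numpy as np
from scipy.integrate import quad

m_pi, m_rho, g_rho = 0.1395702, 0.775, 0.149  # GeV; toy rho parameters

def delta11(z):
    # Toy P-wave phase shift: rho-like Breit-Wigner with a p-wave running width
    p, p0 = np.sqrt(z/4 - m_pi**2), np.sqrt(m_rho**2/4 - m_pi**2)
    width = g_rho*(p/p0)**3*m_rho/np.sqrt(z)
    return np.arctan2(m_rho*width, m_rho**2 - z)  # rises from 0 through pi/2 to pi

def omnes(s, cutoff=100.0):
    s_th = 4*m_pi**2
    f = lambda z: delta11(z)/z
    if s_th < s < cutoff:
        # Principal value via scipy's Cauchy weight, plus the Watson phase
        pv, _ = quad(f, s_th, cutoff, weight='cauchy', wvar=s)
        return np.exp(s/np.pi*pv)*np.exp(1j*delta11(s))
    val, _ = quad(lambda z: f(z)/(z - s), s_th, cutoff)
    return np.exp(s/np.pi*val)

print(abs(omnes(m_rho**2)))  # pronounced rho peak in |Omega(s)|
```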
Equation (29) is intended to factor the non-perturbative contribution to \(F_{\pi}(s)\) into the \(\Omega(s)\) function, so that the remaining part \(K(s)\) is expected to behave smoothly and to be well approximated by a polynomial [62] over our region of interest (up to \(\simeq m_{\phi}\)). It is shown in [63] that a first degree polynomial \(K(s)=1+\alpha_{\Omega}s\) allows one to reach a nice (linear) correlation up to \(s\simeq 1\) GeV\({}^{2}\) between the dipion spectrum from Belle [23] and the \(\Omega(s)\) functions derived from the phase shift data of [67]; a value \(\alpha_{\Omega}\simeq 0.1\) GeV\({}^{-2}\) can be inferred from Figure 1 in [63]. The deterioration of the linear behavior above \(s\simeq m_{\phi}^{2}\) is, actually, not unexpected because of rising inelasticities and of the influence of the high mass vector mesons.

**ii)** Assuming the pion pair emerging from the \(\eta/\eta^{\prime}\) radiative decays is purely Isospin 1 and \(P\)-wave [56, 53], its amplitude should carry the same analytic properties as \(F_{\pi}(s)\), _i.e._ they may only differ by a real-analytic function, free of right-hand side singularities. Reference [62] thus proposes to write the differential dipion spectra :

\[\frac{d\overline{\Gamma}_{X}}{ds}=\Gamma_{0}(s)|A_{X}P_{X}(s)F_{\pi}(s)|^{2}\,\ \ (X=\eta/\eta^{\prime})\, \tag{30}\]

where \(\Gamma_{0}(s)\) has been already defined in Equation (26), the \(A_{X}\)'s being appropriate normalization constants. The \(P_{X}(s)\) functions (\(P_{X}(0)=1\)) are remaining correction factors specific to the \(\eta\) and \(\eta^{\prime}\) radiative decays, which could both be analyzed within the Extended ChPT context [35, 37] (see also [68]) and are free of right-hand side singularities. As just argued regarding the pion form factor and its \(K(s)\) factor, the \(P_{X}(s)\) functions should be satisfactorily approximated by low degree polynomials [62]. This is what is shown by the bottom panel of Figure 1 in [63] which, moreover, indicates that \(P_{\eta}(s)=P_{\eta^{\prime}}(s)\) should likely hold. Of course, procedures to complement this approach by symmetry breaking effects have also to be invoked, prominently - but not only - the \(\rho^{0}-\omega\) mixing for the \(\eta^{\prime}\) decay process.

**iii)** The issue is now to relate \(d\overline{\Gamma}_{X}\) (Equation (30)) and \(d\widetilde{\Gamma}_{X}\) (Equation (26)) within the HLS framework _when no breaking is at work_. Equivalently, this amounts to checking whether the \(R_{X}(s)\)'s and \(F_{\pi}(s)\) (can) carry the same phase in this case. Let us consider the pion vector form factor \(F_{\pi}(s)\) as given in [17], discarding terms of order \({\cal O}(\delta)\) or higher in the breaking parameters; keeping only tree contributions (loop corrections, like the \(\rho^{0}-\gamma\) transition amplitude, are counted as \({\cal O}(\delta)\)) and dropping the \({\cal L}_{p^{4}}\) contributions, one derives (\(m^{2}=ag^{2}f_{\pi}^{2}\), the unbroken \(\rho^{0}\) HK mass) :

\[F_{\pi}(s)=\left(1-\frac{a}{2}\left[1+\frac{m^{2}}{D_{\rho}(s)}\right]\right)+{\cal O}(\delta)\;\;.
\tag{31}\]

Similarly, the \(R_{X}(s)\) functions in Equation (26) reduce to :

\[R_{\eta}=-\frac{ie\sin\delta_{P}}{4\pi^{2}f_{\pi}^{3}}\left(1-\frac{3}{2}c_{3}\left[1+\frac{m^{2}}{D_{\rho}(s)}\right]\right)\;\;,\;\;\;R_{\eta^{\prime}}=+\frac{ie\cos\delta_{P}}{4\pi^{2}f_{\pi}^{3}}\left(1-\frac{3}{2}c_{3}\left[1+\frac{m^{2}}{D_{\rho}(s)}\right]\right) \tag{32}\]

up to terms of \({\cal O}(\delta)\) in breaking parameters. These equations lead us to define a _no-breaking_ reference by requiring :

**1/** The holding of the Vector Meson Dominance assumption, which implies \(a\equiv a_{VMD}=2\) within the generic HLS model [9, 13]. It is worthwhile recalling here (see Section 2 in [18] for details) that the HLS parameter \(a\) is not reachable by fit once the BKY breaking (see Appendix A.2) is at work; indeed, all Lagrangian terms of interest for our physics depend on the product \(a^{\prime}=a(1+\Sigma_{V})\) and not on each of these parameters separately; therefore one can freely fix \(a=2\) and, then, the term \(\delta a=a_{VMD}\Sigma_{V}\) is clearly8 \({\cal O}(\delta)\).

Footnote 8: In the course of the fitting procedure, it is as appropriate to either fit \(a\), fixing \(\Sigma_{V}=0\), or fix \(a\) and fit \(\Sigma_{V}\); we chose the first option.

**2/** The universality of the \(\rho\) phase, which implies that \(R_{\eta}(s)\), \(R_{\eta^{\prime}}(s)\) and \(F_{\pi}(s)\) share the same phase; therefore, it requires the existence of an "unbroken" value for \(c_{3}\). Indeed, imposing \(c_{3}^{ref}=2/3\) beside \(a_{VMD}=2\), one derives a satisfactory no-breaking reference, as one obtains :

\[F_{\pi}(s)=-\frac{m^{2}}{D_{\rho}(s)}\;\;\;,\;\;\;R_{\eta}=+\frac{ie\sin\delta_{P}}{4\pi^{2}f_{\pi}^{3}}\frac{m^{2}}{D_{\rho}(s)}\;\;,\;\;\;R_{\eta^{\prime}}=-\frac{ie\cos\delta_{P}}{4\pi^{2}f_{\pi}^{3}}\frac{m^{2}}{D_{\rho}(s)}\;\;, \tag{33}\]

which should be complemented by \({\cal O}(\delta)\) contributions to account for real data. The issue becomes whether the values returned for \(a\) and \(c_{3}\) from fits to the (real) data differ little enough from \(a_{VMD}\) and \(c_{3}^{ref}\) that their differences can be considered \({\cal O}(\delta)\) effects. For this purpose, one can refer to the latest published BHLS\({}_{2}\) standard fit results collected in Table 10 of [18]; in particular, one finds :

* \(a=1.766\pm 0.001\), which shows a deviation \(\delta a=0.244\) from \(a_{VMD}=2\), corresponding to having \(\Sigma_{V}=0.122\),
* \(c_{3}=0.742\pm 0.003\), which deviates by \(\delta c_{3}=0.076\) from \(c_{3}^{ref}=0.667\),

focusing on the favored solution \(A_{-}\) [18] to the Kroll conditions (see Section 3) - the \(A_{+}\) solution actually provides similar values. Thus, \(\delta a\) and \(\delta c_{3}\) look small enough to be viewed as departures from resp. \(a_{VMD}\) and \(c_{3}^{ref}\) and treated as \({\cal O}(\delta)\) corrections, on the same footing as the manifest breaking parameters. To our knowledge, it is the first time that an identified physics condition can propose a constraint on one of the FKTUY [10] parameters, namely9 \(c_{3}\).

Footnote 9: Actually, another condition comes out from the data in analyses performed within the HLS context : \(c_{3}=c_{4}\).
**iv)** From what has just been argued, it is clear that, within the BHLS\({}_{2}\) context, the \(\eta/\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) decay amplitudes \(T_{X}(s)\) reported in Sections 5 and 6 above can actually be written :

\[T_{X}(s)=B_{X}F_{\pi}(s)+{\cal O}(\delta)\;\;\;,\;\;\;{\rm X}=\eta/\eta^{\prime}\;\;, \tag{34}\]

the \(B_{X}\)'s being definite constants depending on the breaking parameters. \(F_{\pi}(s)\) already contains manifest breaking terms like the \(\omega\) and \(\phi\) signals with, however, different weights from their analogs in the \(T_{X}(s)\) amplitudes10.

Figure 2: The \(\delta_{11}\) phase-shift plotted as a function of \(\sqrt{s}\). Beside the data points of [69, 70], the dashed black curve is the solution to the Roy Equations [71], the green full line shows the phase reconstructed in [72] and the red full line the BHLS\({}_{2}\) phase-shift exhibiting the \(\omega\) and \(\phi\) signals. The black stars show the smeared BHLS\({}_{2}\) spectrum (i.e. the red curve after smearing).

Footnote 10: For instance, BHLS\({}_{2}\) predicts that the coupling ratio \(\omega\pi\pi\) to \(\rho^{0}\pi\pi\) is 3 times smaller in the \(\eta^{\prime}\) radiative decay than in the pion vector form factor.

On the other hand, as shown in [17], while yielding a fair description of the data samples for \(|F_{\pi}(s)|\) (see Figure 2 and Table 3 in [17]), BHLS\({}_{2}\) also leads to a fair account of the phase-shift \(\delta_{11}(s)\) over its whole range of validity _without involving any phase-shift data sample in its derivation_. This is illustrated by11 Figure 2, which reflects the fair accord reached by the BHLS\({}_{2}\) prediction with the phase derived from the Roy Equations [71] or the pion form factor phase of Reference [72] on the one hand, and the experimental phase shift data from [69, 70] on the other hand. Moreover, the same BHLS\({}_{2}\) spectrum smeared over 10 MeV bins - to mimic the Cern-Munich spectrum [69] - (black star symbols) clearly shows that the \(\omega\) and \(\phi\) signals cannot be manifestly observed in the existing data.

Footnote 11: Reprinted from Figure 10 in [17].

All this leads to the conclusion that the SHKMW modification [62] shown in Equation (30) :

\[F_{\pi}(s)\to A_{X}P_{X}(s)F_{\pi}(s)\]

to account for the Final State Interaction (FSI) among the pions emerging from the radiative \(\eta/\eta^{\prime}\) decays also applies in the global BHLS\({}_{2}\) context. In this case, this turns out to perform the change :

\[T_{X}(s)\Longrightarrow H_{X}P_{X}(s)T_{X}(s)\]

when using the amplitudes constructed in Sections 5 and 6. Our notations are connected with those in Reference [62] by writing12 :

Footnote 12: Actually, to be formally exact, Reference [62] writes \(A=A_{0}(1+\delta)\) for the \(\eta\) meson decay, and \(A^{\prime}=A^{\prime}_{0}(1+\delta^{\prime})\) for the \(\eta^{\prime}\) meson, as can be read around their Relations (9).

\[A_{X}=A_{X}^{0}H_{X}\ \,\ \ H_{X}\equiv 1+\delta_{X}\ \,\ \ X=\eta,\eta^{\prime} \tag{35}\]

as the \(A_{X}^{0}\) factors are already accounted for in the \(T_{X}\) amplitudes derived from the BHLS\({}_{2}\) Lagrangian, as shown below. Then, the global character of the BHLS\({}_{2}\) fitting context13 ensures that the non-perturbative effects are suitably accounted for, as reflected by Figure 2. Figure 3 sketches the procedure which will be followed.
Footnote 13: In this case, its Reference set of data samples \({\cal H}_{R}\) [17, 18], which already includes most of the existing pion form factor data samples, will be supplemented with the \(\eta/\eta^{\prime}\) dipion spectra.

From now on, the \(P_{X}(s)\) functions are chosen to be polynomials of the lowest possible degree consistent with a satisfactory fit. As they are beyond the BHLS\({}_{2}\) scope, these functions are introduced within the fit procedure by performing the change :

\[T_{X}(s)\Longrightarrow H_{X}P_{X}(s)T_{X}(s)\ \ \,\ {\rm with}\ \ P_{X}(0)=1\ \,\ X=\eta,\eta^{\prime} \tag{36}\]

in Equation (25) above. Practically, each term on the right-hand side of Equation (25) gets a factor of \(|H_{X}P_{X}(s)|^{2}\), whose coefficients have to be derived by the global fit; the \([C_{X}]^{2}\) term can be discarded as it is manifestly \({\cal O}(\delta^{2})\).

## 10 Fits of the \(\eta/\eta^{\prime}\) Radiative Decay Spectra within BHLS\({}_{2}\)

The reference set of data samples \({\cal H}_{R}\) included within the BHLS\({}_{2}\) framework has been presented several times, recently in [17, 18]; it covers the six \(e^{+}e^{-}\) annihilation channels to \(\pi^{+}\pi^{-}\), \(K^{+}K^{-}\), \(K_{L}K_{S}\), \(\pi^{+}\pi^{-}\pi^{0}\), \(\pi^{0}\gamma\), \(\eta\gamma\), some more decay widths (in particular \(\pi^{0}/\eta/\eta^{\prime}\to\gamma\gamma\)) and, finally, the dipion mass spectrum in the \(\tau\to\pi\pi\nu\) decay. These already represent the largest set of data (altogether 1366 data points) successfully submitted to a global fit, as reflected by Table 9 in [18]; they will not be discussed here any more. It is nevertheless relevant to recall that \({\cal H}_{R}\) encompasses almost all existing samples, except for the recent CMD-3 dipion data, as already argued in footnote 3, and the KLOE08 [32], Babar [30, 31] and recent SND [73] dipion spectra, because of the strong tension they exhibit with respect to the rest of the (more than 60) \({\cal H}_{R}\) samples. This issue has been thoroughly reexamined in [18].

The present study aims at including also the dipion spectra measured in the \(\eta/\eta^{\prime}\) radiative decays within the global BHLS\({}_{2}\) framework. However, it is certainly cautious to avoid using _simultaneously_ the \(\eta/\eta^{\prime}\) dipion spectra and the \(\pi^{+}\pi^{-}\pi^{0}\) annihilation data within global fits as long as a specific study has not reached a clear statement about FSI in the latter channel14 and its data.

Footnote 14: The fit results reported in [17, 18] may as well indicate that FSI effects are small or effectively absorbed in the parameter values returned by the fits. Anyway, this certainly deserves a devoted work [74].

On the other hand, it is worthwhile to stress that all the published dipion spectra of the \(\eta/\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) decays carry an arbitrary normalization; so, it should be kept in mind that they _only provide the spectrum lineshapes measured by the various experiments_.

Figure 3: Diagram sketching the sharing between BHLS\({}_{2}\) and the Final State Interaction process in the \(\eta/\eta^{\prime}\) decays to \(\pi^{+}\pi^{-}\gamma\). In the global fit context, \(T_{X}(s)\), represented by the lower blob, takes care of the non-perturbative effects. The drawing somewhat anticipates the \(P_{X}(s)\) universality.
It follows from this peculiarity that they allow fitting _only_ the \(P_{X}(s)\) polynomials and are totally insensitive to the \(H_{X}\) parameter values; this issue will be addressed by performing global fits where the corresponding partial widths - taken from the Review of Particle Properties (RPP) [59] - are also considered inside the fitting procedure. ### Available Dipion Spectra from the \(\eta/\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\gamma\) Decays Measurements of the dipion spectrum in the \(\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\gamma\) decay started long ago - as early as 1975 [41] - and several experiments have collected samples of various (but low) statistics, motivated by the \(\simeq 20\) MeV shift reported for the \(\rho^{0}\) peak location compared to its value in \(e^{+}e^{-}\rightarrow\pi^{+}\pi^{-}\) annihilations : JADE [42], CELLO [43], TASSO [44], PLUTO [45], TPC-2\(\gamma\) [46], ARGUS [47], Lepton F [48]; the Crystal Barrel Collaboration published by 1997 the most precise spectrum [49], carrying 7400 events. The breakthrough came from the BESIII Collaboration [53], which published a 970,000 event spectrum by 2017. The formerly collected samples have been examined and their behavior is briefly reported below. Dealing with the uncertainty information provided with these \(\eta^{\prime}\) samples is generally straightforward, except for the BESIII dipion spectrum [53], for which a spectrum of the energy resolution is provided. It is accounted for by replacing, within the minimization procedure, the genuine model function value by that of its convolution with the resolution function, assuming the provided resolutions to be the standard deviations of gaussians; the net effect of the BESIII energy resolution information deserves to be shown (see below). The BESIII data [53] are provided as two 112 data point spectra, the former giving the numbers of \(\eta^{\prime}\) event candidates in 10 MeV bins (\(N^{i}_{evt}\)), the latter the estimated numbers of background events (\(N^{i}_{bkg}\)) within the same bins. Our global fitting code has been provided with the \(N^{i}_{signal}=N^{i}_{evt}-N^{i}_{bkg}\) spectrum; we have assumed the original distributions to be poissonian and fully correlated by attributing to \(N^{i}_{signal}\) an uncertainty \(\sigma_{i}=\sqrt{N^{i}_{evt}}+\sqrt{N^{i}_{bkg}}\); it is shown below that these specific assumptions allow a fair treatment of the BESIII spectrum [53]. On the other hand, the dipion spectrum observed in the parent \(\eta\rightarrow\pi^{+}\pi^{-}\gamma\) decay has undergone far fewer measurements. Beside former spectra15 from Layter _et al._ [54] and Gormley _et al._ [55], WASA-at-COSY reported a 14,000 event spectrum [56], whereas the KLOE/KLOE2 Collaboration has collected a 205,000 event spectrum [57]; it should be noted that the WASA dipion spectrum is given with only statistical errors. Footnote 15: The numerical content of these spectra can only be derived from the paper Figures. It is worth stressing again that, the normalization of all these spectra being arbitrary, the theoretical (absolute) distribution scales provided by the BHLS\({}_{2}\) Lagrangian are lost when normalizing to the specific scale of each data set during the fits; stated otherwise, these data samples only allow fitting the \(P_{X}(s)\) functions (\(X=\eta,\eta^{\prime}\)) and _not_ the \(H_{X}\) constants, which cancel out when normalizing the model functions to the experimental spectra. 
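The two numerical treatments just described lend themselves to a compact illustration. The sketch below is a minimal, self-contained rendition of (i) the fully correlated uncertainty assignment \(\sigma_{i}=\sqrt{N^{i}_{evt}}+\sqrt{N^{i}_{bkg}}\) and (ii) the bin-wise gaussian resolution folding; it is not the actual fitting code, and the toy \(\rho\)-like lineshape, bin layout and flat 5 MeV resolution are placeholders of our own.

```python
# Minimal sketch of the two treatments described in the text (placeholders only).
import numpy as np

def signal_with_errors(n_evt, n_bkg):
    """Background-subtracted counts with the fully correlated Poissonian errors."""
    n_evt, n_bkg = np.asarray(n_evt, float), np.asarray(n_bkg, float)
    signal = n_evt - n_bkg
    sigma = np.sqrt(n_evt) + np.sqrt(n_bkg)   # assumption stated in the text
    return signal, sigma

def smear_model(edges, model, resol):
    """Fold a model dGamma/dsqrt(s) with bin-wise Gaussian resolutions.

    edges : bin edges (GeV); model : callable returning the model density;
    resol : per-bin standard deviations (GeV).
    """
    centers = 0.5 * (edges[:-1] + edges[1:])
    grid = np.linspace(edges[0], edges[-1], 2000)   # fine sampling grid
    dens = model(grid)
    smeared = np.empty_like(centers)
    for i, (c, r) in enumerate(zip(centers, resol)):
        kern = np.exp(-0.5 * ((grid - c) / r) ** 2)
        smeared[i] = np.trapz(kern * dens, grid) / np.trapz(kern, grid)
    return smeared

# toy usage with a rho-like lineshape (illustration only, NOT the BHLS2 model)
edges = np.linspace(0.30, 0.95, 66)                 # 10 MeV bins, as for BESIII
bw = lambda m: 1.0 / ((m**2 - 0.775**2) ** 2 + (0.775 * 0.149) ** 2)
print(smear_model(edges, bw, np.full(65, 0.005))[:3])
```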
### \(\eta/\eta^{\prime}\) Experimental Spectra : Fits in Isolation The first exercise is thus to explore the degree issue for the \(P_{X}(s)\) polynomials and, therefore, does not need to deal with the complications due to keeping the constants \(H_{X}\) within the fit procedure. Fits have thus been performed, supplementing the Reference data set of samples \({\cal H}_{R}\) by either of the experimental \(\eta^{\prime}\) or \(\eta\) spectra. In this Section, one only reports on using the \(A_{-}\) BHLS\({}_{2}\) variant16 [18], which will be our working BHLS\({}_{2}\) version. Footnote 16: Nevertheless, the most relevant results obtained using the \(A_{+}\) BHLS\({}_{2}\) variant are summarized in the following Subsections. Regarding the \(P_{\eta^{\prime}}(s)\) polynomial, the results given in the Table just below17 focus on the BESIII \(\eta^{\prime}\) sample (112 data points) [53] only; indeed, because of their statistics, all the other \(\eta^{\prime}\) dipion spectra, including the Crystal Barrel one [49], do not exhibit any clear sensitivity to the \(P_{\eta^{\prime}}\) degree and may easily accommodate \(P_{\eta^{\prime}}\equiv 1\). Footnote 17: The fits which provide these results have been performed with our Reference set amputated from the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\pi^{0}\) annihilation data. The number of BESIII data points and the total number of fitted data points are given by the \(N\) values within parentheses. \[\begin{array}{||c||c|c|c|}\hline P_{\eta^{\prime}}(s)\mbox{ degree}&1&2&3\\ \hline\chi^{2}_{BESIII}\;(N=112)&160&99&98\\ \chi^{2}_{TOT}\;(N=1187)&1167&1097&1096\\ \mbox{Probability}&44.7\%&90.6\%&90.8\%\\ \hline\end{array}\] This clearly shows that, thanks to the statistics reached by the BESIII Collaboration, the first degree for \(P_{\eta^{\prime}}(s)\) can be excluded (\(<\chi^{2}>=1.43\)) and the third degree is obviously useless. Regarding the \(\eta\) data, complementing \({\cal H}_{R}\) with the KLOE/KLOE2 sample (59 data points) [57] alone or together with the WASA one (37 data points) [56], the picture returned by the fits is much less conclusive, as a first degree \(P_{\eta}(s)\) provides18 \(\chi^{2}(KLOE/KLOE2)=55\) and \(\chi^{2}\)(WASA)\(=45\), and a second degree \(P_{\eta}(s)\) yields \(\chi^{2}(KLOE)=51\) and \(\chi^{2}\)(WASA)\(=51\), with similar fit probabilities, both at the 90% level, as just above. The minimal degree has therefore been preferred for \(P_{\eta}(s)\). Footnote 18: Note that \(\chi^{2}\)(WASA) is always overestimated because of an incomplete reported experimental error information. Therefore, in the following, when different, the polynomials \(P_{\eta}(s)\) and \(P_{\eta^{\prime}}(s)\) are definitely chosen of, respectively, first and second degree. The polynomial coefficients returned by the global fits performed with the \(A_{-}\) BHLS\({}_{2}\) variant are discussed below and given in Table 2. It is worth noting that the degradation of the fit quality observed when assuming a first degree \(P_{\eta^{\prime}}(s)\) is essentially carried by the BESIII \(\eta^{\prime}\) data sample itself, with a quite marginal influence on the standard channels of the BHLS\({}_{2}\) framework and on the \(\eta\) dipion spectra. This emphasizes the robustness of the BHLS\({}_{2}\) Lagrangian. 
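The degree test just performed can be mimicked on synthetic data. The following sketch fits nested polynomials \(P(s)\) with \(P(0)=1\) to a pseudo ratio spectrum and compares the resulting \(\chi^{2}\) values; all inputs (number of points, uncertainties, "true" coefficients) are hypothetical stand-ins chosen only to reproduce the logic of the test, not the actual fit.

```python
# Schematic degree test: fit P(s) = 1 + a1*s (+ a2*s^2 + a3*s^3), with P(0)=1
# built into the basis, to a normalized ratio spectrum r_i = data_i / model_i.
import numpy as np

rng = np.random.default_rng(1)
s = np.linspace(0.1, 0.9, 112) ** 2                 # Mandelstam s (GeV^2)
true = 1 + 1.33 * s - 0.55 * s**2                   # quadratic 'truth' (toy)
err = np.full_like(s, 0.02)
ratio = true + rng.normal(0.0, err)                 # pseudo 'data/model' points

for deg in (1, 2, 3):
    A = np.vstack([s**k for k in range(1, deg + 1)]).T   # basis s, s^2, ...
    # weighted least squares for the coefficients a_k
    coef, *_ = np.linalg.lstsq(A / err[:, None], (ratio - 1) / err, rcond=None)
    P = 1 + A @ coef
    chi2 = np.sum(((ratio - P) / err) ** 2)
    print(f"degree {deg}: chi2/N = {chi2:.1f}/{len(s)}")
```

As in the Table above, the first degree returns a strongly degraded \(\chi^{2}\) while the third degree brings no improvement over the second.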
In order to lighten the forthcoming discussion, let us comment right now on the formerly collected \(\eta/\eta^{\prime}\) dipion spectra listed in the Subsection above, which have also been analyzed within the BHLS\({}_{2}\) context; they quite generally yield stable \(\chi^{2}/N\) values. Some of them return large \(\chi^{2}/N\) values from the global fit procedure, namely those from TPC-2\(\gamma\) (69/13), LEPTON-F (45/20) and Layter _et al._ (60/15). Most of these former samples, however, get reasonable \(\chi^{2}/N\) values, typically 8/12 (TASSO), 15/21 (CELLO), 23/18 (PLUTO), 20/15 (ARGUS), 11/17 (CRYSTAL BARREL19), 13/14 (Gormley _et al._), but have a quite negligible impact on the issues examined in the present study. Therefore one focuses on the high statistics data samples from BESIII and KLOE/KLOE2; the case for the WASA data set may nevertheless be commented upon18. Footnote 19: Its data point at 812.5 MeV, soon identified as an outlier, being dropped out; see footnote 21 in [75]. ### The \(\eta/\eta^{\prime}\) Experimental Spectra : Analysis within the BHLS\({}_{2}\) Context Table 1 collects the relevant fit quality information derived when running global fits within the \(A_{-}\) BHLS\({}_{2}\) variant. The first data column gives the fit information in a global fit performed20 by discarding the \(\eta/\eta^{\prime}\) spectra, so as to provide the BHLS\({}_{2}\) reference fit pattern; using the full \({\cal H}_{R}\), one would have found the numbers given in the last data column of Table 9 in [18]. The second and third data columns report on the fits performed by including the \(\eta/\eta^{\prime}\) dipion spectra within the fit data set \({\cal H}_{R}\) under the conditions indicated in the top line of Table 1. Footnote 20: The \(e^{+}e^{-}\to\pi^{+}\pi^{-}\pi^{0}\) annihilation data are switched off. The fit information concerning the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) annihilation data collected in scan mode (with different detectors at the various Novosibirsk facilities) is displayed in the first data line (the exact sample content behind the wording NSK is explained in [18], for instance). The line KLOE stands for the merging of the KLOE10 [27] and KLOE12 [28] data samples. The spacelike pion form factor data merge the NA7 and Fermilab samples [76, 77]. Taking the first data column of Table 1 as reference, one can clearly conclude that the fit quality obtained when using the \(\eta/\eta^{\prime}\) dipion spectra is unchanged and fairly good. Indeed, the \(\chi^{2}\) increase of the NSK set of scan data samples is obviously negligible, and those of the ISR data collected under the name KLOE and of the spacelike data are unchanged. The description of the data samples in the other channels of BHLS\({}_{2}\) (not shown) is also unchanged21. Footnote 21: Their variations are always \(\chi^{2}\) unit fractions. Regarding the Triangle Anomaly sector, the \(\chi^{2}\) information for the \(\pi^{0}/\eta/\eta^{\prime}\to\gamma\gamma\) decays is : \[\left\{\begin{array}{ll}{\rm BHLS}_{2}\ A_{-}\ {\rm variant\ with\ }P_{\eta}(s)\neq P_{\eta^{\prime}}(s):&(\chi^{2}_{\pi^{0}},\chi^{2}_{\eta},\chi^{2}_{\eta^{\prime}})=(1.08,0.01,3.33)\\ &\\ {\rm BHLS}_{2}\ A_{-}\ {\rm variant\ with\ }P_{\eta}(s)\equiv P_{\eta^{\prime}}(s):&(\chi^{2}_{\pi^{0}},\chi^{2}_{\eta},\chi^{2}_{\eta^{\prime}})=(0.73,0.03,4.77)\end{array}\right. \tag{37}\] 
Thus, the RPP width [59] for \(\pi^{0}\to\gamma\gamma\) is reproduced at the \((0.9\div 1)\ \sigma\) level and the one for \(\eta\to\gamma\gamma\) is reconstructed at nearly its RPP value; the width for \(\eta^{\prime}\to\gamma\gamma\) is found in the range \((1.8\div 2.2)\ \sigma\), somewhat larger but still acceptable. On the other hand and more importantly : Comparing the second and third data columns of Table 1 obviously substantiates the SHKMW conjecture [62] about the universality of the \(P_{X}(s)\) function, e.g. \(P_{\eta}(s)\equiv P_{\eta^{\prime}}(s)\). One may also note the slight improvement generated by having stated \(P_{\eta}(s)\equiv P_{\eta^{\prime}}(s)\); this should be due to having provided its curvature to \(P_{\eta}(s)\), which in turn lessens the (already marginal) tension between the KLOE/KLOE2 and BESIII data samples. Before going on with solely using the \(A_{-}\) variant of the BHLS\({}_{2}\) Lagrangian, it is worthwhile reporting on the behavior of its \(A_{+}\) variant. Let us limit ourselves to reporting on the \(A_{+}\) variant best fit performed assuming \(P_{\eta}(s)\equiv P_{\eta^{\prime}}(s)\) of second degree; one obtains \(\chi^{2}/N(BESIII)=110/112\), and the \(\eta\) dipion spectrum from the KLOE/KLOE2 Collaboration yields this ratio at 54/59; for its part, the unfitted WASA sample yields 49/37. The global fit probability is 51.5% only, to be compared to 90.6% for the global fit performed under the \(A_{-}\) variant reported in Table 1. This drop in probability is noticeable and its reason deserves to be identified; indeed, the \(\chi^{2}(BESIII)\) increases by "only" 8 units, whereas the \(\chi^{2}\) for the \(\eta\) dipion spectra are almost unchanged compared to Table 1. Moreover, the usual BHLS\({}_{2}\) channels also benefit from \(\chi^{2}\)'s comparable in magnitude to their \(A_{-}\) analogs. Surprisingly, the single place where the disagreement blows up is in the \(\gamma\gamma\) decays, as : \[(\chi^{2}_{\pi^{0}},\chi^{2}_{\eta},\chi^{2}_{\eta^{\prime}})=(29.92,0.34,0.08)\ \,\] e.g. the \(\pi^{0}\rightarrow\gamma\gamma\) partial width is at more than \(5\sigma\) from its accepted value [59], which is by far too large to be acceptable. Indeed, this implies that the \(A_{+}\) fit central value for the \(\pi^{0}\rightarrow\gamma\gamma\) partial width is reconstructed at 70% of its present RPP value [59]; this should be brought in balance with the \(A_{-}\) variant, which yields this partial width reconstructed 5% larger than the expected value (\(7.8\) eV). \begin{table} \begin{tabular}{||c||c|c|c||} \hline \hline \(\chi^{2}/N_{\rm pts}\) Fit Configuration (\(A_{-}\)) & no \(\eta/\eta^{\prime}\) Spectra & \(P_{\eta}(s)\neq P_{\eta^{\prime}}(s)\) & \(P_{\eta}(s)\equiv P_{\eta^{\prime}}(s)\) \\ \hline \hline NSK \(\pi^{+}\pi^{-}\) (127) & \(137/127\) & 139/127 & 140/127 \\ \hline KLOE \(\pi^{+}\pi^{-}\) (135) & \(141/135\) & 140/135 & 140/135 \\ \hline Spacelike \(\pi^{+}\pi^{-}\) (59) & \(64/59\) & 64/59 & 64/59 \\ \hline \(\eta^{\prime}\) BESIII (112) & \(\times\) & 100/112 & 102/112 \\ \hline \(\eta\) KLOE/KLOE2 (59) & \(\times\) & 57/59 & 55/59 \\ \hline Total \(\chi^{2}/N_{\rm pts}\) & 995/1075 & 1156/1246 & \(1154/1246\) \\ \hline Fit Probability & 88.6 \% & 89.7\% & 90.6\% \\ \hline \hline \end{tabular} \end{table} Table 1: Fit properties of selected dipion data sample sets using the \(A_{-}\) BHLS\({}_{2}\) variant. The fit reported in the first data column is free of \(\eta/\eta^{\prime}\) dipion influence. The second data column corresponds to fitting with independent \(P_{\eta}(s)\) and \(P_{\eta^{\prime}}(s)\), whereas the third data column reports on the fit where \(P_{\eta}(s)\equiv P_{\eta^{\prime}}(s)\) has been imposed. The \(\chi^{2}/N_{\rm pts}\) value for the WASA sample, fitted or not, is in the range [18] \((44-47)\) for 37 data points. 
Therefore, the \(A_{+}\) variant unexpectedly exhibits a strong tension between the Triangle and Box Anomaly sectors of the BHLS\({}_{2}\) Lagrangian, whereas the \(A_{-}\) variant behaves smoothly in both sectors. From now on, one will therefore focus on the \(A_{-}\) variant of BHLS\({}_{2}\), which becomes our Reference model; results derived using the \(A_{+}\) variant are no longer reported _except when explicitly stated_. Regarding the \(\eta\) spectra, Figure 4 shows an almost perfect account of the KLOE/KLOE2 spectrum : the BHLS\({}_{2}\) spectrum matches the dipion spectrum from KLOE/KLOE2 [57] over the whole energy range, except for a marginal issue in the 0.45 GeV energy region. Even if its \(\chi^{2}\) value is acceptable, the WASA spectrum [56] may look somewhat distorted with respect to its KLOE/KLOE2 partner, clearly favored by the BHLS\({}_{2}\) expectations [18]. Figure 4: The dipion invariant mass spectrum in the \(\eta\to\pi^{+}\pi^{-}\gamma\) decay. The blue data points are the KLOE/KLOE2 spectrum, the green ones display the WASA spectrum. The red curve is the BHLS\({}_{2}\) fit leaving free the \(P_{\eta}(s)\) polynomial. Vertical units are arbitrary. Regarding the \(\eta^{\prime}\) spectrum, Figure 5 shows a noticeably fair accord between the BHLS\({}_{2}\) modeling and the BESIII spectrum [53] all along the energy range. The vertical green dotted lines locate the \(\omega\) mass and thus the \(\rho-\omega\) drop-off region, otherwise magnified in the inset. Here, one can observe the effect of convoluting the BHLS\({}_{2}\) model function with the energy resolution gaussians provided by the BESIII Collaboration : it does perfectly what it is supposed to do, _i.e._ soften the drop-off to its right lineshape with, moreover, a noticeable accuracy. On the rest of the spectrum, the convoluted curve and the underlying model one superimpose on each other within the thickness of the curves. One should also note that no tension in the \(\rho-\omega\) drop-off region is observed in the fits with any of the other dipion spectra submitted to the fit. Figure 5: The dipion invariant mass spectrum in the \(\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\gamma\) decay. The blue data points are the BESIII spectrum, the green ones are those from Crystal Barrel. The red curve is the fit function, _i.e._ the convolution of the BHLS\({}_{2}\) model function with the energy resolution function assumed gaussian; the blue curve is the underlying BHLS\({}_{2}\) model function itself. Both curves superimpose over the whole energy range except for the \(\rho-\omega\) drop-off region. Vertical units are arbitrary. It is useful to consider the spectra22 : Footnote 22: It is, of course, understood that, when dealing with the BESIII \(\eta^{\prime}\) dipion sample, \(d\Gamma_{theor}(s)/d\sqrt{s}\) is, actually, the convolution product of the model function with the BESIII energy resolution function. 
\[\overline{P}_{X}(s)=\left[\frac{d\Gamma_{exp}(s)}{d\sqrt{s}}\Big{/}\frac{d\Gamma_{theor}(s)}{d\sqrt{s}}\right]_{X}\ P_{X}(s)\ \,\ \ X=\eta,\eta^{\prime} \tag{38}\] to illustrate the behavior of the \(P_{X}(s)\) polynomials under the two assumptions discussed above. As the bracketed term in Equation (38) fluctuates around 1 and reflects the experimental uncertainty spectrum, the \(\overline{P}_{X}(s)\) spectrum is an appropriate experimentally based evaluation of its corresponding model function \(P_{X}(s)\). Figure 6 displays the \(\overline{P}_{\eta^{\prime}}(s)\) and \(\overline{P}_{\eta}(s)\) spectra defined just above in the case of the BESIII, KLOE/KLOE2 and WASA spectra, together with their model partners \(P_{\eta^{\prime}}(s)\) (second degree) and \(P_{\eta}(s)\) (first degree). As could be inferred from the fit properties shown in Table 1, \(P_{\eta^{\prime}}(s)\) (the red dashed curve in the inset) is also a good evaluation for \(\overline{P}_{\eta}(s)\). Figure 7 also displays the \(\overline{P}_{\eta^{\prime}}(s)\) and \(\overline{P}_{\eta}(s)\) spectra for the BESIII, KLOE/KLOE2 and WASA data samples, but together with their common model fit function, denoted \(P_{X}(s)\), a second degree polynomial. As reflected by the fit information recalled in the body of the Figure, one has reached a fair simultaneous parametrization of the \(\eta\) and \(\eta^{\prime}\) dipion spectra by only supplying the BHLS\({}_{2}\) model amplitudes with a single second degree polynomial \(P_{X}(s)\) fulfilling \(P_{X}(0)=1\). Figure 6: The \(\overline{P}_{\eta^{\prime}}(s)\) and, in the inset, the \(\overline{P}_{\eta}(s)\) spectra (Equation (38)). The full red curve and full black curve superimposed on resp. \(\overline{P}_{\eta^{\prime}}(s)\) and \(\overline{P}_{\eta}(s)\) are resp. the \(P_{\eta^{\prime}}(s)\) and \(P_{\eta}(s)\) polynomials returned by the fits. The dashed red curve in the inset is also \(P_{\eta^{\prime}}(s)\), but superimposed on the \(\overline{P}_{\eta}(s)\) spectrum. Some pieces of fit information are also displayed. Figure 7: The \(\overline{P}_{\eta^{\prime}}(s)\) and, in the inset, the \(\overline{P}_{\eta}(s)\) spectra (Equation (38)). The full red curve superimposed on the \(\overline{P}_{\eta^{\prime}}(s)\) and, in the inset, the \(\overline{P}_{\eta}(s)\) spectra is their common fit function \(P_{X}(s)\). The \(\omega\) pole location is indicated. Some pieces of fit information are also displayed. ### Final State Interaction: BHLS\({}_{2}\) Fit Results versus Others The top bunch in Table 2 displays the values returned for the polynomial coefficients of : \[P_{\eta}(s)=1+\alpha_{1}s\ \ {\rm and}\ \ P_{\eta^{\prime}}(s)=1+\alpha^{\prime}_{1}s+\alpha^{\prime}_{2}s^{2}\ . \tag{39}\] When using the same polynomial for the \(\eta\) and \(\eta^{\prime}\) spectra, it is of second degree and denoted \(P_{X}(s)\). It should be noted that the coefficients for \(P_{\eta^{\prime}}(s)\) (second data column) and \(P_{X}(s)\) (third data column) carry numerical values close to each other, _i.e._ at \(\simeq 1\ \sigma\) from each other for both the first and second degree coefficients23. In the case of a (single) common FSI function \(P_{X}(s)\), the covariance is \(<\delta\alpha^{\prime}_{1}\ \delta\alpha^{\prime}_{2}>=-0.746\). Footnote 23: It might be useful to provide, for completeness, the covariances when \(P_{\eta}(s)\neq P_{\eta^{\prime}}(s)\) : Using obvious notations, they are \(<\delta\alpha_{1}\ \delta\alpha^{\prime}_{1}>=-0.005\), \(<\delta\alpha_{1}\ \delta\alpha^{\prime}_{2}>=-0.026\) and \(<\delta\alpha^{\prime}_{1}\ \delta\alpha^{\prime}_{2}>=-0.812\). Regarding the systematics : In the BHLS\({}_{2}\) approach, the statistical and systematic uncertainties provided by the experiments together with their spectra are carefully embodied within the fitting code without any modification; so our reported uncertainties automatically merge both kinds of experimental errors. On the other hand, the last two data lines in Table 2 clearly illustrate that \(\delta a=a-2\) and \(\delta c_{3}=c_{3}-2/3\) remain consistent with expectations, _i.e._ they can be regarded as \({\cal O}(\delta)\) breaking parameters. The other fit parameter values are given in Table 5 displayed in Appendix E; they are scrutinized in order to detect some hint regarding the FSI effects in the \(3\pi\) channel of the BHLS\({}_{2}\) model - where they are not implemented by now. * **j/** Regarding the \(P_{\eta}(s)\) FSI polynomial, it is worth comparing our numerical value for \(\alpha_{1}\) with those available in the literature. The first published evaluation (GeV\({}^{-2}\)) of \(\alpha_{1}\) is the one from the WASA-at-COSY Collaboration, \(\alpha_{1}=1.89\pm 0.25_{stat}\pm 0.59_{syst}\pm 0.02_{th}\) [56], soon followed by \(\alpha_{1}=1.96\pm 0.27_{fit}\pm 0.02_{F_{\pi}}\) [62]; more precise evaluations have been proposed24 since (GeV\({}^{-2}\)) : Footnote 24: Introducing a possible \(a_{2}\) exchange, Reference [64] also reports a smaller value (\(\alpha_{1}=1.42\pm 0.06_{stat}\)). \[\alpha_{1}=1.32\pm 0.08_{stat}\pm 0.10_{syst}\pm 0.02_{th}\ [57],\ \ \alpha_{1}=1.52\pm 0.06_{stat}\ \ [53]\ \. \tag{40}\] Our own evaluation - reported in Table 2 - is in good agreement (\(\simeq 1\sigma\)) with the KLOE/KLOE2 Collaboration result [57]. * **jj/** As far as we know, there are only two evaluations for the \(P_{\eta^{\prime}}(s)\) coefficients available in the literature, the former from the BESIII Collaboration [53] : \[\left\{\begin{array}{l}\alpha^{\prime}_{1}(\ {\rm GeV}^{-2})=\ \ 0.992\pm 0.039_{stat}\pm 0.067_{syst}\pm 0.163_{th}\\ \alpha^{\prime}_{2}(\ {\rm GeV}^{-4})=-0.523\pm 0.039_{stat}\pm 0.066_{syst}\pm 0.181_{th}\end{array}\right\}, \tag{41}\] the latter from the HHHK group [66]. 
Actually, their Tables 2 and 3 propose slightly different pairs of values with, seemingly, a preference for the latter : \[{\rm HHHK}\ :\left\{\begin{array}{l}\alpha^{\prime}_{1}=\ \ 0.523\pm 0.046\ \ {\rm GeV}^{-2}\ \,\ \ \alpha^{\prime}_{2}=-0.138\pm 0.046\ \ {\rm GeV}^{-4}\end{array}\right\}. \tag{42}\] \begin{table} \begin{tabular}{||c||c|c||c|c||} \hline \hline Fit Parameter Value & no \(\eta/\eta^{\prime}\) & \(P_{\eta}(s)\neq P_{\eta^{\prime}}(s)\) & \(\left[A_{-}\right]\,:P_{X}(s)\) & \(\left[A_{+}\right]\,:P_{X}(s)\) \\ \hline \hline \(\alpha^{\prime}_{1}\) (GeV\({}^{-2}\)) & \(\times\) & \(1.388\pm 0.072\) & \(1.326\pm 0.053\) & \(0.953\pm 0.065\) \\ \hline \(\alpha^{\prime}_{2}\) (GeV\({}^{-4}\)) & \(\times\) & \(-0.607\pm 0.055\) & \(-0.553\pm 0.048\) & \(-0.511\pm 0.052\) \\ \hline \(\alpha_{1}\) (GeV\({}^{-2}\)) & \(\times\) & \(1.169\pm 0.063\) & \(\times\) & \(\times\) \\ \hline \hline \(a_{HLS}\) & \(1.789\pm 0.001\) & \(1.842\pm 0.001\) & \(1.821\pm 0.001\) & \(1.830\pm 0.001\) \\ \hline \((c_{3}+c_{4})/2\) & \(0.756\pm 0.005\) & \(0.773\pm 0.005\) & \(0.772\pm 0.004\) & \(0.819\pm 0.007\) \\ \hline \hline Fit Probability & 88.6 \% & 89.7\% & 90.6\% & 51.4\% \\ \hline \hline \end{tabular} \end{table} Table 2: The FSI parameter values from the \(A_{-}\) BHLS\({}_{2}\) variant fit. The first data column reports on the fit performed by submitting the usual set of data samples \({\cal H}_{R}\) to fits, excluding the \(e^{+}e^{-}\) annihilation to \(3\pi\) data. The second and third data columns report on the fits performed on the same amputated \({\cal H}_{R}\) sample set, completed with the \(\eta/\eta^{\prime}\) dipion spectra under the conditions indicated in the top line of the Table (\(P_{X}(s)=P_{\eta}(s)\equiv P_{\eta^{\prime}}(s)\)). The fair probability values can be emphasized. The last data column displays the fit results when using the \(A_{+}\) variant. Here one is faced with a surprising pattern : While the BESIII parametrization for \(P_{X}(s)\) is far from the favored \(A_{-}\) variant one reported in Table 2, it is in quite remarkable accord with the \(A_{+}\) solution displayed in the last data column of Table 2; as BESIII does not deal with the intrinsic relationship between the Box and the Triangle Anomalies, their modelling is not influenced by the \(\pi^{0}\to\gamma\gamma\) partial width issue identified in Subsection 10.3 just above. On the other hand, the HHHK parametrization displayed in Expressions (42) is clearly at variance with both parametrizations displayed in Table 2. As a matter of conclusion, within the BHLS\({}_{2}\) framework, it has been shown that the conjecture \(P_{\eta^{\prime}}(s)=P_{\eta}(s)\) is a valid statement at the (high) degree of precision permitted by the spectra from the BESIII and KLOE/KLOE2 Collaborations. Moreover, Table 1 exhibits fair fit probabilities and does not reveal any noticeable tension between the dipion spectra from KLOE/KLOE2 and BESIII on the one hand and, on the other hand, the other channels embodied within the BHLS\({}_{2}\) fit procedure and their data, especially the dipion spectra collected in \(e^{+}e^{-}\) annihilations25. Footnote 25: It should be recalled that the KLOE08 [32], Babar [30, 31] and SND [73] dipion spectra have been discarded because of their strong tension with the rest of the \({\cal H}_{R}\) set of samples; one can refer to the analysis in [18] for more information. 
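For readers wishing to propagate the Table 2 uncertainties onto \(P_{X}(s)\) itself, the following sketch evaluates the \(A_{-}\) central polynomial with a 1\(\sigma\) band. Since a true covariance of \(-0.746\) would be inconsistent with the quoted parameter errors, that value is treated below as a correlation coefficient; this interpretation is our assumption.

```python
# Propagate the Table 2 (A_- variant) fit errors onto P_X(s) = 1 + a1*s + a2*s^2.
import numpy as np

a1, s1 = 1.326, 0.053          # GeV^-2 (central value, error)
a2, s2 = -0.553, 0.048         # GeV^-4
rho = -0.746                   # assumed correlation between a1' and a2'
cov = np.array([[s1**2, rho * s1 * s2],
                [rho * s1 * s2, s2**2]])

def P(sv):                     # central FSI polynomial, P(0) = 1
    return 1 + a1 * sv + a2 * sv**2

def sigma_P(sv):               # 1-sigma band by linear error propagation
    J = np.array([sv, sv**2])  # (dP/da1', dP/da2')
    return float(np.sqrt(J @ cov @ J))

for m in (0.4, 0.6, 0.775, 0.95):          # invariant masses in GeV
    sv = m**2
    print(f"sqrt(s)={m:.3f} GeV : P = {P(sv):.3f} +- {sigma_P(sv):.3f}")
```

The strong anticorrelation visibly tempers the band: the linear and quadratic errors largely cancel over most of the physical range.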
### The \(T^{R2}(\eta/\eta^{\prime})\) Terms in BHLS\({}_{2}\) : The Role of \(\rho^{\pm}\) Exchanges Thanks to the breaking mechanisms [17, 18] which lead to the BHLS\({}_{2}\) Lagrangian, the derived \(\eta/\eta^{\prime}\) decay amplitudes involve \(\rho^{\pm}\) exchanges, as depicted in Figure 1 by the diagram classes (c1) and (c2). Relying on previous works in the HLS context which have shown that \(c_{3}=c_{4}\) is fairly well accepted by the data, this constraint is assumed; as a straightforward consequence [9, 13], all diagrams involving direct \(AVP\) couplings - all proportional to (\(c_{3}-c_{4}\)) - identically vanish and, therefore, so do the diagram class (c1) contributions. Nevertheless, the (c2) diagram class, also \({\cal O}(\delta)\) in breakings, survives and contributes to the decay amplitudes \(T_{\eta^{\prime}}\) and \(T_{\eta}\) at \({\cal O}(\delta)\). Such contributions are not involved in the BHLS\({}_{2}\) pion form factor \(F_{\pi}(s)\) expression [17]; they come naturally in the derivation of the amplitudes \(T(\eta/\eta^{\prime})\) and are not governed by an additional _ad hoc_ parameter. Even though they are \({\cal O}(\delta)\) corrections, the \(T^{R2}(\eta/\eta^{\prime})\) amplitudes play a noticeable role within the BHLS\({}_{2}\) context : * **i/** They are necessary in order for the full amplitudes \(T(\eta/\eta^{\prime})=T^{NR}(\eta/\eta^{\prime})+T^{R1}(\eta/\eta^{\prime})+T^{R2}(\eta/\eta^{\prime})\) to coincide with their analogs directly derived from the WZW Lagrangian [50, 51] at the chiral point26 \(s=s_{0+}=s_{0-}=0\). Indeed, at the chiral point, the intensities \(T^{\pm}(\eta/\eta^{\prime})\) of the \(T^{R2}(\eta/\eta^{\prime})\) amplitudes defined in Sections 5 and 6 write : \[T^{R2}(\eta)=-\frac{iec_{3}}{4\pi^{2}f_{\pi}^{3}}\left[\epsilon-\frac{A_{\pm}}{2}\sin\delta_{P}\right]\ \ \ \ {\rm and}\ \ \ T^{R2}(\eta^{\prime})=-\frac{iec_{3}}{4\pi^{2}f_{\pi}^{3}}\left[\epsilon^{\prime}+\frac{A_{\pm}}{2}\cos\delta_{P}\right] \tag{43}\] and manifestly depend on the FKTUY parameter [10] \(c_{3}\). The condition for the amplitudes \(T(\eta^{\prime})\) and \(T(\eta)\) to coincide with those derived from the WZW Lagrangian (see Equations (17)) is that all dependencies upon the FKTUY parameters vanish at \(s=s_{0+}=s_{0-}=0\); this condition cannot be fulfilled if the \(T^{R2}(\eta/\eta^{\prime})\) terms are (artificially) dropped from the full amplitude expressions \(T(\eta/\eta^{\prime})\). * **ii/** To identify the effects of the \(T^{R2}(\eta/\eta^{\prime})\) terms, fits have been performed by discarding them from the full amplitudes, fitting instead with \(T(\eta/\eta^{\prime})=T^{NR}(\eta/\eta^{\prime})+T^{R1}(\eta/\eta^{\prime})\). The fits have been performed by imposing the constraint \(P_{\eta}(s)=P_{\eta^{\prime}}(s)\) and return the results collected in the next Table. The \(\chi^{2}\) values indicate that \(T^{R2}(\eta)\) can be safely neglected, but also that discarding \(T^{R2}(\eta^{\prime})\) is not safe. The \(P_{X}(s)\) parametrization returned by the fit is : \[\left\{{\rm A}_{-}/{\rm no}\ T^{R2}\ :\ \alpha^{\prime}_{1}=\ \ 0.437\pm 0.039\ \ {\rm GeV}^{-2}\,\ \ \alpha^{\prime}_{2}=-0.573\pm 0.007\ \ {\rm GeV}^{-4}\ \right\}, \tag{44}\] closer to the HHHK results [66] recalled in Expressions (42) than to those in Table 2. 
Therefore, it is clear from the results collected in Table 2 and the other results presented that : **1/** the \(\eta\) dipion spectrum is essentially insensitive to using or discarding the \(T^{R2}\) term in its parametrization, whereas **2/** the \(\eta^{\prime}\) dipion spectrum parametrization is significantly degraded if its \(T^{R2}\) component is dropped. This absence may explain the reported failure of the so-called "model-dependent" fit in [53]. As a summary, one may conclude that, once the FSI effects and the \({\cal O}(\delta)\) \(T^{R2}\) contribution predicted by the kinetic breaking of BHLS\({}_{2}\) [18] are considered, the average \(\chi^{2}\) per data point for the \(\eta/\eta^{\prime}\) dipion spectra can be considered optimum (\(<\chi^{2}>\simeq 1\)). Thus, at the level of precision permitted by the presently available \(\eta\) [57] and \(\eta^{\prime}\) [53] dipion spectra, additional contributions beyond those of the basic vector meson nonet - like the higher mass vector mesons [53] or the \(a_{2}(1320)\) exchanges [64] - need not be invoked. ### Dealing with the Absolute Scale of the \(\eta/\eta^{\prime}\) Dipion Spectra Having determined the \(\eta/\eta^{\prime}\) dipion spectrum lineshapes by fitting their common FSI factor \(P_{X}(s)\) (\(X=\eta/\eta^{\prime}\)), it remains to derive the values of the \(H_{X}\)'s (\(X=\eta/\eta^{\prime}\)) to also fix their absolute magnitudes. As already noted, the values of the \(H_{X}\) constants can be derived by introducing the accepted values [59] for the \(\Gamma(\eta/\eta^{\prime}\to\pi^{+}\pi^{-}\gamma)\) partial widths into the fitting procedure. This can be (and has been) done, and global fits have been performed in order to get the optimum values for the \(\{H_{\eta},\;H_{\eta^{\prime}},\;P_{X}(s)\}\) triplets. However, regarding the \(\eta/\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) decays, each of the published dipion spectra is solely given by its lineshape; their normalizations are tightly related to the corresponding partial widths. It happens that the single available "measurement" for each of these decays is the corresponding RPP piece of information [59]. 
In this case, as just argued, the values for \(H_{X}\) (\(X=\eta/\eta^{\prime}\)) can be derived through the fitting code, appropriately modified to take the partial widths into account, but also algebraically, once the fit determining the \(P_{X}(s)\) (\(X=\eta/\eta^{\prime}\)) function has been performed. In this case one has, using obvious notations : \[\left[\Gamma(\eta/\eta^{\prime}\to\pi^{+}\pi^{-}\gamma)\right]_{RPP}\equiv\int\left[\frac{d\Gamma_{X}(s)}{d\sqrt{s}}\right]_{exp.}d\sqrt{s}=H_{X}^{2}\int\left[\frac{d\Gamma_{X}(s)}{d\sqrt{s}}\right]_{BHLS2}[P_{X}(s)]^{2}d\sqrt{s}\ \, \tag{45}\] the integrals being performed over the whole energy range of the \(X=\eta/\eta^{\prime}\) decays; the fit values for the \(\Gamma(\eta/\eta^{\prime}\to\pi^{+}\pi^{-}\gamma)\) partial widths then coincide with the RPP pieces of information (a numerical sketch of this extraction is given below). Two cases have been considered regarding the specific \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) annihilation sample combinations involved; the first one is \(\{{\cal H}_{R}+\eta/\eta^{\prime}\}\), which corresponds to global fitting with the {KLOE, NSK, BESIII, CLEO-c} combination. Correspondingly, the second case involves the {BaBar, NSK, BESIII, CLEO-c} sample combination. The relevant fit results regarding FSI are summarized in Table 3. \begin{table} \begin{tabular}{||c||c|c||c|c||} \hline \hline Fit Parameter & NSK+KLOE & NSK+KLOE & NSK+BaBar & NSK+BaBar \\ & fit \(P_{X}(s)\) only & fit \(P_{X}(s)\) \& \(H_{X}\) & fit \(P_{X}(s)\) only & fit \(P_{X}(s)\) \& \(H_{X}\) \\ \hline \hline \(H_{\eta}\) & \(\times\) & \(0.789\pm 0.017\) & \(\times\) & \(0.797\pm 0.017\) \\ \hline \(H_{\eta^{\prime}}\) & \(\times\) & \(0.671\pm 0.017\) & \(\times\) & \(0.682\pm 0.015\) \\ \hline \(\alpha^{\prime}_{1}\) (GeV\({}^{-2}\)) & \(1.326\pm 0.053\) & \(1.309\pm 0.055\) & \(1.248\pm 0.058\) & \(1.241\pm 0.041\) \\ \hline \(\alpha^{\prime}_{2}\) (GeV\({}^{-4}\)) & \(-0.553\pm 0.048\) & \(-0.562\pm 0.047\) & \(-0.535\pm 0.048\) & \(-0.560\pm 0.037\) \\ \hline \hline \(10^{10}\times a_{\mu}(\pi\pi)\) & \(490.09\pm 0.89\) & \(490.15\pm 0.89\) & \(494.98\pm 0.91\) & \(494.85\pm 0.88\) \\ \hline \hline \((\chi^{2}/N)_{BESIII}\) & 102/112 & 99/112 & 101/112 & 99/112 \\ \hline \((\chi^{2}/N)_{KLOE/KLOE2}\) & 55/59 & 53/59 & 55/59 & 53/59 \\ \hline \((\chi^{2}/N)_{TOTAL}\) & 1154/1246 & 1149/1248 & 1346/1381 & 1341/1383 \\ \hline Fit Probability & 90.6 \% & 92.3\% & 55.9\% & 59.4\% \\ \hline \hline \end{tabular} \end{table} Table 3: Main global fit results involving the KLOE+NSK and BaBar+NSK samples collected in \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) annihilations. On top are displayed the parameters involved in the FSI functions (see text for details), followed by the contribution to \(a_{\mu}(\pi\pi)\) of the \([2m_{\pi},1.0\ {\rm GeV}]\) energy range. The lowest bunch provides statistical information relative to the corresponding global fits. The average \(\chi^{2}\) per point of the \(\eta\) and \(\eta^{\prime}\) dipion spectra are clearly insensitive to using either the KLOE or the BaBar \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) annihilation data within the global fit procedure. The global fit probabilities are instead quite different and correspond to our previous BHLS\({}_{2}\) results [17, 18]. This insensitivity to the KLOE versus BaBar issue is well reflected by the fit results collected in the top part of Table 3 : none of the \(P_{X}\) and \(H_{X}\) parameter central values is observed to differ by more than \(1\sigma\) in the various fit configurations. 
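The algebraic extraction of Equation (45) reduces to an integral ratio once \(P_{X}(s)\) is frozen. The sketch below illustrates only its structure: the BHLS\({}_{2}\) differential width is replaced by a toy \(\rho\)-like lineshape and the "RPP" width by a self-consistent toy value, so that \(H_{X}\simeq 1\) comes out by construction; none of the numerical ingredients are the actual BHLS\({}_{2}\) ones.

```python
# Structure of the Eq. (45) extraction: H_X^2 = Gamma_RPP / Int[dGamma * P^2].
import numpy as np

def H_from_width(gamma_rpp, dgamma_bhls2, P, m_lo, m_hi, n=4000):
    m = np.linspace(m_lo, m_hi, n)            # sqrt(s) grid (GeV)
    integrand = dgamma_bhls2(m) * P(m**2) ** 2
    return np.sqrt(gamma_rpp / np.trapz(integrand, m))

# placeholder ingredients (NOT the BHLS2 prediction): a rho-like lineshape and
# the P_X(s) polynomial central values of Table 2
lineshape = lambda m: m**2 / ((m**2 - 0.775**2) ** 2 + (0.775 * 0.149) ** 2)
P_X = lambda s: 1 + 1.326 * s - 0.553 * s**2

# with a toy 'RPP' width equal to the bare integral, H_X ~ 1 by construction
m = np.linspace(2 * 0.13957, 0.95, 4000)
toy_width = np.trapz(lineshape(m) * P_X(m**2) ** 2, m)
print(H_from_width(toy_width, lineshape, P_X, 2 * 0.13957, 0.95))   # ~1.0
```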
Similarly, as the different \(P_{X}(s)\) parameter values derived from fitting the various sample combinations look like statistical fluctuations, the differences observed between fitting only \(P_{X}(s)\) or the \((P_{X}(s)\ \&\ H_{\eta/\eta^{\prime}})\) triplet look like statistical fluctuations as well. Moreover, defining \(\delta_{X}=H_{X}-1\) and focussing, for instance, on the KLOE+NSK combination, one gets : \[\delta_{\eta}=-0.211\pm 0.017\ \ \,\ \delta_{\eta^{\prime}}=-0.329\pm 0.017 \tag{46}\] which correspond to resp. \(\delta\) and \(\delta^{\prime}\) as defined by Stollenwerk _et al._ [62], for which these authors derived the values \(\delta=-0.22\pm 0.04\) and \(\delta^{\prime}=-0.40\pm 0.09\); these are clearly compatible with our \(\delta_{\eta}\) and \(\delta_{\eta^{\prime}}\) respectively. As a last remark, it should be noted that, once \(P_{X}(s)\) is determined - which implies that both distributions \(d\Gamma_{X}(s)/d\sqrt{s}\) and both BHLS\({}_{2}\) model functions are known - Equation (45) implies that the two \(H_{X}\) are not free but algebraically related. ## 11 \(\eta/\eta^{\prime}\) Decays : The Muon Anomalous Magnetic Moment The renewed interest27 in the \(\eta/\eta^{\prime}\) physics is intimately related to dealing with the Light-by-Light contribution to the anomalous magnetic moment (AMM) of the muon. As shown above and previously in [18], the BHLS\({}_{2}\) approach can accurately address several topics related to the \(\eta/\eta^{\prime}\) physics and its results are supported by fair probabilities; these probabilities faithfully reflect the actual behavior of each of the data samples within the global framework, as the error information provided with them is embodied without any _ad hoc_ enlargement inside the fitting code. Footnote 27: See, for instance, [63, 78] and the references collected therein. ### Accuracy of the FSI Parametrization It has been shown above that a single FSI polynomial \(P_{X}(s)\) allows addressing simultaneously both \(\eta/\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) decays within the BHLS\({}_{2}\) framework and that a second degree is quite satisfactory. The \(P_{X}(s)\) parametrizations derived using the \(A_{\pm}\) variants of BHLS\({}_{2}\), displayed in Table 2, are based on the choice of the largest set of data samples collected in almost all physics channels covering the HLS energy region (e.g. up to the \(\simeq\phi\) mass region) and _consistent with each other_. It was also shown that the \(A_{-}\) parametrization is the best favored but, nevertheless, one found it relevant to also provide the \(A_{+}\) parametrization despite its (sole real) identified failure with the \(\pi^{0}\) lifetime (or partial width), which \(A_{+}\) reconstructs at more than \(5\sigma\) from its commonly accepted value [59]. In this Section, one aims to establish the reliability of the \(A_{-}\) parametrization by examining carefully how the \(P_{X}(s)\) parameter values evolve when using the various dipion spectra collected in \(e^{+}e^{-}\) annihilations, which are known to exhibit - sometimes severe - inconsistencies among themselves. A possible bias in the parametrizations reported in Table 2 being the choice of the dipion data samples retained for the fits because of their mutual consistency, this issue is examined first. For this purpose, it is useful to define (or recall the definition of) some sets of data samples in order to ease the reading. 
Basically, the data samples [20] common to the sets of data samples presently embodied within the BHLS\({}_{2}\) based fit procedure are the \(\{(\pi^{0}/\eta)\gamma\), \(K_{L}K_{S}\), \(K^{+}K^{-}\}\) \(e^{+}e^{-}\) annihilation channels, the dipion spectra from the \(\tau\) decay provided by the ALEPH, CLEO and BELLE Collaborations and the pion and kaon spacelike spectra from NA7 [76] and Fermilab [77]; let us, for clarity, name this basic set \({\cal X}_{\tau}\). \begin{table} \begin{tabular}{||c||c||c|c||c||} \hline \hline Data Set & \(<\chi^{2}_{\pi\pi}>\) & \(\alpha^{\prime}_{1}\) & \(\alpha^{\prime}_{2}\) & Prob. (\%) \\ \hline \hline \({\cal X}_{\tau}\) + KLOE08 + \(\eta/\eta^{\prime}\) & 1.57 & \(1.294\pm 0.053\) & \(-0.379\pm 0.049\) & \(61.4\)\% \\ \hline \({\cal X}_{\tau}\) + BaBar + \(\eta/\eta^{\prime}\) & 1.20 & \(1.249\pm 0.076\) & \(-0.522\pm 0.069\) & \(39.6\)\% \\ \hline \({\cal X}_{\tau}\) + NSK + \(\eta/\eta^{\prime}\) & 0.98 & \(1.314\pm 0.054\) & \(-0.606\pm 0.052\) & \(96.6\)\% \\ \hline \({\cal X}_{\tau}\) + KLOE + \(\eta/\eta^{\prime}\) & 0.99 & \(1.341\pm 0.054\) & \(-0.525\pm 0.050\) & \(92.4\)\% \\ \hline \hline \({\cal H}_{R}\) + \(\eta/\eta^{\prime}\) & 1.07 & \(1.326\pm 0.053\) & \(-0.553\pm 0.048\) & \(90.6\)\% \\ \hline \hline \({\cal X}_{\tau}\) + \(\eta/\eta^{\prime}\) & \(\times\) & \(1.453\pm 0.060\) & \(-0.792\pm 0.065\) & \(96.3\)\% \\ \hline \hline \end{tabular} \end{table} Table 4: The FSI parameter values from the \(A_{-}\) BHLS\({}_{2}\) variant fit. The first column indicates which data set combination is submitted to the global fit. \(<\chi^{2}_{\pi\pi}>\) indicates the average \(\chi^{2}\) of the timelike \(F_{\pi}(s)\) data points of the sample named in the first column. \(\alpha^{\prime}_{1}\) and \(\alpha^{\prime}_{2}\) are the coefficients of resp. the first and second degree terms of \(P_{X}(s)\). The last data column displays the probability of the corresponding global fit. Regarding the available \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) annihilation spectra, one has distinguished four groups28 (two of which being, actually, one-sample "groups") : **1/** the scan data collected under the name NSK (see [16] for its content), **2/** the KLOE (\(\equiv\) KLOE10+KLOE12) [27, 28] ISR data sample group, **3/** the KLOE08 ISR sample [32] and **4/** the BaBar one [30, 31]. For definiteness, the largest set of data samples found consistent with each other and referred to here and before [17, 18] as \({\cal H}_{R}\) gathers the sets \({\cal X}_{\tau}\), NSK and KLOE just listed. Finally, the set of dipion spectra from the \(\eta/\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) decays [53, 57] is referred to as \(\eta/\eta^{\prime}\). Footnote 28: As the more recent dipion spectra from BESIII [79, 80] and Cleo-c [20] easily accommodate any of the groups we are listing, they would not be conclusive and have been put aside for clarity; regarding the SND20 spectrum [33], deeply analyzed in our [18], we have proceeded likewise. The four top lines in Table 4 display the coefficient values of the first (\(\alpha^{\prime}_{1}\)) and second degree (\(\alpha^{\prime}_{2}\)) terms of the FSI polynomial \(P_{X}(s)\); as indicated in its first column, the corresponding fits differ from each other only by the exact content of the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) annihilation spectra sample set submitted to the minimization procedure. 
Whatever the fit quality, reflected by its corresponding \(<\chi^{2}_{\pi\pi}>\) value and its probability, the different values derived for \(\alpha^{\prime}_{1}\) as for \(\alpha^{\prime}_{2}\) are not distant by more than \((1\div 2)\sigma\) from each other. It should also be remarked that the parameter values derived in the fit for \(\{{\cal H}_{R}+\eta/\eta^{\prime}\}\) - which includes the KLOE and NSK data sets together - are intermediate between those involving the KLOE and NSK sample sets separately. Therefore, the large spread of probabilities between the fits involving NSK and/or KLOE and those involving BaBar or KLOE08 does not produce a significant change in the determination of the common \(\eta/\eta^{\prime}\) FSI function \(P_{X}(s)\). The last line in Table 4 displays the \(P_{X}(s)\) coefficients returned by a fit excluding the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) annihilation spectra. The linear term coefficient \(\alpha^{\prime}_{1}\) is never found distant by more than \(\simeq 2\sigma\) from the other corresponding values displayed in the same Table. In contrast, the curvature coefficient \(\alpha^{\prime}_{2}\) exhibits a \(\simeq(4\div 5)\sigma\) departure from the other reported fit values. Relying on Figures 6 and 7, one expects the second degree term (\(\alpha^{\prime}_{2}\)) to mostly affect the \(\rho^{0}-\omega\) energy region. This piece of information makes it interesting to compare the pion form factor _predicted_ by the fit of the \(\{{\cal X}_{\tau}+\eta/\eta^{\prime}\}\) set29 with the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) annihilation data and with the fit results derived when fitting the \(\{{\cal H}_{R}+\eta/\eta^{\prime}\}\) set. This is the purpose of Figure 8. Figure 8: The curve displayed in the left-side panel (a) is the pion form factor _predicted_ by fitting the data sample set \(\{{\cal X}_{\tau}+\eta/\eta^{\prime}\}\) and, superimposed, the _unfitted_ pion form factor spectra (including those from BaBar). The right-hand side panel (b) shows the pion form factor derived from fitting the full \(\{{\cal H}_{R}+\eta/\eta^{\prime}\}\) data sample set, which includes the KLOE and NSK pion form factors (but not the BaBar spectrum). See the text for comments. Footnote 29: Supplemented by the phase information between the \(\rho\) and \(\omega\) propagators or by the product of branching fractions \({\cal B}(\omega\to e^{-}e^{+})\times{\cal B}(\omega\to\pi^{+}\pi^{-})\) available in the RPP [59]. Comparing the curves in both panels of Figure 8, the overall agreement between both fits is fairly good, except for the magnitude at the very \(\rho^{0}\) peak location, which may look somewhat underestimated30 by the \(\eta^{\prime}\) dipion spectrum. Instead, the drop-off location and its intensity are fairly well predicted by the \(\{{\cal X}_{\tau}+\eta/\eta^{\prime}\}\) sample set. Footnote 30: Nevertheless, the lineshape is in good correspondence with that of the KLOE12 spectrum included in the \(\{{\cal H}_{R}+\eta/\eta^{\prime}\}\) sample set, but slightly smaller than the others. This behavior deserves to be confirmed by new precise \(\eta^{\prime}\) dipion spectra, complementing [53]. Indeed, within the BHLS\({}_{2}\) framework, the \(\eta^{\prime}\) decay provides a mechanism 100% independent of the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) annihilation process and, nevertheless, this does not prevent its prediction for \(F_{\pi}(s)\) from exhibiting a fair accord with the (fully independent) \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) annihilation spectra. 
This accord may support a possible effect beyond the Standard Model which might affect the \(e^{+}e^{-}\) annihilation channel below the \(\phi\) meson mass, so as to reconcile the DR approach with the FNAL measurement [2]. Regarding the FSI function \(P_{X}(s)\), awaiting other theoretical estimates of it, one can conclude that our favored \(P_{X}(s)\) parametrization31, derived from fitting \(({\cal H}_{R}+\eta/\eta^{\prime})\), already provides a reliable \(P_{X}(s)\) and benefits from resp. a \(\simeq 3\%\) and \(\simeq 10\%\) precision for resp. the linear and the curvature terms. Footnote 31: The \(P_{X}(s)\) polynomial may well be interpreted as the lowest order terms of the Taylor expansion of a more complicated function which does not behave as fast as a power law; for instance, one has checked that the function \(U(s)=1+0.5\log{(1+4s)}\) (_i.e._ with no free parameter) gives results identical to those derived using the second degree polynomials \(P_{X}(s)\). Indeed, the probability returned by the fit of the \(\{{\cal H}_{R}+\eta/\eta^{\prime}\}\) data sample set is then 91.7%, and the average \(\chi^{2}\)'s per data point are quite favorable : for instance, 1.08 for NSK, 1.04 for KLOE, 0.92 for the BESIII \(\eta^{\prime}\) spectrum and 0.90 for the \(\eta\) spectrum from KLOE/KLOE2. ### The \(\eta/\eta^{\prime}\) Spectra and HVP Estimates The purpose of Figure 9 is to provide the overall picture of the estimates for \(a_{\mu}(\pi\pi,\sqrt{s}<1.0\) GeV) which emerges from the present work. The top bunch of data points displays the values for \(a_{\mu}(\pi\pi,\sqrt{s}\leq 1\;{\rm GeV})\) in units of \(10^{-10}\) derived by direct integration of the dipion data taking all dipion spectra, but either excluding the BaBar spectrum or excluding the KLOE spectra; the reason to proceed this way is related to the inconsistencies occurring when fitting the pion form factors [18], as reported for a long time [34]. Figure 9: \(a_{\mu}(\pi\pi,\sqrt{s}<1.0\) GeV) in units of \(10^{-10}\) for various data sample combinations. The top two data points display the values derived by a direct integration of all the dipion spectra (when including BaBar, KLOE is excluded). The point tagged by KNT19 [6] is a usual (external) reference; the following point is derived using BHLS\({}_{2}\) with the indicated (largest) content of \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) spectra. The two following points show our fit results for the two indicated combinations of data samples; within parentheses, one also displays the results obtained by also including the \(\eta/\eta^{\prime}\) samples within the global fit procedure. The small magnitude of the BHLS\({}_{2}\) derived uncertainties should be noted (see text). The downmost entry in this Figure exhibits the prediction derived for \(a_{\mu}(\pi\pi,\sqrt{s}<1.0\) GeV) when all annihilation to dipion data are discarded from the fit. The growth of its uncertainty reflects the drastic reduction of the statistics involved in the corresponding fit. The point showing the KNT19 result [6], the usual reference [3], is followed by the evaluation derived from the BHLS\({}_{2}\) global fit involving the \(({\cal X}_{\tau}+KLOE+NSK+BABAR)\) sample set, which contains the same \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) dipion spectra32 as KNT19. The central values derived for \(a_{\mu}(\pi\pi,\sqrt{s}\leq 1\;{\rm GeV})\) are substantially identical, reflecting the fact that the normalization uncertainty treatment used to derive the KNT19 evaluation is similar to our own [81]. 
The BHLS\({}_{2}\) uncertainty is however much improved (by a factor of \(\simeq 2\)), as can be expected from having performed a (more constraining) global fit; indeed, within a global context, in contrast with KNT19 and others who treat the dipion spectra in a standalone mode, one benefits from also involving the \(\tau\) dipion spectra and all non-\(\pi\pi\) final state spectra, which act as increased statistics for all the channels involved in the underlying HLS context, in particular the \(\pi\pi\) one. Therefore, comparing KNT19 and our evaluation illustrates that the BHLS\({}_{2}\) Lagrangian approach does not generate biases and that the difference of the central values is essentially due to the data samples chosen to derive motivated physical conclusions. Footnote 32: It should be recalled that the corresponding fit probability is low [18] (11.4%), reflecting the KLOE-BaBar tension. The top two data points of the lowest bunch substantiate numerically the amplitude of the tension between using \({\cal X}_{\tau}+KLOE+NSK\) and \({\cal X}_{\tau}+BABAR+NSK\); both agree with the direct integration results and exhibit a \(\simeq 5.4\times 10^{-10}\) distance between their evaluations of \(a_{\mu}(\pi\pi,\sqrt{s}\leq 1\;{\rm GeV})\). In both cases, the first number displayed is the evaluation derived by a standard BHLS\({}_{2}\) fit and is 100% consistent with the results published in [18]; the number within parentheses instead displays the result obtained when adding the \(\eta/\eta^{\prime}\) data set defined above to resp. \({\cal X}_{\tau}+KLOE+NSK\) and \({\cal X}_{\tau}+BABAR+NSK\). One should note that the fit probabilities are unchanged when adding the \(\eta/\eta^{\prime}\) data set and reflect fairly good fits : \(88.7\%\to 90.6\%\) for \({\cal X}_{\tau}+KLOE+NSK(+\eta/\eta^{\prime})\), \(47.2\%\to 55.9\%\) for \({\cal X}_{\tau}+BABAR+NSK(+\eta/\eta^{\prime})\). This illustrates that there is no tension between the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) dipion spectra and those derived from the \(\eta/\eta^{\prime}\) decays, as the probability difference between the fits involving the two data sample sets is not degraded by including the \(\eta/\eta^{\prime}\) samples. ### \(\eta/\eta^{\prime}\) Based Evaluations of the HVP If, as conjectured long ago [62], an accurate enough determination of the FSI function \(P_{X}(s)\) can be provided (by Extended ChPT [35, 36, 37], possibly), dipion spectra from the \(\eta^{\prime}\) decay may provide a new way to estimate the dipion contribution to the muon HVP up to \(\simeq 1\) GeV. The present work has shown that phenomenology is already able to provide a FSI function \(P_{X}(s)\) carrying a noticeable precision and, moreover, it has also been shown that a unique FSI function easily accommodates the available \(\eta\) and \(\eta^{\prime}\) high precision dipion spectra simultaneously. Indeed, within the BHLS\({}_{2}\) context [17, 18], the amplitudes for the \(\eta/\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) decays and for the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\) annihilation proceed from the same Lagrangian and do not call for a special treatment of their common dominant neutral \(\rho\) meson signal. Moreover, once the FSI effects are factored out, the derivation of both amplitudes from the same Lagrangian is unchanged. 
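In this respect, the parameter-free alternative \(U(s)=1+0.5\log(1+4s)\) mentioned in footnote 31 can be confronted numerically with the fitted polynomial. The sketch below merely compares the two lineshapes over the physical range after removing the overall scale (which the arbitrary normalization of the spectra absorbs anyway); the reference point used for the rescaling is our own choice, not taken from the fits.

```python
# Compare the parameter-free U(s) of footnote 31 with the fitted P_X(s) of
# Table 2, up to an overall normalization (which the data cannot constrain).
import numpy as np

s = np.linspace((2 * 0.13957) ** 2, 0.95**2, 50)    # s range of the eta' decay
P = 1 + 1.326 * s - 0.553 * s**2                    # Table 2, A_- variant
U = 1 + 0.5 * np.log(1 + 4 * s)

ref = np.argmin(np.abs(s - 0.5))                    # arbitrary rescaling point
rel = (U * P[ref] / U[ref] - P) / P
print(f"max relative lineshape difference: {np.max(np.abs(rel)):.3f}")
```

The two lineshapes indeed stay close over the whole range, consistently with the statement that the fits cannot distinguish them.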
On the other hand, the discrepancies revealed by comparing with each other the dipion spectra collected in scan mode (NSK) and the various samples collected in ISR mode by KLOE [32, 27, 28] and BaBar [30, 31] have not found a really satisfactory solution; the recent SND20 [33] - and even more, presumably, the new CMD3 [7] data - seem rather to darken the picture. Therefore, getting high statistics dipion spectra independent of the \(e^{+}e^{-}\) annihilation mechanism, carrying different kinds of systematics, may helpfully contribute to a more satisfactory understanding of the crucial \(\pi^{+}\pi^{-}\) contribution to the muon HVP. For the time being, the limited number of high statistics \(\eta\) [57] and \(\eta^{\prime}\) [53] dipion spectra already allows deriving the prediction for \(a_{\mu}(\pi\pi,\sqrt{s}<1.0\) GeV) displayed at the bottom of Figure 9, namely : \[a_{\mu}(\pi\pi,\sqrt{s}\leq 1\ {\rm GeV})=(484.98\pm 1.93)\times 10^{-10} \tag{47}\] with a 96.3% fit probability; it is distant from its estimate based on fitting the \({\cal H}_{R}\) data sample set33 by \(2.6\sigma\). Therefore, additional high statistics \(\eta/\eta^{\prime}\) data samples can shed more light on the issue, clearly located in the \(\rho^{0}-\omega\) invariant mass region. Footnote 33: It is interesting to note that the distance between this prediction and the solution derived using NSK+KLOE is almost equal to the distance between the NSK+KLOE and NSK+BaBar solutions. ## 12 Concluding Remarks The present work has shown that, beside the already reported \(e^{-}e^{+}\) annihilation spectra, some decay modes (especially the \(P\to\gamma\gamma\) ones) and \(\tau\) dipion spectra [17, 18], BHLS\({}_{2}\) can encompass the dipion spectra from the \(\eta\) and \(\eta^{\prime}\) decays; however, to reach this result, one has to invoke the so-called Final State Interaction (FSI) mechanism - not a part of the HLS model - as inferred by the SHKMW group in [62]. Supplying BHLS\({}_{2}\) with an FSI function, one has thus obtained a fairly good simultaneous fit of the \(\eta\) and \(\eta^{\prime}\) dipion spectra together with the \(e^{+}e^{-}\) annihilations into \(\pi^{+}\pi^{-}/K\overline{K}/\pi^{0}\gamma/\eta\gamma\) final states and the \(\tau^{\pm}\to\pi^{\pm}\pi^{0}\nu_{\tau}\) decay usually addressed by the BHLS\({}_{2}\) framework in our previous works [17, 18]. This proves that, once the FSI mechanism is accounted for, the BESIII \(\eta^{\prime}\) spectrum [53] does not need more information than that already present in BHLS\({}_{2}\) to get a satisfactory picture; the picture is found to be as fair for the \(\eta\) spectrum reported in [57] - and, actually, even for those in [56]. The role of the charged \(\rho\) meson - a natural feature of BHLS\({}_{2}\) [18], never considered elsewhere - has been shown to provide a fair treatment of the \(\eta^{\prime}\to\pi^{+}\pi^{-}\gamma\) dipion spectrum. This turns out to state that most of the parameters needed to write out the relevant decay amplitudes are not free but numerically shared with the other channels embodied within the same BHLS\({}_{2}\) framework. This is an additional step in the proof that a unified Effective Lagrangian can fairly describe the low energy physics up to and including the \(\phi\) mass region. The additional parameters needed to achieve a good description of these \(\eta/\eta^{\prime}\) decay spectra are those involved by the FSI mechanism34. 
One has first shown that the \(\eta\) and \(\eta^{\prime}\) dipion spectra are well fitted with specific low degree FSI polynomials supplementing the amplitudes derived from the BHLS\({}_{2}\) Lagrangian. In a second step, it has been proved that, actually, the same second degree FSI polynomial \(P_{X}(s)\) is involved in both the \(\eta\) and \(\eta^{\prime}\) decays, as inferred in [62]. As already noted, the \(\rho^{\pm}\) exchange implied by the kinetic breaking defined in [18] is shown to enhance the global fit quality. The polynomial coefficients have been derived from our fits with fair precision and are found to remain stable when varying the fit conditions (see Table 4). Footnote 34: One may notice the parameter-free FSI choice given in footnote 31, which might have to be further explored. It should be noted that the picture revealed by comparing both panels of Figure 8 strikingly suggests that the traditionally used dipion spectra carry a lineshape favored by the \(\eta^{\prime}\) dipion spectrum, and that higher statistics for the latter can be a helpful tool in the present controversy concerning the Dispersive (DR) approaches and LQCD. Moreover, the systematics affecting the \(\eta^{\prime}\) dipion spectrum are certainly independent of those involved in the \(e^{-}e^{+}\to\pi^{+}\pi^{-}\) annihilation. At its level of accuracy, the present \(\eta^{\prime}\) dipion spectrum [53] rather favors the DR prediction, as shown in Figure 9; however, better statistics and a finer binning in the \(\rho^{0}-\omega\) energy region look mandatory for a competing estimate of the muon \(a_{\mu}(\pi^{+}\pi^{-},\sqrt{s}<1.0~{\rm GeV})\). This may motivate enlarging the available \(\eta^{\prime}\) dipion sample by analyzing the already existing data or by collecting new samples at other detectors. ## Acknowledgements We gratefully acknowledge Andrzej Kupsc, Uppsala University, for having provided the KLOE/KLOE2 and the BESIII dipion spectra; additional information on these has also been quite helpful. The CNRS/IN2P3 Computing Center (Lyon - France) is also gratefully acknowledged for having provided the computing and data-processing resources needed for this work. ## Appendix A Brief Outline of the HLS/BHLS\({}_{2}\) Approach For the reader's convenience, it is worth avoiding too many cross-references and briefly collecting here the various ingredients which participate in the definition and working of our symmetry broken Hidden Local Symmetry (HLS) model, which are spread over several references. The HLS model admits a non-anomalous sector [9] and, besides, an anomalous one [10] - see also [13]. To make this approach a successful tool in its physical realm, the HLS model should undergo symmetry breaking mechanisms. The salient features of the broken version named BHLS\({}_{2}\) which underlie the present study can be found, recalled or defined, in35 [17, 18]. As it grounds the present study, the anomalous sector of the HLS model [10, 13] is mostly discussed in the body of the text. Footnote 35: For full details the interested reader is referred to these articles, where former references can also be found. 
### The Unbroken Non-Anomalous HLS Lagrangian The non-anomalous HLS Lagrangian is a generalization of the ChPT Lagrangian [82, 83] which can be written [13] : \[{\cal L}_{\rm chiral}=\frac{f_{\pi}^{2}}{4}{\rm Tr}\left[\partial_{\mu}U\; \partial^{\mu}U^{\dagger}\right]=-\frac{f_{\pi}^{2}}{4}{\rm Tr}\left[\partial_ {\mu}\xi_{L}\;\xi_{L}^{\dagger}-\partial_{\mu}\xi_{R}\;\xi_{R}^{\dagger} \right]^{2}\;, \tag{48}\] where \(f_{\pi}\) (= 92.42 MeV) is the pion decay constant and : \[\xi_{R/L}(x)=\exp\left[\pm iP(x)/f_{\pi}\right]\;\Longrightarrow U(x)=\xi_{L} ^{\dagger}(x)\xi_{R}(x)\;, \tag{49}\] when working in the so-called unitary gauge which removes a scalar field term in the definition of \(\xi_{R/L}(x)\); \(P(x)\) is the usual pseudoscalar (PS) field matrix. Ignoring in this reminder the weak sector [13, 17], the HLS approach amounts to replacing in Equation (48) the usual derivative by the covariant derivative : \[D_{\mu}\xi_{R/L}=\partial_{\mu}\xi_{R/L}-igV_{\mu}\xi_{R/L}+ie\xi_{R/L}A_{\mu} Q\;, \tag{50}\] where \(A_{\mu}\) is the photon field, \(Q={\rm Diag}[2/3,-1/3,-1/3]\) the quark charge matrix and \(V_{\mu}\) is the vector field matrix; the expressions for \(P\) and36\(V\) are the usual ones - fulfilling the \(U(3)\) flavor symmetry - and can be found in [13, 84, 15], for example. In this way, the first HLS Lagrangian piece, named \({\cal L}_{A}\), is derived from Equation (48). However, a second piece - \({\cal L}_{V}\) - can be defined which vanishes under the inverse substitution \(D_{\mu}\rightarrow\partial_{\mu}\). The two pieces read : Footnote 36: In the \(V\) matrix the \(\rho\), \(\omega\) and \(\phi\) fields correspond to the so-called ideal fields. \[{\cal L}_{A}=-\frac{f_{\pi}^{2}}{4}{\rm Tr}\left[D_{\mu}\xi_{L}\;\xi_{L}^{ \dagger}-D_{\mu}\xi_{R}\;\xi_{R}^{\dagger}\right]^{2}\;\;,\;\;\;{\cal L}_{V}=- \frac{f_{\pi}^{2}}{4}{\rm Tr}\left[D_{\mu}\xi_{L}\;\xi_{L}^{\dagger}+D_{\mu} \xi_{R}\;\xi_{R}^{\dagger}\right]^{2}\;, \tag{51}\] and the full non-anomalous HLS Lagrangian reads : \[{\cal L}_{\rm HLS}={\cal L}_{A}+a{\cal L}_{V}\,, \tag{52}\] where \(a\) is a free parameter specific to the HLS approach [13]. This (unbroken) HLS Lagrangian can be found expanded in [84].
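As an aside, the unitary-gauge parametrization of Equation (49) is easy to check numerically; the following minimal sketch (ours, not part of any BHLS\({}_{2}\) code) verifies that \(U(x)=\xi_{L}^{\dagger}(x)\xi_{R}(x)=\exp[2iP(x)/f_{\pi}]\) is unitary for any Hermitian field matrix \(P\):

```python
import numpy as np
from scipy.linalg import expm

f_pi = 92.42  # pion decay constant in MeV, as quoted below Eq. (48)

# A random Hermitian 3x3 matrix standing in for the PS field matrix P(x).
rng = np.random.default_rng(0)
Z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
P = (Z + Z.conj().T) / 2

xi_R = expm(+1j * P / f_pi)   # xi_R(x) = exp(+iP/f_pi)
xi_L = expm(-1j * P / f_pi)   # xi_L(x) = exp(-iP/f_pi)
U = xi_L.conj().T @ xi_R      # U(x) = xi_L^dagger xi_R

assert np.allclose(U, expm(2j * P / f_pi))     # U(x) = exp(2iP/f_pi)
assert np.allclose(U.conj().T @ U, np.eye(3))  # U is unitary
```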
### Breaking the HLS Lagrangian I : The BKY Mechanism The first breaking mechanism for the HLS Lagrangian has been proposed in [85]; one uses a modified version of it given in [84] in order to avoid identified undesirable properties of the original proposal [86]. Originally, the BKY mechanism was intended to only break the \(U(3)\) symmetry of the HLS Lagrangian; it has been extended following the lines of [87] to also cover isospin breaking effects. Defining \(L=D_{\mu}\xi_{L}\;\xi_{L}^{\dagger}\) and \(R=D_{\mu}\xi_{R}\;\xi_{R}^{\dagger}\), the (modified and extended) BKY breaking is implemented in the BHLS\({}_{2}\) framework by modifying Equations (51) as follows : \[{\cal L}_{A}=-\frac{f_{\pi}^{2}}{4}{\rm Tr}\left[(L-R)X_{A}\right]^{2}\;\;\;, \;\;\;{\cal L}_{V}=-\frac{f_{\pi}^{2}}{4}{\rm Tr}\left[(L+R)X_{V}\right]^{2}\;\;\;, \tag{53}\] where the constant matrices \(X_{A/V}\) provide departures from the unit matrix; they have been parametrized as \(X_{A/V}=\)Diag\((q_{A/V},y_{A/V},z_{A/V})\). In practice, one prefers setting \(q_{A/V}=1+(\Sigma_{A/V}+\Delta_{A/V})/2\) and \(y_{A/V}=1+(\Sigma_{A/V}-\Delta_{A/V})/2\). As \(z_{A}\) and \(z_{V}\) affect the \(s\overline{s}\) entries, their departure from 1 can be (and is found to be) large compared to \(q_{A/V}\) and \(y_{A/V}\), which refer respectively to the \(u\overline{u}\) and \(d\overline{d}\) entries [15, 17, 18]. Within the BHLS\({}_{2}\) context opened in [17], it has been shown that the diagonalization of the vector meson mass term implies \(\Delta_{V}=0\); on the other hand, it has also been proved [17] that \(\Sigma_{V}\) is actually out of reach and can be fixed to zero without any loss of generality. Therefore the BKY breaking mechanism introduces 3 free parameters : \(z_{A}\) and \(\Delta_{A}\), tightly related to the ratio \(f_{K}/f_{\pi}\), and \(z_{V}\), related to the Higgs-Kibble \(\phi\) meson mass. ### Breaking the HLS Lagrangian II : The Covariant Derivative (CD) Breaking The main ingredient in the HLS approach is the covariant derivative as displayed in Equation (50), complemented when relevant by \(W\) and \(Z^{0}\) terms [13]. Thus, a relevant breaking mechanism can be chosen affecting the covariant derivative itself; this can be done by replacing Equation (50) by : \[D_{\mu}\xi_{R/L}=\partial_{\mu}\xi_{R/L}-ig\left[V_{\mu}^{I}+\delta V_{\mu} \right]\xi_{R/L}+ie\xi_{R/L}A_{\mu}Q\,, \tag{54}\] where \(\delta V_{\mu}\) can be chosen to break the \(U(3)_{V}\) symmetry in a controlled way. Breaking the universality of the vector coupling \(g\) is an interesting tool; _a priori_ one may think that breaking nonet symmetry (_i.e._ along the Gell-Mann matrix \(T^{0}\)) can be performed independently of breaking the \(SU(3)_{V}\) symmetry (_i.e._ along the Gell-Mann matrix \(T^{8}\)); the diagonalization of the vector meson mass term as well as the expected values of the pion and kaon form factors at the chiral point prevent such a freedom of choice [17]. Identifying the field combinations associated with each element of the canonical Gell-Mann \(T_{a}\) basis of \(U(3)\) matrices, one is led to define the following components, which can contribute separately or together : \[\left\{\begin{array}{l}\delta V_{\mu}^{0}=\frac{\xi_{0}}{\sqrt{2}} \left[\frac{\sqrt{2}\omega_{\mu}^{I}+\Phi_{\mu}^{I}}{3}\right]\mathrm{Diag}[1,1,1 ]\,,\\ \delta V_{\mu}^{8}=\frac{\xi_{8}}{\sqrt{2}}\left[\frac{\omega_{\mu}^{I}-\sqrt{ 2}\Phi_{\mu}^{I}}{3\sqrt{2}}\right]\mathrm{Diag}[1,1,-2]\,,\\ \delta V_{\mu}^{3}=\frac{\xi_{3}}{\sqrt{2}}\left[\frac{\rho_{I}^{0}}{\sqrt{2}} \right]\mathrm{Diag}[1,-1,0]\,,\end{array}\right. \tag{55}\] in terms of the usual ideal field combinations; the CD breaking term is \[\delta V_{\mu}=\delta V_{\mu}^{0}+\delta V_{\mu}^{8}+\delta V_{\mu}^{3}\ \.\] The (free) breaking parameters \(\xi_{0}\), \(\xi_{8}\) and \(\xi_{3}\) are only required to be real so that \(\delta V_{\mu}\) is Hermitian, as \(V_{\mu}^{I}\) itself. Clearly, \(\delta V_{\mu}^{0}\) defines a breaking of the nonet symmetry down to \(SU(3)_{V}\times U(1)_{V}\), \(\delta V_{\mu}^{8}\) rather expresses the breaking of the \(SU(3)_{V}\) symmetry, while \(\delta V_{\mu}^{3}\) is related to a direct breaking of Isospin symmetry in the vector sector. As mentioned just above, it happens that the \(\xi\) parameters introduced by Equations (55) should fulfill [17] \(\xi_{0}=\xi_{8}\), so that the CD breaking only involves 2 new free parameters. This means that within BHLS\({}_{2}\), one cannot break nonet symmetry alone: it must be accompanied by an \(SU(3)\) breaking of the same intensity. ### Breaking the HLS Lagrangian III : Dynamical Vector Meson Mixing The unbroken HLS Lagrangian already exhibits couplings for \(\rho_{I}/\omega_{I}/\phi_{I}\to K^{+}K^{-}/K^{0}\overline{K}^{0}\) transitions; this property is naturally transferred to all its broken versions.
This implies that, at one loop order, the \(\rho^{0}/\omega/\phi\) squared mass matrix exhibits non-diagonal entries and thus, the ideal vector fields are no longer mass eigenstates. At one loop order, the squared mass matrix of the \(\rho^{0}/\omega/\phi\) system can be written : \[M^{2}(s)=M_{0}^{2}(s)+\delta M^{2}(s)\,, \tag{56}\] where the dependence upon the momentum squared \(s\) flowing through the vector lines is made explicit. After the \(BKY\) and \(CD\) breakings just sketched, the vector meson masses read37 : Footnote 37: One should note that within BHLS\({}_{2}\) the charged and neutral \(\rho\) mesons carry different masses as \(m_{\rho^{\pm}}^{2}=m^{2}\:(1+\Sigma_{V})\). \[\left\{\begin{array}{l}m_{\rho^{0}}^{2}=m^{2}\left[1+\Sigma_{V}+2\:\xi_{3} \right]\,,\\ m_{\omega}^{2}=m^{2}\left[1+\Sigma_{V}+\frac{4}{3}\:\xi_{0}+\frac{2}{3}\:\xi_{ 8}\right]=m^{2}\left[1+\Sigma_{V}+2\:\xi_{0}\right]\,,\\ m_{\Phi}^{2}=m^{2}\:z_{V}\left[1+\frac{2}{3}\:\xi_{0}+\frac{4}{3}\:\xi_{8} \right]=m^{2}\:z_{V}\left[1+2\:\xi_{0}\right]\,,\end{array}\right. \tag{57}\] in terms of the various breaking parameters; \(\Sigma_{V}\) has been kept for convenience. The \(M_{0}^{2}(s)\) matrix occurring in Equation (56) thus reads : \[M_{0}^{2}(s)={\rm Diag}(m_{\rho^{0}}^{2}+\Pi_{\pi\pi}(s),m_{\omega}^{2},m_{\phi} ^{2})\,, \tag{58}\] and is diagonal; \(\Pi_{\pi\pi}(s)\) is the pion loop and includes the \(\rho\pi^{+}\pi^{-}\) coupling squared. The expression for \(\delta M^{2}(s)\) is slightly more involved. Having defined the (\(\rho^{0},\ \omega,\ \phi\)) renormalized fields, generally indexed by \(R\) (_i.e._ those which diagonalize the vector meson mass term), one can derive the \({\cal V}_{R}^{i}\to{\cal V}_{R}^{j}\) transitions (\(i,j=\rho^{0},\ \omega,\ \phi\)). For this purpose, having defined \(\Pi_{\pm}(s)\) and \(\Pi_{0}(s)\), resp. the _amputated_ charged and neutral kaon loops, the transition amplitudes (\(i,j=\rho^{0},\ \omega,\ \phi\)) read : \[\delta M_{i,j}^{2}(s)=g_{K^{+}K^{-}}^{i}g_{K^{+}K^{-}}^{j}\Pi_{\pm}(s)+g_{K^{0 }\overline{K}^{0}}^{i}g_{K^{0}\overline{K}^{0}}^{j}\Pi_{0}(s)\,, \tag{59}\] where the \(g_{K\overline{K}}\) coupling constants are displayed in Section 10 of [17]. The _physical_ \(\rho^{0},\ \omega,\ \phi\) are the eigenvectors of the full squared mass matrix \(M^{2}(s)\); they are related to their _renormalized_ partners by : \[\left(\begin{array}{c}\rho_{R}\\ \omega_{R}\\ \Phi_{R}\end{array}\right)=\left(\begin{array}{ccc}1&-\alpha(s)&\beta(s)\\ \alpha(s)&1&\gamma(s)\\ -\beta(s)&-\gamma(s)&1\end{array}\right)\left(\begin{array}{c}\rho_{Phys} \\ \omega_{Phys}\\ \Phi_{Phys}\end{array}\right) \tag{60}\] The 3 complex angles occurring here are combinations of the \(\delta M^{2}(s)\) matrix elements and of the eigenvalues of the full \(M^{2}(s)\) matrix, as displayed in Subsection 10.2 of [17]. It is worth remarking that the dynamical mixing just sketched has provided the first solution [14, 15] to the long-standing "\(e^{+}e^{-}\) versus \(\tau\)" puzzle [88, 89, 90], as it generates an \(s\)-dependent difference between the \(\rho^{\pm}-W^{\pm}\) and \(\rho^{0}-\gamma\) transition amplitudes.
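As an illustration of this dynamical mixing, the toy sketch below (ours, with invented inputs rather than fitted BHLS\({}_{2}\) values) checks that, for a small complex symmetric \(\delta M^{2}\), first-order perturbation theory reproduces the admixtures found by directly diagonalizing \(M^{2}(s)\) at a fixed \(s\); the precise sign conventions of Equation (60) are not tracked here:

```python
import numpy as np

# Toy diagonal M0^2 (GeV^2) at some fixed s -- illustrative values only:
# rho0 (with a complex pion-loop shift), omega, phi.
M0 = np.diag([0.60 + 0.02j, 0.64, 1.04])

# A small complex *symmetric* perturbation mimicking the kaon loops of Eq. (59).
rng = np.random.default_rng(1)
dM = 1e-3 * (rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
dM = (dM + dM.T) / 2
M2 = M0 + dM

# First-order admixture of basis state i into the eigenvector dominated by j:
mix = lambda i, j: dM[i, j] / (M0[j, j] - M0[i, i])
alpha, beta, gamma = mix(0, 1), mix(0, 2), mix(1, 2)   # rho-omega, rho-phi, omega-phi

evals, evecs = np.linalg.eig(M2)          # exact diagonalization
order = np.argsort(evals.real)
print(alpha, beta, gamma)
# Exact eigenvectors agree with the first-order pattern up to
# normalization and an overall phase per column.
print(evecs[:, order])
```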
### The Kinetic Breaking and the [\(\pi^{0},\ \eta,\ \eta^{\prime}\)] System This Section mostly aims at recalling the notations used in the body of the paper; these essentially deal with the pseudoscalar meson (PS) sector of the HLS model. The full pseudoscalar meson kinetic energy term of the BHLS\({}_{2}\) Lagrangian [18] reads : \[{\cal L}^{\prime}_{kin}={\rm Tr}\left[\partial P_{bare}X_{A}\partial P_{bare} X_{A}\right]+2\ \{{\rm Tr}\left[X_{H}\partial P_{bare}\right]\}^{2}\,, \tag{61}\] where \(P_{bare}\) is the PS _bare_ field matrix. The first term is already broken by the BKY mechanism applied to the \({\cal L}_{A}\) HLS Lagrangian piece (see Equation (53) in Appendix A) and the second one expresses the so-called kinetic breaking generalizing the 't Hooft mechanism [60]. It has been shown in [18] that an appropriate choice for the \(X_{H}\) matrix is : \[X_{H}=\lambda_{0}T_{0}+\lambda_{3}T_{3}+\lambda_{8}T_{8}\,, \tag{62}\] in terms of the canonical \(U(3)\) Gell-Mann matrices (\(T_{0}=I/\sqrt{6}\), \({\rm Tr}[T_{a}T_{b}]=\delta_{ab}/2\)) with real \(\lambda_{i}\) coefficients in close correspondence with the CD breaking term \(\delta V\) affecting the vector sector (see Appendix A.3). This choice manifestly allows for Isospin Symmetry breaking, nonet symmetry breaking (the so-called 't Hooft term [60]) and \(SU(3)\) breaking. It is useful to introduce the vector of PS fields : \[{\cal V}_{any}=(\pi^{3}_{any},\eta^{0}_{any},\eta^{8}_{any})\ \ \ \ \ {\rm where}\ \ any=(bare,\ R1,\ R)\,, \tag{63}\] to clarify the component indexing. The diagonalization of the kinetic energy term, Equation (61), which leads from the _bare_ PS fields to their renormalized partners (hereafter indexed by \(R\)), is performed in 2 steps. The intermediate step (from _bare_ to \(R1\) fields) amounts to diagonalizing \({\rm Tr}\left[\partial P_{bare}X_{A}\partial P_{bare}X_{A}\right]\) and defines the \(W\) transformation matrix : \[W=\left(\begin{array}{ccc}1&-\frac{\Delta_{A}}{\sqrt{6}}&-\frac{\Delta_{A}}{ 2\sqrt{3}}\\ -\frac{\Delta_{A}}{\sqrt{6}}&B&A\\ -\frac{\Delta_{A}}{2\sqrt{3}}&A&C\end{array}\right)\,, \tag{64}\] which depends on the BKY breaking parameter \(\Delta_{A}\) and, via : \[A=\sqrt{2}\frac{z_{A}-1}{3z_{A}}\,\ \ B=\frac{2z_{A}+1}{3z_{A}}\,\ \ C=\frac{z_{A}+2}{3z_{A}}\,, \tag{65}\] on the other BKY breaking parameter \(z_{A}\) (see Appendix A.2 above). In order to achieve the diagonalization of the (full) kinetic energy term of the BHLS\({}_{2}\) Lagrangian, one still has to define the linear transform which relates the intermediate \(R1\) and final \(R\) renormalized PS fields (see Equation (28) in [18]). Given the (co-)vector : \[a^{t}=(\ \lambda_{3},\lambda_{0}B+\lambda_{8}A,\lambda_{0}A+\lambda_{8}C)\ \,, \tag{66}\] one can then prove [18] that Equation (61) becomes canonical (at first order in breakings) when expressed in terms of the \({\cal V}_{R}\) fields defined by : \[{\cal V}_{bare}=W\cdot\left[1-\frac{1}{2}a\cdot a^{t}\right]\cdot{\cal V}_{R}\ . \tag{67}\]
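Equations (64)-(67) can be transcribed directly; the sketch below (ours, with placeholder parameter values of the order of the fit results in Table 5) builds the composite bare-to-\(R\) transformation:

```python
import numpy as np

def bare_to_R_matrix(Delta_A, z_A, lam0, lam3, lam8):
    """Composite map of Eq. (67): V_bare = W [1 - a a^t / 2] V_R,
    with W from Eq. (64) and the (co-)vector a from Eq. (66)."""
    A = np.sqrt(2) * (z_A - 1) / (3 * z_A)        # Eq. (65)
    B = (2 * z_A + 1) / (3 * z_A)
    C = (z_A + 2) / (3 * z_A)
    W = np.array([
        [1.0, -Delta_A / np.sqrt(6), -Delta_A / (2 * np.sqrt(3))],
        [-Delta_A / np.sqrt(6), B, A],
        [-Delta_A / (2 * np.sqrt(3)), A, C],
    ])
    a = np.array([lam3, lam0 * B + lam8 * A, lam0 * A + lam8 * C])
    return W @ (np.eye(3) - 0.5 * np.outer(a, a))

# Placeholder values (order of magnitude of Table 5), lam3 = lam8 = 0:
print(bare_to_R_matrix(Delta_A=0.10, z_A=1.41, lam0=0.33, lam3=0.0, lam8=0.0))
```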
However, the \({\cal V}_{R}\) fields are still not the PS mass eigenstates denoted by the triplet (\(\pi^{0},\ \eta,\ \eta^{\prime}\)). One expects these _physical_ states to be related to the \({\cal V}_{R}\) fields via a 3-dimensional rotation and thus 3 angles. Adopting the Leutwyler parametrization [91], one has : \[\left(\begin{array}{c}\pi^{3}_{R}\\ \eta^{8}_{R}\\ \eta^{0}_{R}\end{array}\right)=\left(\begin{array}{ccc}1&-\epsilon&- \epsilon^{\prime}\\ \epsilon\cos\theta_{P}+\epsilon^{\prime}\sin\theta_{P}&\cos\theta_{P}&\sin \theta_{P}\\ -\epsilon\sin\theta_{P}+\epsilon^{\prime}\cos\theta_{P}&-\sin\theta_{P}&\cos \theta_{P}\end{array}\right)\left(\begin{array}{c}\pi^{0}\\ \eta\\ \eta^{\prime}\end{array}\right) \tag{68}\] to relate the \(R\) fields which diagonalize the kinetic energy to the physical (_i.e._ mass eigenstates) neutral PS fields. The three angles (\(\epsilon\), \(\epsilon^{\prime}\) and even \(\theta_{P}\)) are assumed to be \({\cal O}(\delta)\) perturbations; nevertheless, for clarity, the so-called third mixing angle [61] is not treated as manifestly small. On the other hand, the "angles" \(\epsilon\) and \(\epsilon^{\prime}\) are related to the light quark masses and it is worth stating that they are expected to be like-sign (see the discussion in [18]).
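The mixing matrix of Equation (68) is straightforward to check; the sketch below (ours, again with placeholder values of the order of Table 5) verifies that it is orthogonal up to \({\cal O}(\epsilon^{2},\epsilon^{\prime 2})\) corrections, consistent with treating \(\epsilon\) and \(\epsilon^{\prime}\) as \({\cal O}(\delta)\):

```python
import numpy as np

def leutwyler_mixing(eps, epsp, theta_P):
    """Matrix of Eq. (68) relating (pi3_R, eta8_R, eta0_R) to (pi0, eta, eta')."""
    c, s = np.cos(theta_P), np.sin(theta_P)
    return np.array([
        [1.0, -eps, -epsp],
        [eps * c + epsp * s, c, s],
        [-eps * s + epsp * c, -s, c],
    ])

R = leutwyler_mixing(eps=0.036, epsp=0.002, theta_P=np.radians(-15.6))
# Deviations from orthogonality are quadratic in eps, eps' only:
print(np.round(R @ R.T - np.eye(3), 5))
```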
## Appendix B Erratum : The VPP/APP interaction pieces in BHLS\({}_{2}\) It is worthwhile to list the \(VPP\) and \(APP\) interaction terms of the BHLS\({}_{2}\) Lagrangian, corrected where needed, that are relevant to the present study, _i.e._ those involving the charged and neutral pion fields and the \(\eta\) and \(\eta^{\prime}\) mesons. We have : \[\begin{array}{ll}{\cal L}_{\pi^{-}\pi^{+}}=&ie\left[1-\frac{a}{2}(1+\Sigma_{ V})\right]A\cdot\pi^{-}\stackrel{{\leftrightarrow}}{{ \partial}}\pi^{+}+\frac{iag}{2}(1+\Sigma_{V})\left[1+\xi_{3}\right]\rho_{I}^{0 }\cdot\pi^{-}\stackrel{{\leftrightarrow}}{{\partial}}\pi^{+}\\ {\cal L}_{\pi^{0}\pi^{\pm}}=&\frac{iag}{2}(1+\Sigma_{V})(1-\frac{\lambda_{3}^ {2}}{2})\left[\rho^{-}\cdot\pi^{+}\stackrel{{\leftrightarrow}}{{ \partial}}\pi^{0}-\rho^{+}\cdot\pi^{-}\stackrel{{\leftrightarrow }}{{\partial}}\pi^{0}\right]\\ {\cal L}_{\eta\pi^{\pm}}=&-\frac{iag}{2}\left[\left\{\frac{1}{2 \sqrt{3}}\Delta_{A}+\frac{\lambda_{3}\widetilde{\lambda}_{8}}{2}\right\} \cos\theta_{P}-\left\{\frac{1}{\sqrt{6}}\Delta_{A}+\frac{\lambda_{3} \widetilde{\lambda}_{0}}{2}\right\}\sin\theta_{P}+\epsilon\right]\\ &[1+\Sigma_{V}]\left[\rho^{-}\cdot\pi^{+}\stackrel{{ \leftrightarrow}}{{\partial}}\eta-\rho^{+}\cdot\pi^{-}\stackrel{{ \leftrightarrow}}{{\partial}}\eta\right]\\ \end{array} \tag{69}\] The last 2 Lagrangian pieces supersede the corresponding formulae displayed in Equations (45) of [18]; there, they were given for completeness but left unused. In the present study they should be considered. In the expressions above, the kinetic breaking parameters occur; besides \(\lambda_{3}\), one also has : \[\widetilde{\lambda}_{0}=\lambda_{0}B+\lambda_{8}A\;,\;\;\;\widetilde{\lambda }_{8}=\lambda_{0}A+\lambda_{8}C \tag{70}\] where \(A\), \(B\) and \(C\) have been recalled in Appendix A.5 just above. On the other hand, we have chosen here to keep the \(\Sigma_{V}\) parameter for clarity. However, in [18] it has been shown that it is out of reach and can be fixed to zero without any loss of generality. ## Appendix C \(A_{\pm}\) Solutions : The \(AAP\) and \(VVP\) Lagrangians It is worthwhile displaying the anomalous BHLS\({}_{2}\) Lagrangian pieces associated with the so-called triangle anomalies, having imposed the Kroll Conditions [58], examined in full detail in [18] and briefly sketched in Section 3. Using obvious notations, these anomalous pieces are derived from [10, 13] : \[\left\{\begin{array}{ll}{\cal L}_{VVP}=&-\frac{N_{c}g^{2}}{4\pi^{2}f_{\pi}}\ c_{3} \epsilon^{\mu\nu\alpha\beta}{\rm Tr}[\partial_{\mu}V_{\nu}\partial_{\alpha}V_{ \beta}P]\\ \\ {\cal L}_{AAP}=&-\frac{N_{c}e^{2}}{4\pi^{2}f_{\pi}}\ (1-c_{4})\epsilon^{\mu\nu \alpha\beta}\partial_{\mu}A_{\nu}\partial_{\alpha}A_{\beta}{\rm Tr}[Q^{2}P]\\ \\ {\cal L}_{AVP}=&-\frac{N_{c}ge}{8\pi^{2}f_{\pi}}\ (c_{4}-c_{3})\epsilon^{\mu \nu\alpha\beta}\partial_{\mu}A_{\nu}{\rm Tr}[\{\partial_{\alpha}V_{\beta},Q\}P ]\end{array}\right. \tag{71}\] The phenomenology examined so far with the broken variants of the HLS model never led to considering a non-zero \(c_{3}-c_{4}\); therefore, one assumes \(c_{3}=c_{4}\), which discards the \({\cal L}_{AVP}\) Lagrangian piece. Unless otherwise stated, the neutral vector fields displayed here are the so-called ideal combinations generally named \(\rho^{I}\), \(\omega^{I}\) and \(\phi^{I}\). The transformation which connects the _bare_ vector fields to their _physical_ partners is treated in [17] and briefly recalled in Appendix A above. We also remind here the definition for \(\delta_{P}\) : \[\left\{\begin{array}{ll}\sin\delta_{P}=\frac{1}{\sqrt{3}}\left(\sqrt{2}\sin \theta_{P}-\cos\theta_{P}\right),&\cos\delta_{P}=\frac{1}{\sqrt{3}}\left(\sqrt{ 2}\cos\theta_{P}+\sin\theta_{P}\right)\end{array}\right. \tag{72}\] and (\(d_{\pm}\equiv\pm 1\)) : \[A_{\pm}=\Delta_{A}+d_{\pm}\lambda_{0}^{2}\ , \tag{73}\] used below. ### The \(AAP\) Lagrangian The \(AAP\) Lagrangian, defined in Equations (71) just above with \(Q\) the quark charge matrix and \(P\) the \(U(3)\) symmetric matrix of the bare pseudoscalar fields, is given here for definiteness. Defining : \[\left\{\begin{array}{ll}g_{\pi^{0}\gamma\gamma}=&\frac{1}{6} \left\{1-\frac{5}{6}A_{\pm}-\frac{\lambda_{0}^{2}}{3}\right\}\\ &-\frac{\epsilon}{18z_{A}}\left\{5z_{A}\sin\delta_{P}+\sqrt{2}\cos \delta_{P}\right\}-\frac{\epsilon^{\prime}}{18z_{A}}\left\{\sqrt{2}\sin\delta _{P}-5z_{A}\cos\delta_{P}\right\}\,,\\ \\ g_{\eta\gamma\gamma}=&-\frac{\epsilon}{6}-\frac{\sqrt{2}}{18z_{A}}\cos \delta_{P}+\frac{1}{12}\left\{A_{\pm}+\frac{5}{6}(3\lambda_{0}^{2}-4)\right\} \sin\delta_{P}\\ \\ g_{\eta^{\prime}\gamma\gamma}=&-\frac{\epsilon^{\prime}}{6}-\frac{\sqrt{2}}{18 z_{A}}\sin\delta_{P}-\frac{1}{12}\left\{A_{\pm}+\frac{5}{6}(3\lambda_{0}^{2}-4) \right\}\cos\delta_{P}\end{array}\right. \tag{74}\] the coupling constants for the physical mesons \(P_{0}\gamma\gamma\) (\(P_{0}=\pi^{0},\eta,\eta^{\prime}\)) are given by: \[G_{P_{0}\gamma\gamma}=-\frac{3\alpha_{em}}{\pi f_{\pi}}(1-c_{4})g_{P_{0} \gamma\gamma}, \tag{75}\] and the \(AAP\) Lagrangian can also be written : \[{\cal L}_{AAP_{0}}=G_{P_{0}\gamma\gamma}\;\;P_{0}\;\epsilon^{\mu\nu\alpha\beta} \partial_{\mu}A_{\nu}\partial_{\alpha}A_{\beta}\;\;\mbox{for each of}\;\;\;\;P_{0}=\pi^{0},\eta,\eta^{\prime}\;. \tag{76}\]
### The \(VVP\) Lagrangian The \(VVP\) Lagrangian is given by : \[{\cal L}_{VVP}=-\frac{3g^{2}}{4\pi^{2}f_{\pi}}\;c_{3}\;\epsilon^{\mu\nu\alpha \beta}{\rm Tr}\left[\partial_{\mu}V_{\nu}\partial_{\alpha}V_{\beta}P\right]\; \;\;,\;\;\;\;C=-\frac{N_{c}g^{2}c_{3}}{4\pi^{2}f_{\pi}}\;. \tag{77}\] #### C.2.1 The \(VV\pi\) Lagrangians The \(VV\pi\) Lagrangians relevant for our phenomenology are given by : \[{\cal L}_{VVP}(\pi^{\pm})= \frac{C}{2}\epsilon^{\mu\nu\alpha\beta}\Bigg{\{}\left[\left(1+ \frac{2\xi_{0}+\xi_{8}}{3}\right)\partial_{\mu}\omega_{\nu}^{I}+\frac{\sqrt{2 }}{3}(\xi_{0}-\xi_{8})\partial_{\mu}\phi_{\nu}^{I}\right]\times\left[\partial_ {\alpha}\rho_{\beta}^{+}\pi^{-}+\partial_{\alpha}\rho_{\beta}^{-}\pi^{+} \right]\Bigg{\}} \tag{78}\] and : \[{\cal L}_{VVP}(\pi^{0})= \frac{C}{2}\epsilon^{\mu\nu\alpha\beta}\Bigg{\{}G_{0}\partial_{ \mu}\rho_{\nu}^{I}\partial_{\alpha}\omega_{\beta}^{I}+G_{1}\left[2\partial_{ \mu}\rho_{\nu}^{-}\partial_{\alpha}\rho_{\beta}^{+}+\partial_{\mu}\rho_{\nu}^ {I}\partial_{\alpha}\rho_{\beta}^{I}+\partial_{\mu}\omega_{\nu}^{I}\partial_{ \alpha}\omega_{\beta}^{I}\right]+G_{2}\partial_{\mu}\phi_{\nu}^{I}\partial_{ \alpha}\phi_{\beta}^{I}+G_{3}\partial_{\mu}\rho_{\nu}^{I}\partial_{\alpha} \phi_{\beta}^{I}\Bigg{\}}\;\pi^{0} \tag{79}\] where : \[\left\{\begin{array}{l}G_{0}=\left[1-\frac{\lambda_{0}^{2}}{3}+ \frac{2\xi_{0}+\xi_{8}}{3}+\xi_{3}\right]\\ G_{1}=-\frac{A_{\pm}}{4}+\frac{1}{2}\left[\;\epsilon^{\prime}\cos\delta_{P}- \epsilon\sin\delta_{P}\right]\\ G_{2}=-\frac{1}{z_{A}\sqrt{2}}\left[\;\epsilon^{\prime}\sin\delta_{P}+ \epsilon\cos\delta_{P}\right]\\ G_{3}=\frac{\sqrt{2}}{3}(\xi_{0}-\xi_{8})\end{array}\right. \tag{80}\] Actually, one imposes \(\xi_{0}=\xi_{8}\), so that, always, \(G_{3}=0\). #### C.2.2 The \(VV\eta\) Lagrangian The \(VV\eta\) Lagrangian is given by : \[\begin{array}{ll}{\cal L}_{VVP}(\eta)=&\frac{C}{2}\epsilon^{\mu \nu\alpha\beta}\Bigg{\{}K_{1}\partial_{\mu}\rho_{\nu}^{-}\partial_{\alpha}\rho_ {\beta}^{+}+K_{2}\partial_{\mu}\rho_{\nu}^{I}\partial_{\alpha}\rho_{\beta}^{I}+ K_{3}\partial_{\mu}\omega_{\nu}^{I}\partial_{\alpha}\omega_{\beta}^{I}+K_{4} \partial_{\mu}\phi_{\nu}^{I}\partial_{\alpha}\phi_{\beta}^{I}\\ &\hskip 14.226378pt+K_{5}\partial_{\mu}\omega_{\nu}^{I}\partial_{\alpha}\phi_ {\beta}^{I}+K_{6}\partial_{\mu}\rho_{\nu}^{I}\partial_{\alpha}\omega_{\beta}^{ I}\Bigg{\}}\,\eta\end{array} \tag{81}\] Having defined38 : Footnote 38: Referring to [18], the Kroll conditions turn out to fix \(H_{1}=0\). \[H_{2}=\frac{1}{8}\left[3\lambda_{0}^{2}-4\right]\;\;\;,\;\;\;H_{3}=-\frac{ \sqrt{2}}{6z_{A}}\left[3+2\xi_{0}+4\xi_{8}\right]\,, \tag{82}\] the \(VV\eta\) couplings become : \[\left\{\begin{array}{ll}K_{1}=2H_{2}\sin\delta_{P}\ \,&K_{2}=(H_{2}-\xi_{3})\sin \delta_{P}\\ K_{3}=\left[H_{2}-\frac{2\xi_{0}+\xi_{8}}{3}\right]\sin\delta_{P}\ \,&K_{4}=H_{3}\cos\delta_{P}\\ K_{5}=-\frac{(\xi_{0}-\xi_{8})}{3z_{A}}\left[2\cos\delta_{P}+z_{A}\sqrt{2}\sin \delta_{P}\right]&,&K_{6}=\frac{A_{\pm}}{2}\sin\delta_{P}-\epsilon\end{array}\right. \tag{83}\] Actually, similarly to just above, the \(K_{5}\) term drops out in the practical BHLS\({}_{2}\) context.
#### C.2.3 The \(VV\eta^{\prime}\) Lagrangian The \(VV\eta^{\prime}\) Lagrangian is given by : \[\begin{array}{ll}{\cal L}_{VVP}(\eta^{\prime})=&\frac{C}{2} \epsilon^{\mu\nu\alpha\beta}\Bigg{\{}K_{1}^{\prime}\partial_{\mu}\rho_{\nu}^{ -}\partial_{\alpha}\rho_{\beta}^{+}+K_{2}^{\prime}\partial_{\mu}\rho_{\nu}^{I} \partial_{\alpha}\rho_{\beta}^{I}+K_{3}^{\prime}\partial_{\mu}\omega_{\nu}^{I} \partial_{\alpha}\omega_{\beta}^{I}+K_{4}^{\prime}\partial_{\mu}\phi_{\nu}^{I} \partial_{\alpha}\phi_{\beta}^{I}\\ &\hskip 14.226378pt+K_{5}^{\prime}\partial_{\mu}\omega_{\nu}^{I} \partial_{\alpha}\phi_{\beta}^{I}+K_{6}^{\prime}\partial_{\mu}\rho_{\nu}^{I} \partial_{\alpha}\omega_{\beta}^{I}\Bigg{\}}\,\eta^{\prime}\end{array} \tag{84}\] the \(VV\eta^{\prime}\) couplings being : \[\left\{\begin{array}{ll}K_{1}^{\prime}=-2H_{2}\cos\delta_{P}\ \,&K_{2}^{\prime}=-(H_{2}-\xi_{3})\cos\delta_{P}\\ K_{3}^{\prime}=-\left[H_{2}-\frac{2\xi_{0}+\xi_{8}}{3}\right]\cos\delta_{P}\ \,&K_{4}^{ \prime}=H_{3}\sin\delta_{P}\\ K_{5}^{\prime}=-\frac{(\xi_{0}-\xi_{8})}{3z_{A}}\left[-z_{A}\sqrt{2}\cos\delta _{P}+2\sin\delta_{P}\right]&,&K_{6}^{\prime}=-\frac{A_{\pm}}{2}\cos\delta_{P}- \epsilon^{\prime}\end{array}\right. \tag{85}\] where, likewise, the \(K_{5}^{\prime}\) term drops out in the practical BHLS\({}_{2}\) context, where \(\xi_{0}=\xi_{8}\). The \(H_{i}\) functions occurring here have been defined in our previous paper and recalled in the Subsection just above; \(H_{1}\) vanishes as a consequence of the Kroll Conditions. One should also note that the \(VV\eta^{\prime}\) couplings are related to the \(VV\eta\) couplings and can be derived from them by making, in the \(VV\eta\) couplings, the substitutions : \[\left\{\sin\delta_{P}\rightarrow-\cos\delta_{P}\ \ {\rm and}\ \ \ \cos\delta_{P} \rightarrow\sin\delta_{P}\right\}\.\]
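This substitution rule is easy to verify numerically; the sketch below (ours, with arbitrary illustrative inputs, and with \(\epsilon\to\epsilon^{\prime}\) understood in the \(K_{6}\) term) maps the couplings of Equation (83) onto those of Equation (85):

```python
import numpy as np

def VV_eta_couplings(sd, cd, H2, H3, xi0, xi8, xi3, zA, Apm, eps):
    """K_1..K_6 of Eq. (83), written with sin(delta_P) = sd, cos(delta_P) = cd."""
    return np.array([
        2 * H2 * sd,
        (H2 - xi3) * sd,
        (H2 - (2 * xi0 + xi8) / 3) * sd,
        H3 * cd,
        -(xi0 - xi8) / (3 * zA) * (2 * cd + zA * np.sqrt(2) * sd),
        Apm / 2 * sd - eps,
    ])

# Arbitrary illustrative values (NOT fit results); xi0 != xi8 kept to test K5 too.
pars = dict(H2=-0.45, H3=-0.35, xi0=0.01, xi8=0.02, xi3=0.06,
            zA=1.41, Apm=0.12, eps=0.036)
dP = np.radians(-10.0)
sd, cd = np.sin(dP), np.cos(dP)

K = VV_eta_couplings(sd, cd, **pars)
# Substitution rule: sin(dP) -> -cos(dP), cos(dP) -> sin(dP); also eps -> eps'.
Kp = VV_eta_couplings(-cd, sd, **{**pars, "eps": 0.002})
print(K)
print(Kp)   # reproduces the K'_i of Eq. (85) term by term
```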
## Appendix D \(A_{\pm}\) Solutions : The \(APPP\) and \(VPPP\) Lagrangians Besides the Lagrangian pieces associated with the triangle anomalies recalled in the Appendix just above, those associated with the so-called box anomalies play an important role in the \(\eta/\eta^{\prime}\rightarrow\pi^{+}\pi^{-}\gamma\) decays and in the \(e^{+}e^{-}\rightarrow\pi^{+}\pi^{-}\pi^{0}\) annihilation thoroughly considered in our previous work [18]. We find it helpful to provide their expressions once the Kroll conditions are applied. The \(APPP\) and \(VPPP\) Lagrangian pieces introduce a new HLS parameter (\(c_{1}-c_{2}\)) not fixed by the model, which should be derived from fits. As for the \(VVP\) interactions recalled in Appendix C, the neutral vector fields occurring in the \(VPPP\) interaction Lagrangian are their ideal combinations; in practical applications they should be expressed in terms of _physical_ vector fields as developed in [17]. ### The \(APPP\) Lagrangian The \(APPP\) Lagrangian is given by : \[{\cal L}_{APPP}=D\ \epsilon^{\mu\nu\alpha\beta}A_{\mu}{\rm Tr}\left[Q\partial _{\nu}P\partial_{\alpha}P\partial_{\beta}P\right]\ \,\ \ D=-i\frac{N_{c}e}{3\pi^{2}f_{\pi}^{3}}\left[1-\frac{3}{4}(c_{1}-c_{2}+c_{4}) \right]\, \tag{86}\] Regarding the phenomenology we address, the relevant \(APPP\) Lagrangian piece to be considered is : \[{\cal L}_{APPP}^{1}=D\epsilon^{\mu\nu\alpha\beta}A_{\mu}\left\{g_{\gamma\pi^{0 }}\partial_{\nu}\pi^{0}+g_{\gamma\eta}\partial_{\nu}\eta+g_{\gamma\eta^{ \prime}}\partial_{\nu}\eta^{\prime}\right\}\ \partial_{\alpha}\pi^{-}\partial_{\beta}\pi^{+}\, \tag{87}\] in terms of fully renormalized PS fields. Requiring the \(A_{\pm}\) Kroll conditions, these \(g_{\gamma P}\) couplings can be written : \[\left\{\begin{array}{l}g_{\gamma\pi^{0}}=-\frac{1}{4}\left[1- \frac{A_{\pm}}{2}-\frac{\lambda_{0}^{2}}{3}-\epsilon\sin\delta_{P}+\epsilon^{ \prime}\cos\delta_{P}\right]\\ g_{\gamma\eta}=\ \ \ \left[1-\frac{A_{\pm}}{2}-\frac{3\lambda_{0}^{2}}{4} \right]\frac{\sin\delta_{P}}{4}+\frac{\epsilon}{4}\\ g_{\gamma\eta^{\prime}}=-\left[1-\frac{A_{\pm}}{2}-\frac{3\lambda_{0}^{2}}{4} \right]\frac{\cos\delta_{P}}{4}+\frac{\epsilon^{\prime}}{4}\end{array}\right. \tag{88}\] keeping only the leading order terms in breakings. ### The \(VPPP\) Lagrangian The \(VPPP\) anomalous HLS Lagrangian is : \[{\cal L}_{VPPP}=-i\frac{N_{c}g}{4\pi^{2}f_{\pi}^{3}}(c_{1}-c_{2}-c_{3})\epsilon^ {\mu\nu\alpha\beta}{\rm Tr}[V_{\mu}\partial_{\nu}P\partial_{\alpha}P\partial_{ \beta}P] \tag{89}\] where the \(c_{i}\) are the FKTUY parameters not fixed by the model; \(N_{c}\) is the number of colors, fixed to 3. The \(V\) and \(P\) field matrices are the bare ones. The relevant part of \({\cal L}_{VPPP}\) within the present context is : \[\left\{\begin{array}{l}{\cal L}_{VP_{0}\pi^{+}\pi^{-}}=E\epsilon^{\mu\nu \alpha\beta}\left\{\left[g^{0}_{\rho\pi}\partial_{\nu}\pi^{0}+g^{0}_{\rho\eta} \partial_{\nu}\eta+g^{0}_{\rho\eta^{\prime}}\partial_{\nu}\eta^{\prime}\right] \rho_{\mu}^{I}\right.\\ \left.\;\;\;+\left[g^{0}_{\omega\pi}\partial_{\nu}\pi^{0}+g^{0}_{\omega\eta} \partial_{\nu}\eta+g^{0}_{\omega\eta^{\prime}}\partial_{\nu}\eta^{\prime} \right]\omega_{\mu}^{I}+g^{0}_{\phi\pi}\partial_{\nu}\pi^{0}\;\phi_{\mu}^{I} \right\}\partial_{\alpha}\pi^{-}\partial_{\beta}\pi^{+}\\ {\rm with}\ \ E=-i\frac{3g(c_{1}-c_{2}-c_{3})}{4\pi^{2}f_{\pi}^{3}}\end{array}\right. \tag{90}\] in terms of the _physical_ pseudoscalar fields. Keeping only the \(A_{\pm}\) solutions and the leading order breaking terms, the couplings just defined are : \[\left\{\begin{array}{l}g^{0}_{\rho\pi^{0}}=\frac{1}{4}\left[\frac{A_{\pm}}{2 }+\epsilon\sin\delta_{P}-\epsilon^{\prime}\cos\delta_{P}\right]\\ g^{0}_{\rho\eta}=\frac{1}{4}\left[1+\xi_{3}-\frac{3}{4}\lambda_{0}^{2} \right]\sin\delta_{P}\\ g^{0}_{\rho\eta^{\prime}}=-\frac{1}{4}\left[1+\xi_{3}-\frac{3}{4}\lambda_{0}^{ 2}\right]\cos\delta_{P}\end{array}\right. \tag{91}\] and : \[\left\{\begin{array}{l}g^{0}_{\omega\pi^{0}}=-\frac{3}{4}\left[1+\frac{2 \xi_{0}+\xi_{8}}{3}-\frac{1}{3}\lambda_{0}^{2}\right]\\ g^{0}_{\omega\eta}=\frac{3}{4}\left\{\epsilon-\frac{A_{\pm}}{2}\sin\delta_{P} \right\}\\ g^{0}_{\omega\eta^{\prime}}=\frac{3}{4}\left\{\epsilon^{\prime}+\frac{A_{\pm}} {2}\cos\delta_{P}\right\}\\ g^{0}_{\phi\pi}=-\frac{\sqrt{2}}{4}\left[\xi_{0}-\xi_{8}\right]\ \,\ \ g^{0}_{\phi\eta}=0,\ \ g^{0}_{\phi\eta^{ \prime}}=0\ \.\end{array}\right. \tag{92}\] As the pseudoscalar meson form factor values at the origin imply [17] \(\xi_{0}=\xi_{8}\), one observes that no term involving \(\phi^{I}\) survives at leading order in breakings. ## Appendix E Brief Analysis of the BHLS\({}_{2}\) Parameter Values Table 5 collects the model parameter values of the BHLS\({}_{2}\) Lagrangian. In order to figure out the effect of the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\pi^{0}\) annihilation data on the numerical results, its first data column (replicated from Table 10 in [18]) displays the fit parameter values derived when they are considered, whereas the second data column provides the same information when they are excluded from the fit procedure.
The third and fourth data columns report the fit results when the \(\eta/\eta^{\prime}\) dipion spectra are included within the set of data samples \({\cal H}_{R}\) amputated from the 3-pion data. Besides providing the parameter values themselves, the issue here is to reach an educated guess about Final State Interaction effects in the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\pi^{0}\) annihilation process : Could FSI in this channel be numerically invisible or is it absorbed effectively by the other model parameters? First of all, the last line in Table 5 clearly shows that one always reaches fair accounts of \begin{table} \begin{tabular}{||c||c||c|c|c||} \hline \hline Fit Parameter & with \(3\pi\) spectra & no \(\eta/\eta^{\prime}/3\pi\) Spectra & \(P_{\eta}(s)\neq P_{\eta^{\prime}}(s)\) & \(P_{\eta}(s)\equiv P_{\eta^{\prime}}(s)\) \\ \hline \(a_{HLS}\) & \(1.766\pm 0.001\) & \(1.789\pm 0.001\) & \(1.842\pm 0.001\) & \(1.821\pm 0.001\) \\ \hline \(g\) & \(6.954\pm 0.002\) & \(6.334\pm 0.001\) & \(6.236\pm 0.001\) & \(6.379\pm 0.001\) \\ \hline \((c_{3}+c_{4})/2\) & \(0.742\pm 0.003\) & \(0.756\pm 0.005\) & \(0.773\pm 0.005\) & \(0.772\pm 0.004\) \\ \hline \(\theta_{P}\) (degrees) & \(-15.59\pm 0.8\) & \(-16.471\pm 0.295\) & \(-17.614\pm 0.282\) & \(-17.433\pm 0.282\) \\ \hline \(\lambda_{0}\) & \(0.285\pm 0.009\) & \(0.325\pm 0.008\) & \(0.339\pm 0.008\) & \(0.334\pm 0.008\) \\ \hline \(z_{A}\) & \(1.406\pm 0.004\) & \(1.416\pm 0.015\) & \(1.418\pm 0.005\) & \(1.415\pm 0.005\) \\ \hline \(z_{V}\) & \(1.420\pm 0.001\) & \(1.375\pm 0.007\) & \(1.304\pm 0.001\) & \(1.320\pm 0.001\) \\ \hline \(\Delta_{A}\,\times 10^{2}\) & \(12.94\pm 4.91\) & \(12.191\pm 4.05\) & \(10.173\pm 5.39\) & \(10.249\pm 5.428\) \\ \hline \hline \(\epsilon\,\times 10^{2}\) & \(3.62\pm 0.30\) & \(5.383\pm 0.440\) & \(6.456\pm 0.439\) & \(6.385\pm 0.411\) \\ \hline \(\epsilon^{\prime}\,\times 10^{2}\) & \(0.17\pm 0.27\) & \(-3.623\pm 0.711\) & \(-6.809\pm 0.581\) & \(-7.021\pm 0.475\) \\ \hline \(\xi_{0}\,\times 10^{2}\) & \(-6.838\pm 0.018\) & \(1.178\pm 0.018\) & \(1.119\pm 0.013\) & \(-0.538\pm 0.014\) \\ \hline \(\xi_{3}\,\times 10^{2}\) & \(1.496\pm 0.150\) & \(6.082\pm 0.153\) & \(6.070\pm 0.136\) & \(5.609\pm 0.137\) \\ \hline \hline Fit Probability & 83.5 \% & 88.6 \% & 89.7\% & 90.6\% \\ \hline \hline \end{tabular} \end{table} Table 5: Fit parameter values based on the \(A_{-}\) BHLS\({}_{2}\) variant : The first data column recalls the parameter values when including the \(3\pi\) spectra, the second one provides the same information when the \(3\pi\) spectra are discarded from the fit procedure. The third and fourth data columns display the fit results when the \(\eta/\eta^{\prime}\) spectra are included and the \(3\pi\) spectra excluded. the spectra submitted to the BHLS\({}_{2}\) global fit. Regarding the parameters collected in the top bunch of the Table, one observes value differences beyond the reported fit uncertainties, though with magnitudes consistent with reasonable systematic effects. The lower bunch of parameters is less clear-cut. Indeed, regarding \(\epsilon\), \(\epsilon^{\prime}\) and \(\xi_{3}\), the values derived by the three fits excluding the 3-pion data are consistent with each other but not with the first-column result. The values for \(\xi_{0}\) are puzzling and may only indicate large systematics.
Therefore, there is no obvious hint of significant FSI effects in the \(e^{+}e^{-}\to\pi^{+}\pi^{-}\pi^{0}\) annihilation process; nevertheless, this certainly deserves dedicated work [74].
2302.00049
Transformers Meet Directed Graphs
Transformers were originally proposed as a sequence-to-sequence model for text but have become vital for a wide range of modalities, including images, audio, video, and undirected graphs. However, transformers for directed graphs are a surprisingly underexplored topic, despite their applicability to ubiquitous domains, including source code and logic circuits. In this work, we propose two direction- and structure-aware positional encodings for directed graphs: (1) the eigenvectors of the Magnetic Laplacian - a direction-aware generalization of the combinatorial Laplacian; (2) directional random walk encodings. Empirically, we show that the extra directionality information is useful in various downstream tasks, including correctness testing of sorting networks and source code understanding. Together with a data-flow-centric graph construction, our model outperforms the prior state of the art on the Open Graph Benchmark Code2 relatively by 14.7%.
Simon Geisler, Yujia Li, Daniel Mankowitz, Ali Taylan Cemgil, Stephan Günnemann, Cosmin Paduraru
2023-01-31T19:33:14Z
http://arxiv.org/abs/2302.00049v3
# Transformers Meet Directed Graphs ###### Abstract Transformers were originally proposed as a sequence-to-sequence model for text but have become vital for a wide range of modalities, including images, audio, video, and _undirected_ graphs. However, transformers for _directed_ graphs are a surprisingly underexplored topic, despite their applicability to ubiquitous domains including source code and logic circuits. In this work, we propose two direction- and structure-aware positional encodings for _directed_ graphs: (1) the eigenvectors of the Magnetic Laplacian - a direction-aware generalization of the combinatorial Laplacian; (2) directional random walk encodings. Empirically, we show that the extra directionality information is useful in various downstream tasks, including correctness testing of sorting networks and source code understanding. Together with a data-flow-centric graph construction, our model outperforms the prior state of the art on the Open Graph Benchmark Code2 relatively by 14.7%3. ## 1 Introduction Transformers have become a central component in many state-of-the-art machine learning models spanning a wide range of modalities. For example, transformers are used to generate solutions for competitive programming tasks from textual descriptions (Li et al., 2022), for conversational question answering with the popular ChatGPT (OpenAI, 2022), or to find approximate solutions to combinatorial optimization problems like the Traveling Salesman Problem (Kool et al., 2019). Transformers have also had success on graph learning tasks, e.g., for predicting the properties of molecules (Min et al., 2022). While virtually all prior works focus on undirected graphs, we advocate the use of directed graphs, as they are omnipresent and their directedness can determine the semantics. Such transformers that handle both undirected and directed graphs could become an important building block for many applications. The biggest challenge is arguably making the attention mechanism aware of the graph structure. For example, prior work modified the attention mechanism to incorporate structural information (Ying et al., 2021) or proposed hybrid architectures that also contain Graph Neural Networks (GNNs) (Mialon et al., 2021; Chen et al., 2022). Another (complementary) option is positional encodings, which are used by many, if not most, structure-aware transformers (Min et al., 2022). **Directional positional encodings.** Specifically, most of the literature on structure-aware positional encodings either uses basic measures like pairwise shortest path distances (Guo et al., 2021) or symmetrizes the graph for principled positional encodings, e.g. based on graph spectral theory (Dwivedi and Bresson, 2021). Importantly, due to symmetrization we might ignore essential information that determines semantics. For this reason, we propose to use the eigenvectors of the Magnetic Laplacian (§ 3), a natural direction-aware generalization of the well-known combinatorial Laplacian for undirected graphs (see Fig. 1). Moreover, we study directional random walk encodings (§ 4) that generalize basic measures like the shortest path distances. We show that our positional encodings are predictive for different distance measures on graphs in § 5. Moreover, our positional encodings can also improve GNNs (see § 6).
**Motivation for directed graphs.** We next discuss the importance of directed graphs using one of the studied tasks, namely correctness prediction for sorting networks (§ 6), as an example. A sorting network (Knuth, 1973) is a special type of sorting procedure that can be represented by a fixed sequence of operations. Equivalently, we can construct a (directed acyclic) data-flow graph from the operations modeling their dependencies. Conversely, the topological sorts of this graph correspond to different but semantically equivalent sequences of operations. Thus, directed graphs can drastically reduce the effective input dimensionality. Moreover, we show that ignoring the edge directions maps both correct and incorrect sorting networks to the same _undirected_ graph, losing critical information.

Figure 1: First eigenvector of the Magnetic Laplacian. Node size encodes the real value and color the imaginary value.

Interestingly, representing source code as a sequence is the de facto standard (Li et al., 2022; Feng et al., 2020; Chen et al., 2021; OpenAI, 2022). Even graph-based representations of code (Allamanis et al., 2018; Hu et al., 2020; Cummins et al., 2020; Guo et al., 2021; Bieber et al., 2022) only enrich sequential source code, e.g., with an Abstract Syntax Tree (AST). The insights above motivate us to rethink the graph construction for source code (§ 7), which not only boosts performance but makes the model invariant w.r.t. certain meaningless reorderings of statements. **Contributions:** **[I]** We make the connection between sinusoidal positional encodings and the eigenvectors of the Laplacian explicit (§ 2). **[II]** We propose _spectral positional encodings_ that also generalize to directed graphs (§ 3). **[III]** We extend random walk positional encodings to directed graphs (§ 4). **[IV]** As a plausibility check, we assess the predictiveness of structure-aware positional encodings for a set of graph distances (§ 5). **[V]** We introduce the task of predicting the correctness of _sorting networks_, a canonical ambiguity-free application where directionality is essential (§ 6). **[VI]** We quantify the benefits of modeling a sequence of program statements as a directed graph and rethink the graph construction for source code to boost predictive performance and robustness (§ 6 & 7). **[VII]** We set a new _state of the art_ on the OGB Code2 dataset (2.85% higher F1 score, 14.7% relatively) for function name prediction (§ 7). ## 2 Sinusoidal and Laplacian Encodings Due to the permutation equivariant attention, we typically introduce a domain-specific inductive bias with Positional Encodings (PEs). For example, Vaswani et al. (2017) proposed sinusoidal positional encodings for sequences along with the transformer architecture. It is commonly argued (Bronstein et al., 2021; Dwivedi and Bresson, 2021) that the eigenvectors of the (combinatorial) Laplacian generalize the sinusoidal positional encodings (see Fig. 2) to graphs, due to their relationship via the Graph Fourier Transformation (GFT) and Discrete Fourier Transformation (DFT) (Smith, 1999). Even though sinusoidal positional encodings capture the direction, eigenvectors of the Laplacian do not. But why is this the case? To understand differences and commonalities, we next contrast sinusoidal encodings, DFT, and Laplacian eigenvectors for a sequence (Fig. 1a, 1b). **Sequence encodings.** Specifically, sinusoidal encodings (Vaswani et al., 2017) form a \(d_{\text{model}}\)-dimensional embedding of token \(u\)'s integer position in the sequence using cosine \(\operatorname{PE}_{u,2j}^{(\sin)}\coloneqq\cos(u/10{,}000^{\nicefrac{{2j}}{{d_{\text{model}}}}})\) and sine \(\operatorname{PE}_{u,2j+1}^{(\sin)}\coloneqq\sin(u/10{,}000^{\nicefrac{{2j}}{{d_{\text{model}}}}})\) waves of varying frequencies with \(j\in\{0,1,...,\nicefrac{{d_{\text{model}}}}{{2}}-1\}\). Analogously, the **DFT** could be used to define positional encodings: \[X_{j}\coloneqq\sum_{v=0}^{n-1}x_{v}\Big{[}\underbrace{\cos\Big{(}\frac{2\pi}{ n}jv\Big{)}}_{\operatorname{PE}_{v,2j}^{(\text{dft})}}-i\cdot\underbrace{\sin \Big{(}\frac{2\pi}{n}jv\Big{)}}_{\operatorname{PE}_{v,2j+1}^{(\text{dft})}} \Big{]} \tag{1}\] Here \(X\) corresponds to signal \(x\) in the frequency domain. In contrast to the DFT, sinusoidal encodings (a) sweep the frequencies using a geometric series instead of a linear one; (b) also contain frequencies below \(\nicefrac{{1}}{{n}}\); and (c) have \(d_{\text{model}}\) components instead of \(2n\) (i.e. \(0\leq j<n\) in Eq. 1).
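As a concrete reference, a minimal implementation of these sinusoidal encodings (our sketch, following the cos/sin layout defined above; not code from the paper):

```python
import numpy as np

def sinusoidal_pe(n, d_model=100):
    """PE[u, 2j] = cos(u / 10000^(2j/d_model)), PE[u, 2j+1] = sin(...)."""
    u = np.arange(n)[:, None]             # token positions
    j = np.arange(d_model // 2)[None, :]  # frequency index
    angle = u / 10_000 ** (2 * j / d_model)
    pe = np.empty((n, d_model))
    pe[:, 0::2] = np.cos(angle)           # even components
    pe[:, 1::2] = np.sin(angle)           # odd components
    return pe

print(sinusoidal_pe(100).shape)  # (100, 100)
```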
**Graphs** generalize sequences to sets of tokens/nodes with arbitrary connections. In a graph \(\mathcal{G}=(V,E)\), the \(m\) edges \(E\) represent connections between the \(n\) nodes \(V\). Equivalently, the adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\) is \(A_{u,v}=1\) if \((u,v)\in E\) and zero otherwise (see § E for weighted graphs). We use \(\mathbf{X}^{(n)}\) for node features and \(\mathbf{X}^{(m)}\) for edge features. **Eigenvectors of Laplacian.** A "Graph Fourier Transformation" (GFT) can be defined via the eigendecomposition of the combinatorial Laplacian \(\mathbf{L}=\mathbf{\Gamma}\mathbf{\Lambda}\mathbf{\Gamma}^{-1}\), with diagonal matrix \(\mathbf{\Lambda}\) of eigenvalues and orthogonal matrix \(\mathbf{\Gamma}\) of eigenvectors (see § B for details on the GFT). Similarly to the DFT, \(\mathbf{\Gamma}\) can be used as positional encodings. The real, symmetric, and positive semi-definite unnormalized Laplacian \(\mathbf{L}_{U}\) as well as the degree-normalized Laplacian \(\mathbf{L}_{N}\) are defined as: \[\mathbf{L}_{U}\coloneqq\mathbf{D}_{S}-\mathbf{A}_{S}\ \ (2)\ \ \mathbf{L}_{N}\coloneqq\mathbf{I}-(\mathbf{D}_{S}^{- \nicefrac{{1}}{{2}}}\mathbf{A}_{S}\mathbf{D}_{S}^{-\nicefrac{{1}}{{2}}}) \tag{3}\] with the diagonal degree matrix \(\mathbf{D}_{S}\) of the symmetrized adjacency matrix \(\mathbf{A}_{S}=\mathbf{A}\lor\mathbf{A}^{\top}\). Symmetrization is required s.t. \(\mathbf{L}\) is guaranteed to be diagonalizable and \(\mathbf{\Gamma}\in\mathbb{R}^{n\times n}\) forms an orthogonal basis, which entails important properties of the GFT (see § C). We assume eigenvalues and eigenvectors to be ordered: \(0\leq\lambda_{0}\leq\lambda_{1}\leq\cdots\leq\lambda_{n-1}\). We call \(\lambda_{0}\) or \(\Lambda_{0,0}\) the first eigenvalue and \(\mathbf{\gamma}_{0}\) or \(\mathbf{\Gamma}_{:,0}\) the first eigenvector (that reflects the lowest frequency).

Figure 2: (a) Sinusoidal encodings (\(\sin\) components top and \(\cos\) below) with denominator \(1{,}000^{2j/d_{\text{model}}}\) and \(d_{\text{model}}=100\). (b) Laplacian eigenvectors of the sequence of Fig. 1(b) of length \(n=100\).
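These eigenvector encodings are equally compact; the following sketch (ours) also makes the loss of direction under symmetrization, the sign ambiguity, and the match with the Cosine Transformation discussed next explicit:

```python
import numpy as np

def laplacian_eigvec_pe(A, k=8, normalized=False):
    """Eigenvectors of the combinatorial Laplacian, Eqs. (2)/(3); A is {0,1}."""
    A_s = np.maximum(A, A.T)                # symmetrization: A v A^T
    d = A_s.sum(axis=1)
    if normalized:
        Dm = np.diag(1.0 / np.sqrt(np.maximum(d, 1)))
        L = np.eye(len(A)) - Dm @ A_s @ Dm  # Eq. (3)
    else:
        L = np.diag(d) - A_s                # Eq. (2)
    lam, gamma = np.linalg.eigh(L)          # ascending eigenvalues
    return lam[:k], gamma[:, :k]            # each column's sign is arbitrary

# Directed path 0 -> 1 -> ... -> n-1; the direction is lost after symmetrization.
n = 100
A = np.zeros((n, n))
A[np.arange(n - 1), np.arange(1, n)] = 1
lam, gamma = laplacian_eigvec_pe(A, k=4)
v = np.arange(n)
# Column j matches +-cos((v + 1/2) j pi / n) up to scale (Cosine Transform II):
print(np.corrcoef(gamma[:, 1], np.cos((v + 0.5) * 1 * np.pi / n))[0, 1])
```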
**Laplacian vs. DFT.** Two notable differences to the DFT are (1) the eigenvectors of the Laplacian are real-valued; (2) the _sign invariance_, i.e., if \(\mathbf{\gamma}\) is an eigenvector, so is \(-\mathbf{\gamma}\). While it is clear from symmetrization that the combinatorial Laplacian does not distinguish direction, we can also observe this in the resulting eigenvectors via the _sign invariance_ (2). **Cosine Transformation.** A possible set of eigenvectors of the combinatorial Laplacian for a sequence (Fig. 1b) is given by the Cosine Transformation Type II (Strang, 1999): \(\Gamma_{v,j}=\pm\cos((v+\nicefrac{{1}}{{2}})\,j\pi/n)\), where we must choose the same sign per \(j\). Since the Cosine Transformation typically fixes \(\pm\) to \(+\) (as does the DFT, Eq. 1), encodings of the first token/node are non-negative. However, for general graphs, it is not that simple to resolve this ambiguity (e.g. with multiple sink and source nodes). Thus, we typically use an arbitrary sign for each \(\mathbf{\gamma}\) (Dwivedi & Bresson, 2021). ## 3 Directional Spectral Encodings We propose to use the Magnetic Laplacian, a direction-aware generalization of the combinatorial Laplacian that encodes the direction with complex numbers. We can then use its eigenvectors for a structure-aware positional encoding that acknowledges the directedness of the graph. We define the **Magnetic Laplacian** (Furutani et al., 2020) \[\mathbf{L}_{U}^{(q)}\coloneqq\mathbf{D}_{S}-\mathbf{A}_{S}\odot\exp\Big{(}i\mathbf{\Theta}^{ (q)}\Big{)} \tag{4}\] with Hadamard product \(\odot\), element-wise \(\exp\), \(i=\sqrt{-1}\), \(\Theta_{u,v}^{(q)}\coloneqq 2\pi q(A_{u,v}-A_{v,u})\), and potential \(q\geq 0\). Recall, \(\mathbf{D}_{S}\) is the symmetrized degree matrix and \(\mathbf{A}_{S}\) the symmetrized adjacency matrix. The Magnetic Laplacian is a Hermitian matrix since it is equal to its conjugate transpose \(\mathbf{L}^{(q)}=(\bar{\mathbf{L}}^{(q)})^{\top}\) and, thus, comes with complex eigenvectors \(\mathbf{\Gamma}\in\mathbb{C}^{n\times n}\). Except for \(\exp\big{(}i\mathbf{\Theta}^{(q)}\big{)}\), Eq. 4 is equivalent to the combinatorial Laplacian and we recover it for \(q=0\). Moreover, if the graph is undirected, we recover the combinatorial Laplacian for any \(q\leq 0.25\). The Magnetic Laplacians for a sequence with \(q=0\) and \(q=\nicefrac{{1}}{{4}}\) are given next, as well as their first eigenvectors in Fig. 1b and 1a: \[\mathbf{L}_{U}^{(0)}=\left[\begin{smallmatrix}1&-1&\cdots&0&0\\ -1&2&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&2&-1\\ 0&0&\cdots&-1&1\end{smallmatrix}\right]\qquad\mathbf{L}_{U}^{(\nicefrac{{1}}{{4}})}=\left[\begin{smallmatrix}1&-i&\cdots&0&0\\ i&2&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&2&-i\\ 0&0&\cdots&i&1\end{smallmatrix}\right] \tag{5}\] The \(\exp\big{(}i\mathbf{\Theta}^{(q)}\big{)}\) term encodes the edge direction. It resolves to \(1\) if \(A_{u,v}=A_{v,u}\) and, otherwise, to \(\exp(\pm i2\pi q)\), with the sign encoding the edge direction. The _potential_ \(q\) determines the ratio of real and imaginary part. Recall that \(\exp(i\alpha)=\cos(\alpha)+i\sin(\alpha)\). Conversely, \(\angle(\Gamma_{u,0})=\arctan 2(\Im(\Gamma_{u,0}),\Re(\Gamma_{u,0}))\) with real / imaginary value \(\Re\) / \(\Im\). In our experiments, we use the degree-normalized counterpart \(\mathbf{L}_{N}^{(q)}\coloneqq\mathbf{I}-\left(\mathbf{D}_{S}^{-\nicefrac{{1}}{{2}}}\mathbf{ A}_{S}\mathbf{D}_{S}^{-\nicefrac{{1}}{{2}}}\right)\odot\exp\big{(}i\mathbf{\Theta}^{ (q)}\big{)}\).
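The Magnetic Laplacian is only a few lines of NumPy; the sketch below (ours) builds Eq. 4, confirms Hermiticity, and previews how the phase of the first eigenvector tracks position in a directed path:

```python
import numpy as np

def magnetic_laplacian(A, q=0.25, normalized=False):
    """Magnetic Laplacian of Eq. 4 for a {0,1} adjacency matrix A."""
    A_s = np.maximum(A, A.T)                  # symmetrized adjacency
    theta = 2 * np.pi * q * (A - A.T)         # Theta^(q), antisymmetric
    if normalized:
        Dm = np.diag(1.0 / np.sqrt(np.maximum(A_s.sum(axis=1), 1)))
        L = np.eye(len(A), dtype=complex) - (Dm @ A_s @ Dm) * np.exp(1j * theta)
    else:
        L = np.diag(A_s.sum(axis=1)).astype(complex) - A_s * np.exp(1j * theta)
    assert np.allclose(L, L.conj().T)         # Hermitian by construction
    lam, gamma = np.linalg.eigh(L)            # real eigenvalues
    return lam, gamma

# Directed path 0 -> 1 -> ... -> n-1 with q = q' / (n - 1), q' = 1/4.
n = 8
A = np.zeros((n, n))
A[np.arange(n - 1), np.arange(1, n)] = 1
lam, gamma = magnetic_laplacian(A, q=0.25 / (n - 1))
# Relative to node 0, the phase changes by 2*pi*q per directed edge:
print(np.angle(gamma[:, 0] / gamma[0, 0]))
```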
**Directedness.** To illustrate how the _eigenvectors_ of the Magnetic Laplacian encode direction, we now analyze the first eigenvector \(\mathbf{\gamma}_{0}\). Each _directed_ edge \((u,v)\) encourages a phase difference in the (otherwise constant) first eigenvector, i.e., between \(\Gamma_{u,0}\) and \(\Gamma_{v,0}\). For simple cases, as in Fig. 3, each directed edge induces a rotation of \(2\pi q\) while each undirected edge synchronizes the rotation of the adjacent nodes. Note that self-loops are assumed to be undirected. In this example, the one-hop neighbors of node 3, namely nodes 1 and 4, have a relative rotation of \(2\pi q\), while the two-hop neighbor 2 has a relative rotation of \(4\pi q\). In general, the normalized first eigenvector (\(\bar{\mathbf{\gamma}}_{0}^{\top}\mathbf{\gamma}_{0}=1\)) minimizes \[\min_{\mathbf{x}\in\mathbb{C}^{n}}\frac{\bar{\mathbf{x}}^{\top}\mathbf{L}^{(q)}\mathbf{x}}{\bar{ \mathbf{x}}^{\top}\mathbf{x}}=\frac{1}{2}\sum_{(u,v)\in E_{S}}|\Gamma_{u,0}-\Gamma_{v, 0}\exp(i\Theta_{u,v}^{(q)})|^{2} \tag{6}\] and, therefore, trades off conflicting edges, e.g., when there are multiple (directed) routes of different lengths between two arbitrary nodes \(u\) and \(v\). See § D for more details. **Choosing potential \(q\).** We propose to scale the potential \(q\) with the number of nodes \(n\) and the amount of directed edges. Specifically, we choose \(q=\nicefrac{{q^{\prime}}}{{d_{\mathcal{G}}}}\) with _relative potential_ \(q^{\prime}\) and graph-specific normalizer \(d_{\mathcal{G}}\). This normalizer is an upper bound on the number of directed edges in a simple path, \(d_{\mathcal{G}}=\max(\min(\vec{m},n),1)\), with the number of _purely directed edges_ \(\vec{m}=|\{(u,v)\in E\,|\,(v,u)\notin E\}|\), and is motivated by Eq. 6 (see § D.1). We typically fix \(q^{\prime}=\nicefrac{{1}}{{4}}\) and empirically verify this in Fig. 9 where it is among the best. Interestingly, for high values of \(q^{\prime}\) the performance drops severely (corresponding to absolute \(q>0.05\)). **Scale and rotation.** If \(\mathbf{\gamma}\) is an eigenvector of \(\mathbf{L}\) then so is \(c\mathbf{\gamma}\), even if \(c\in\mathbb{C}\) with \(|c|>0\) (proof: \(c\mathbf{L}\mathbf{\gamma}=c\lambda\mathbf{\gamma}\implies\mathbf{L}(c\mathbf{\gamma})=\lambda(c \mathbf{\gamma})\)). For real symmetric matrices, there is the convention to choose \(c\in\mathbb{R}\) s.t. \(\mathbf{\Gamma}\) is orthonormal (\(\mathbf{\Gamma}^{\top}\mathbf{\Gamma}=\mathbf{I}\)). Similarly, we also propose to choose \(c\in\mathbb{C}\) by a convention. First, we choose \(|c|\) s.t. \(\mathbf{\Gamma}\) is unitary (\(\bar{\mathbf{\Gamma}}^{\top}\mathbf{\Gamma}=\mathbf{I}\)). Second, we determine the sign of each eigenvector such that the maximum real magnitude is positive. Third, we find the node \(u\) with the maximum phase shift in the lowest eigenvector \(\mathbf{\gamma}_{0}\), i.e. \(u=\arg\max_{v}\angle(\Gamma_{v,0})\), and use this node as "origin". We then rotate all eigenvectors such that the phase shift \(\angle(\Gamma_{u,j})\) is 0 for all \(j\in\{0,1,\dots,n-1\}\). This procedure relies on a sufficiently small \(q\), which we obtain with \(q^{\prime}\leq\nicefrac{{1}}{{4}}\). We provide more details in § D.2.

Figure 3: First eigenvector \(\mathbf{\gamma}_{0}\) of the Magnetic Laplacian (Eq. 4).
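One possible reading of this scale-and-rotation convention in code (our sketch; the exact procedure, including the small-\(q\) caveat, is detailed in § D.2):

```python
import numpy as np

def canonicalize(gamma):
    """Fix the scale/rotation ambiguity of complex eigenvectors (columns)."""
    # (1) |c|: make Gamma unitary, column-wise.
    gamma = gamma / np.linalg.norm(gamma, axis=0, keepdims=True)
    # (2) sign: entry of maximal real magnitude becomes positive, per column
    #     (assumes a nonzero real part at the argmax).
    k = gamma.shape[1]
    idx = np.argmax(np.abs(gamma.real), axis=0)
    gamma = gamma * np.sign(gamma.real[idx, np.arange(k)])
    # (3) "origin": the node with maximal phase in the first eigenvector ...
    u = np.argmax(np.angle(gamma[:, 0]))
    # ... gets zero phase in every eigenvector.
    return gamma * np.exp(-1j * np.angle(gamma[u, :]))[None, :]
```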
**MagLapNet.** Similarly to prior approaches (Lim et al., 2022; Kreuzer et al., 2021), we also preprocess eigenvectors before using them as positional encodings (Fig. 4b) to obtain a structure-aware transformer (Fig. 4a). We consider the eigenvectors associated with the \(k\) lowest eigenvalues \(\mathbf{\Gamma}_{:,:k-1}\) and treat \(k\) as a hyperparameter. To tackle the sign ambiguity of eigenvectors \(\mathbf{\Gamma}\), we follow a similar approach as SignNet (Lim et al., 2022), except for the first eigenvector \(\mathbf{\gamma}_{0}\), for which we can fully resolve the sign ambiguity. SignNet is invariant to sign changes since each eigenvector is processed as \(f_{\text{elem}}(-\mathbf{\gamma}_{j})+f_{\text{elem}}(\mathbf{\gamma}_{j})\), where \(f_{\text{elem}}\) is permutation invariant over the nodes (e.g. an MLP on each node or a GNN; we use an MLP). The crucial piece is that \(\mathbf{\gamma}_{j}\) is complex-valued and that real and imaginary parts are processed jointly via \(f_{\text{elem}}\). We stress that we use a lightweight network using \(\approx 0.5\%\) of the total number of parameters. After this first block, we stack the different components associated with each node and apply LayerNorm, Self-Attention, and Dropout. Similar to Kreuzer et al. (2021), we apply self-attention for each node \(u\) over its \(k\) corresponding eigenvector components (\(\mathbf{\Gamma}_{u,:k-1}\)). The last reshape stacks each node's encoding, and the MLP \(f_{\text{re}}\) matches the transformer dimensions. ## 4 Directional Random Walks An alternative principled and general approach for encoding node positions in a graph is through random walks. Li et al. (2020) have shown that such positional encodings can provably improve the expressiveness of GNNs. Consequently, such random walk encodings have been applied to transformers as well (Mialon et al., 2021). Here we study random walks where the transition probability is determined by the adjacency matrix and we obtain the probability of visiting node \(u\) starting at node \(v\). Interestingly, this generalizes the shortest-path distance via the number of steps required for a non-zero landing probability. However, naively applying random walks to directed graphs comes with caveats. **Directedness.** In a \(k\)-step random walk \(\mathbf{T}^{k}\mathbf{I}=\mathbf{T}^{k}\) with transition matrix \(\mathbf{T}=\mathbf{A}\mathbf{D}_{\text{out}}^{-1}\), we only look in the _forward_ direction, i.e., ancestor nodes have zero landing probability. For this reason, and in contrast to Li et al. (2020), we also consider the _reverse_ direction \(\mathbf{R}=\mathbf{A}^{\top}\mathbf{D}_{\text{in}}^{-1}\). Additionally, we add self-loops to sink nodes (nodes with zero out- or in-degree for \(\mathbf{T}\) or \(\mathbf{R}\), respectively). This prevents \(\mathbf{A}\) from being nilpotent and ensures that the landing probabilities sum up to one. We then define the positional encoding for node \(v\) as \(\zeta(v|\mathcal{G})=f_{\text{rw}}^{(1)}(\operatorname{AGG}(\{\zeta(v|u)\,|\, u\in V\}))\), where \(\zeta(v|u)=f_{\text{rw}}^{(2)}[(\mathbf{R}^{k})_{u,v},\dots,(\mathbf{R}^{2})_{u,v},R_ {u,v},T_{u,v},(\mathbf{T}^{2})_{u,v},\dots,(\mathbf{T}^{k})_{u,v}]\) and \(\operatorname{AGG}\) performs summation. \(f_{\text{rw}}^{(1)}\) and \(f_{\text{rw}}^{(2)}\) are stacks of linear layers with non-linear activations. **Large distances.** A large number of random walk steps \(k\) is expensive, and for a sufficiently large \(k\) the probability mass concentrates in sink nodes. Thus, the random walk positional encodings are best suited for capturing short distances. For the global relations, we extend \(\zeta(v|u)\) with a forward and reverse infinite-step random walk, namely Personalized PageRank (PPR) (Page et al., 1999). Importantly, PPR includes the restart probability \(p_{r}\) to jump back to the starting node \(u\) and has the closed-form solution \(p_{r}(\mathbf{I}+(p_{r}-1)\mathbf{T})^{-1}\).
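The raw features entering \(\zeta(v|u)\) can be sketched as follows (ours; a row-stochastic convention is assumed, and the learned \(f_{\text{rw}}^{(1)}\), \(f_{\text{rw}}^{(2)}\) networks are omitted):

```python
import numpy as np

def rw_features(A, k=3, p_r=0.15):
    """[R^k .. R, T .. T^k, PPR_fwd, PPR_rev] landing probabilities per (u, v)."""
    n = len(A)

    def transition(M):
        M = M.copy()
        sinks = M.sum(axis=1) == 0
        M[sinks] = np.eye(n)[sinks]                 # self-loops at sink nodes
        return M / M.sum(axis=1, keepdims=True)     # row-stochastic

    T = transition(A)        # forward walk
    R = transition(A.T)      # reverse walk
    feats = [np.linalg.matrix_power(R, i) for i in range(k, 0, -1)]
    feats += [np.linalg.matrix_power(T, i) for i in range(1, k + 1)]
    # Infinite-step walks with restart: PPR = p_r (I + (p_r - 1) M)^(-1).
    feats += [p_r * np.linalg.inv(np.eye(n) + (p_r - 1) * M) for M in (T, R)]
    return np.stack(feats, axis=-1)  # shape (n, n, 2k + 2)

A = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)  # path 0 -> 1 -> 2
print(rw_features(A).shape)  # (3, 3, 8)
```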
## 5 Positional Encodings Playground Now that we have defined two directional structure-aware positional encodings, we next assess their efficacy. We start by verifying that the encodings are predictive of (relative) distance measures on (directed) graphs. **Tasks.** We hypothesize that a good positional encoding should be able to distinguish between ancestors/successors and should have a notion of distance on the graph. To cope with general graphs, instead of ancestor/successor nodes, we predict if a node is reachable acknowledging the edge directions. As distance measures, we study the prediction of adjacent nodes as well as the _directed_ and _undirected_ shortest path distance. With _undirected_ shortest path distance we refer to the path length on the symmetrized graph, and in both cases we ignore node pairs for which no path exists. In summary, we study _(1) reachability_, _(2) adjacency_, _(3) undirected distance_, and _(4) directed distance_. **Models.** We use a transformer encoder (Vaswani et al., 2017) that becomes structure-aware solely because of positional encodings (see Fig. 4a). We compare our Magnetic Laplacian (ML) positional encodings (§ 3) with the direction-aware random walk (RW) of § 4 and eigenvectors of the combinatorial Laplacian (Lap.) from § 2. The eigenvectors of the combinatorial Laplacian are preprocessed like eigenvectors of the Magnetic Laplacian (Fig. 4b), except that the "Stack" step is superfluous due to the real eigenvectors. Moreover, with the goal of obtaining general positional encodings, we do not explicitly study any heuristics that can be considered "directional". For example, if solely considering trees, it might be sufficient to add features for source and sink nodes next to undirected positional encodings.

Figure 4: In (a) we show a transformer encoder operating on a graph. For simplicity, we omit the residual connection. (b) is one specific instantiation of the (optional) "PosEncNet" using the eigenvectors of the Magnetic Laplacian (see the "MagLapNet" paragraph).

The predictions for all these tasks are of shape \(n\times n\) (ignoring the disconnected pairs of nodes in distance regression), modeling the relative interactions between nodes. For this, we broadcast the resulting encodings \(\mathbf{H}_{l}^{(n)}\) (see Fig. 4a) of the sender and receiver nodes and concatenate a global readout. Thereafter, we use a shallow MLP with 3 layers in total and a task-dependent output activation. **Setup.** We use cross-entropy for classification and \(L^{2}\) loss for regression. We assess classification with the F1 score and regression with the Root Mean Squared Error (RMSE). We sample Erdos-Renyi graphs with equally probable average degree \(\{1,1.5,2\}\) and, additionally, Directed Acyclic Graphs (DAGs), where we draw the average degree from \(\{1,1.5,2,2.5,3\}\) to account for the greater sparsity. We then extract the largest (weakly) connected component. For the regression tasks, we sample graphs with 16 to 63, 64 to 71, and 72 to 83 nodes for train, validation, and test, respectively. To counteract a too severe class imbalance, we choose 16 to 17, 18 to 19, and 20 to 27 nodes for the classification tasks, respectively. We report the average over three random reruns. We sample 400,000 training instances and 2,500 test/validation instances for each number of nodes \(n\).
**Results.** In Fig. 5 we show the performance of the positional encodings for the four curated tasks. We see that the eigenvectors of the Magnetic Laplacian outperform the eigenvectors of the Combinatorial Laplacian for all measures that rely on directedness. For _(3) undirected distance_ they perform roughly equally well. Except for _(3)_, the random walk encodings perform similarly to the Magnetic Laplacian for data similar to the training data (i.e. validation) but outperform the Magnetic Laplacian on test. Random walks seem to show their strength for tasks that are well-aligned with their design. For example, a random walk with \(k=1\) resembles the adjacency matrix that we predict in task _(2)_. However, the random walk encodings fail loudly for _(3) undirected distance_ prediction (up to 130% worse). See the appendix for a hyperparameter study of the random walk encodings. ## 6 Application: Sorting Networks We also compare the different positional encodings for the correctness prediction task of sorting networks. Sorting networks (Knuth, 1973) are a certain class of comparison-based algorithms that sort any input sequence of fixed size with a static sequence of comparators. Sorting networks are a particularly interesting application since they mark the middle ground between logical statements and source code. Specifically, the sequence of comparators reflects a sequence of program instructions, while asserting their correctness is related to satisfiability (Knuth, 1968). We use this task to make the implications of symmetrization (Laplacian encodings) and sequentialization (sinusoidal encodings) explicit. We consider sorting networks that consist of a sequence of conditional exchange instructions. In Fig. 6(a), each horizontal line represents an element of the sequence of variables of length \(p\) that are to be sorted. A vertical line is a comparator that sorts the two connected elements. Thus, a sorting network can also be expressed by \(n\) statements `v_i, v_j = sorted((v_i, v_j))`, where `v_i` and `v_j` are two of the \(p\) variables, i.e. \(i,j\in\{0,1,\dots,p-1\}\). In our graph construction (Fig. 6(b)), we treat every instruction as a node with \(i\) and \(j\) as features (sinusoidal encoding). If a node operates on indices \(i\) and \(j\), we add an edge from the last occurrences of \(i\) and \(j\) (if there are any). Thus, in this data-flow graph, the in- and out-degree equals two for all nodes, except source and sink nodes. **Directed graph vs. sequence.** An important implication is that each topological sort of the directed graph is an equivalent "program", i.e., a particular ordering of the statements. Figure 5: Positional encodings playground results. Dark green encodes the best scores and bright green bad ones. For the F1 score, high values are better; for RMSE, low values are better. Figure 6: (a) Common illustration of a sorting network with sequence length \(p=5\) and (b) the same network as a directed graph. In Fig. 7, we show the number of topological sorts for a type of compact and deterministically constructed sorting network over different sequence lengths \(p\). For such networks and a sequence length of just 8, the number of equivalent sequentializations already exceeds 1 million (see also § A). In the worst case, a directed graph has \(n!\) topological sorts (a counting sketch follows below). Representing directed graphs as sequences, therefore, can introduce a huge amount of arbitrary orderedness. In contrast to sequential modeling, a graph-based representation can significantly reduce the size of the effective input space.
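To make the count of equivalent sequentializations concrete, the following small sketch (ours, not from the paper) counts the topological sorts of a DAG by bitmask dynamic programming; it is exponential in \(n\) and meant only for small graphs such as the sorting networks above.

```python
def count_topological_sorts(A):
    """Number of topological sorts of a DAG; A[i][j] = 1 iff edge i -> j."""
    n = len(A)
    preds = [sum(1 << i for i in range(n) if A[i][j]) for j in range(n)]
    ways = [0] * (1 << n)   # ways[S]: orderings of the prefix set S
    ways[0] = 1
    for S in range(1 << n):
        if ways[S]:
            for j in range(n):
                # j can come next iff it is unused and all of its
                # predecessors are already placed.
                if not (S >> j) & 1 and (preds[j] & ~S) == 0:
                    ways[S | (1 << j)] += ways[S]
    return ways[(1 << n) - 1]

# A chain 0 -> 1 -> 2 has a single ordering; an empty graph has n! of them.
assert count_topological_sorts([[0, 1, 0], [0, 0, 1], [0, 0, 0]]) == 1
assert count_topological_sorts([[0] * 3 for _ in range(3)]) == 6
```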
**Symmetrization hurts.** There exist correct and incorrect sorting networks that map to the same undirected graph. A model using an undirected graph cannot distinguish these cases. For example, the _correct sorting network_ for length three with comparators \([(0,2),(0,1),(1,2)]\) and its reversed version (_incorrect_) map to the same undirected graph. **Dataset.** We construct a dataset consisting of 800,000 training instances for equally probable sequence lengths \(7\leq p_{\text{train}}\leq 11\), generate the validation data with \(p_{\text{val}}=12\), and assess performance on sequence lengths \(13\leq p_{\text{test}}\leq 16\). We construct the sorting networks greedily until we have a correct sorting network (a correctness check is sketched below). For this, we draw a random pair of comparators, excluding immediate duplicates and comparators between inputs that are already sorted. We then generate an incorrect example by omitting the last comparator (i.e., train is balanced). This procedure is similar to dataset constructions for the related task of satisfiability (Selsam et al., 2019). Moreover, we add additional incorrect sorting networks by reversing the directions of the correct networks to make the test sets more challenging. Thus, the test and validation data consist of \(\nicefrac{{1}}{{3}}\) correct sorting networks (20,000) and \(\nicefrac{{2}}{{3}}\) incorrect ones (40,000). Therefore, the task is to generalize the correctness prediction to longer sequences and reversed (slightly out of distribution) sorting networks. See § A for more details on the dataset construction. **Empirical Evaluation.** We follow the setup of § A, but also include sinusoidal positional encodings (Sin) and report the mean/error for 5 reruns. The eigenvectors of the Magnetic Laplacian (ML) outperform all other positional encodings, as shown in Fig. 8, with just one exception. This shows that, without bells and whistles, the Magnetic Laplacian can give a vanilla transformer considerable structural awareness for directed graphs (see Fig. 3(a)). On the other hand, the eigenvectors of the Combinatorial Laplacian barely outperform the naive baseline that randomly chooses a class based on the prior probabilities. The random walk encodings seem to hold the middle ground between Laplacian and Magnetic Laplacian. Random walk encodings are here slightly better for short sequences but struggle to generalize to longer ones. A GNN baseline roughly matches the combination of Magnetic Laplacian and sinusoidal positional encodings for sequence length 13 (F1 score of roughly 0.875). However, here the GNN generalizes slightly better to longer sequences. Motivated by Li et al. (2020), we additionally pair the GNN with positional encodings and find that the Magnetic Laplacian eigenvectors can also help a GNN's generalization. The random walk encodings again struggle to generalize to longer sequences and harm performance.
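The dataset construction above needs an exact correctness oracle. A compact way to implement one (our sketch, leaning on Knuth's zero-one principle: a comparator network on \(p\) wires sorts every input iff it sorts all \(2^{p}\) binary sequences) is:

```python
from itertools import product

def is_correct(comparators, p):
    """Check a comparator network on p wires via the zero-one principle."""
    for bits in product((0, 1), repeat=p):
        v = list(bits)
        for i, j in comparators:      # comparator (i, j): min lands on wire i
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
        if any(a > b for a, b in zip(v, v[1:])):
            return False
    return True

# The length-3 example from the text and its reversed (incorrect) version:
assert is_correct([(0, 2), (0, 1), (1, 2)], 3)
assert not is_correct([(1, 2), (0, 1), (0, 2)], 3)
```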
## 7 Application: Function Name Prediction We study function name prediction since it is an established task in the graph learning community (Hu et al., 2020) where the direction of edges can influence the true label. Similar to sorting networks, each program represents a specific ordering of statements, and there can be many equivalent programs with different orderings. Thus, it is surprising that graphs for source code used for machine learning retain the sequential connections between instructions. In other words, these graphs "only" enrich sequential source code (here, they add a hierarchy). For example, the Open Graph Benchmark Code2 dataset represents the 450,000 functions with its Abstract Syntax Tree (AST) and _sequential connections_. Since the input space of sequences can be much larger than the input space of directed graphs (see § 6), for some tasks such a graph construction can be an unfortunate choice. **Robustness.** We trained the state-of-the-art model (Chen et al., 2022) on the Open Graph Benchmark Code2 dataset. Thereafter, we used OGB's code to generate the graph representation. In Fig. 11, we show that the state-of-the-art model is susceptible to semantics-preserving permutations of the source code\({}^{1}\). If constructing a data-flow DAG instead, this function has 16 topological sorts. Further considering commutativity for `==`, `&`, `+`, and `*`, we find 4,096 possibilities to write this seemingly simple function. Footnote 1: Shortly before submission, Luo (2022) proposed in their preprint a new model with a 0.4% higher F1 test score. **Our graph construction** is greatly inspired by Bieber et al. (2022), although they also connect most instructions sequentially. While we do not use the sequentialism, we leverage their static code analysis for a graph representation that also handles the sharp bits like if-else, loops, and exceptions. The most significant differences are: (a) we construct a Directed Acyclic Graph (DAG) for each "block" (e.g. the body of an if statement) that reflects the dependencies between instructions, and we then connect the statements between blocks considering the control flow; (b) we address the (non-)commutative properties of basic Python operations via edge features; (c) we do not reference the tokenized source code; (d) we omit the (in our case) unnecessary "last read" edges; (e) we construct the graph similarly to OGB Code2 for comparability. For example, we aggregate child nodes containing only attributes into their parent's node attributes. For more details, including a side-by-side comparison of OGB's graph construction to ours, see § L.
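As a toy illustration of the block-local data-flow idea in (a), the sketch below (ours; Python 3.9+ for `ast.unparse`) links each statement in a straight-line block to the last statement that wrote any variable it reads. It is a deliberate simplification: the actual construction additionally handles control flow, commutativity features, and the OGB-style attribute aggregation.

```python
import ast
import networkx as nx

def block_dataflow_dag(source):
    """Data-flow DAG for one straight-line block of Python code."""
    g = nx.DiGraph()
    last_write = {}
    for idx, stmt in enumerate(ast.parse(source).body):
        g.add_node(idx, code=ast.unparse(stmt))
        for node in ast.walk(stmt):   # reads first: edge from the last writer
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load):
                if node.id in last_write:
                    g.add_edge(last_write[node.id], idx)
        for node in ast.walk(stmt):   # then record this statement's writes
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Store):
                last_write[node.id] = idx
    return g

dag = block_dataflow_dag("a = 1\nb = 2\nc = a + b\na = c")
print(sorted(dag.edges()))  # [(0, 2), (1, 2), (2, 3)]
```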
**Assumptions.** While the right equi-/invariances are task-dependent, we argue that for high-level reasoning tasks, including function name prediction or correctness prediction, such reorderings should affect neither the true label nor the prediction. Nevertheless, e.g. for predicting the runtime of a program, reorderings can have an impact. Moreover, we assume that non-class-member methods are side-effect-free. For example, this includes reordering print statements. Even though this will result in a different output stream, we argue that these differences are typically not vital. Moreover, since we construct the graph with lexicographical static code analysis, we construct the graph on a best-effort basis and do not capture all dynamic runtime effects. **Empirical Evaluation.** In Table 1, we report the results on OGB Code2. Here we additionally compare to a transformer w/ GNN for query and key, called Structure Aware Transformer (SAT) (Chen et al., 2022). SAT was the previous state of the art, and we closely follow its data preprocessing. If we omit the GNN, we recover the vanilla transformer encoder of Fig. 3(a) (plus a degree-sensitive residual). We improve the current state-of-the-art model with a number of small tricks (i.e., no new positional encoding yet). Our SAT++ (w/ GNN) improves the F1-score by 1.66% (relatively 8.6%). Figure 11: The state-of-the-art model on OGB Code2 is susceptible to meaningless permutations (highlighted in yellow) due to OGB Code2's graph construction. Code minimally modified for better layout. Besides smaller changes like replacing ReLU with GeLU activations, we most notably (1) add dropout on the sparsely populated node attributes and (2) offset the softmax scores to adjust for the class imbalance of the special tokens for _unknown words_ as well as _end of sequence_ (a minimal sketch follows at the end of this section). We also replace the GCN with a three-layer GNN following Battaglia et al. (2018) (excluding a global state). The edge and node embeddings are updated sequentially, and forward as well as backward messages are aggregated independently. Then, a Multi-Layer Perceptron (MLP) with two layers processes the concatenated embeddings (at half the transformer's dimensionality). **Our graph construction** ("data-flow" in Table 1) consistently increases the predictive performance. For example, with the AST depth positional encodings and the SAT++ architecture (w/ GNN), the performance improves by almost 0.58% (relatively 2.8%). Moreover, we want to emphasize that, due to the graph construction, we additionally gain robustness w.r.t. certain reorderings of statements in the source code. We do not report results w/o GNN and solely w/ AST depth positional encodings because this approach does not make use of the enhanced graph structure. **Hybrid.** The Magnetic Laplacian also helps in the hybrid transformer GNN architecture. _Our SAT++ with Magnetic Laplacian positional encodings marks the new state of the art on the Code2 dataset, outperforming SAT by 2.85% (relatively 14.7%)_. Surprisingly, the Random Walk positional encodings even slightly degrade performance. For the Code2 graphs, the GNN for query and key appears to be of great importance. We hypothesize that this is due to the sparsely populated node features. Only a few nodes are attributed and, additionally, the permitted vocabulary is restrictive. The local message passing might spread the information to neighboring nodes to adjust for this sparseness. Moreover, w/o GNN we do not make use of edge features. **Dataset challenges.** The node attributes (e.g. variable names) and function names are only lightly preprocessed. For example, for perfect performance, one needs to distinguish singular and plural method names. Although singular/plural semantically make a difference, the naming consistency is lacking for the 450k functions taken from GitHub. For comparability, we do not adjust the dataset accordingly.
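For illustration, the softmax-offset trick in (2) could look as follows; the offset value and token ids are hypothetical placeholders, not values from the paper.

```python
import torch

def debias_special_tokens(logits, special_ids=(0, 1), offset=2.0):
    """Subtract a constant from the logits of over-represented special
    tokens (e.g. UNK, EOS) before the softmax, reducing their prior."""
    logits = logits.clone()
    logits[..., list(special_ids)] -= offset
    return torch.softmax(logits, dim=-1)

probs = debias_special_tokens(torch.randn(4, 10))  # (batch, vocab)
```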
## 8 Related Work Prior work on positional encodings includes traditional graph metrics, like shortest path distances (Guo et al., 2021). Similar measures are used in the relative positional encodings of Zugner et al. (2021). Related to the distance from a node to the AST root node in the OGB Code2 dataset (see § 7), Luo (2022) proposes sinusoidal positional encodings for DAGs leveraging their partial order. An alternative form of spectral encodings, based on Singular Value Decomposition (SVD), was used for positional encodings (Hussain et al., 2022). The authors argue that these encodings also subsume directed graphs; however, they do not verify this choice, and the SVD of the adjacency matrix has undesirable properties (see § D.4). Moreover, we include a discussion of Laplacians for directed graphs in § C. For an in-depth overview and a how-to for graph transformers, we refer to Min et al. (2022) and Rampasek et al. (2022). They also provide an overview of graph transformers that rethink attention architectures for structure-awareness like (Dwivedi and Bresson, 2021; Mialon et al., 2021; Chen et al., 2022; Kim et al., 2022; Hussain et al., 2022; Diao and Loyd, 2022). **Source Code Representation.** There are many attempts at enriching source code in a graph-structured manner for machine learning (Allamanis et al., 2018; Cummins et al., 2020; Guo et al., 2021; Bieber et al., 2022). However, they all retain the sequentialism of the underlying source code. As we see in Fig. 11, this can lead to a fragile representation w.r.t. semantically meaningless reorderings. Such reorderings are a novel perspective on the robustness of models for source code (e.g. see (Jha and Reddy, 2022; Yefet et al., 2020)), and similar properties can be important for models operating on logical expressions (Geisler et al., 2022). However, the relationship between a directed graph and its sequentializations is well known in, e.g., task scheduling. ## 9 Conclusion We propose positional encodings for directed graphs based on the Magnetic Laplacian and random walks. Both positional encodings can help transformers to gain considerable structure awareness and show complementary strengths in our experiments. We argue that direction-aware positional encodings are an important step towards true multi-purpose transformers universally handling undirected and directed graphs. We show that directedness can be central for the semantics in the target domain and that directed graphs can drastically lower the effective input dimensionality (i.e. many instances map to one graph). \begin{table} \begin{tabular}{c l c c c} \hline \hline **Graph** & **Position. Enc.** & **GNN** & **Test F1-Score** & **Val. F1-Score** \\ \hline \multirow{4}{*}{OGB} & & ✗ & 16.70\(\pm\)0.05 & 15.46\(\pm\)0.06 \\ & & ✓ & 19.37\(\pm\)0.09 & 17.73\(\pm\)0.07 \\ \cline{2-5} & & ✗ & 19.09\(\pm\)0.10 & 17.68\(\pm\)0.06 \\ & & ✓ & 21.03\(\pm\)0.07 & 19.38\(\pm\)0.07 \\ \hline \multirow{5}{*}{Data-flow} & AST depth & ✓ & 21.61\(\pm\)0.12 & 19.79\(\pm\)0.11 \\ \cline{2-5} & Random walk & ✗ & 19.34\(\pm\)0.08 & **17.96\(\pm\)0.05** \\ \cline{2-5} & & ✓ & 21.32\(\pm\)0.12 & 19.58\(\pm\)0.08 \\ \cline{2-5} & Magnetic Lap. & ✗ & **19.43\(\pm\)0.03** & 17.83\(\pm\)0.05 \\ \cline{2-5} & & ✓ & **22.22\(\pm\)0.10** & **20.44\(\pm\)0.06** \\ \hline \hline \end{tabular} \end{table} Table 1: Results on the Open Graph Benchmark Code2 dataset. The first two rows correspond to prior work. All other approaches are our contribution. We report the average and error of the mean over 10 reruns. Best is bold. ## Acknowledgements We want to thank Dimitrios Vytiniotis, Shariq Iqbal, Andrea Michi, Marco Selvi, and Jan Schuchardt for the helpful discussions and feedback during various stages of this work. This research was supported by the Helmholtz Association under the joint research school "Munich School for Data Science - MUDS".
2310.20188
Spectral clumping for functions decreasing rapidly on a half-line
We demonstrate a phenomenon of condensation of the Fourier transform $\widehat{f}$ of a function $f$ defined on the real line $\mathbb{R}$ which decreases rapidly on one half of the line. For instance, we prove that if $f$ is square-integrable on $\mathbb{R}$, then a one-sided estimate of the form \[\rho_f(x) := \int_x^{\infty} |f(t)| \,dt = \mathcal{O}\big(e^{-c\sqrt{x}} \big), \quad x > 0\] for some $c > 0$, forces the non-zero frequencies $\sigma(f) := \{ \zeta \in \mathbb{R} : |\widehat{f}(\zeta)| > 0 \}$ to clump: this set differs from an open set $U$ only by a set of Lebesgue measure zero, and $\log |\widehat{f}|$ is locally integrable on $U$. In particular, if $f$ is non-zero, then there exists an interval on which $\log |\widehat{f}|$ is integrable. The roles of $f$ and $\widehat{f}$ above may be interchanged, and the result extends also to a large class of tempered distributions. We show that the above decay condition is close to optimal, in the following sense: a non-zero entire function $f$ exists which is square-integrable on $\mathbb{R}$, for which $\sigma(f)$ is a subset of a compact set $E$ containing no intervals, and for which the estimate $\rho_f(x) = \mathcal{O}\big( e^{-x^a}\big)$, $x > 0$, holds for every $a \in (0, 1/2)$.
Bartosz Malman
2023-10-31T05:13:57Z
http://arxiv.org/abs/2310.20188v2
# Spectral Clumping for functions and distributions decreasing rapidly on a half-line ###### Abstract. We demonstrate a phenomenon of condensation of the Fourier transform \(\widehat{f}\) of a function \(f\) defined on the real line \(\mathbb{R}\) which decreases rapidly on one half of the line. For instance, we prove that if \(f\) is square-integrable on \(\mathbb{R}\), then a one-sided estimate of the form \[\rho_{f}(x):=\int_{x}^{\infty}|f(t)|\,dt=\mathcal{O}\big{(}e^{-c\sqrt{x}}\big{)},\quad x>0\] for some \(c>0\), forces the non-zero frequencies \(\sigma(f):=\{\zeta\in\mathbb{R}:|\widehat{f}(\zeta)|>0\}\) to clump: this set differs from an open set \(U\) only by a set of Lebesgue measure zero, and \(\log|\widehat{f}|\) is locally integrable on \(U\). In particular, if \(f\) is non-zero, then there exists an interval on which \(\log|\widehat{f}|\) is integrable. The roles of \(f\) and \(\widehat{f}\) above may be interchanged, and the result extends also to a large class of tempered distributions. We show that the above decay condition is close to optimal, in the following sense: a non-zero entire function \(f\) exists which is square-integrable on \(\mathbb{R}\), for which \(\sigma(f)\) is a subset of a compact set \(E\) containing no intervals, and for which the estimate \(\rho_{f}(x)=\mathcal{O}\big{(}e^{-x^{a}}\big{)}\), \(x>0\), holds for every \(a\in(0,1/2)\). ## 1. Introduction ### Fourier transform, its support and size This note studies a certain manifestation of the _uncertainty principle in Fourier analysis_, where a smallness condition on a function \(f\) forces its Fourier transform \(\widehat{f}\) to be, in some sense, large. Vice versa, smallness of \(\widehat{f}\) forces \(f\) to be large. In our context, the smallness is defined in terms of a one-sided decay condition, and the largeness in terms of the existence of a _clump_. This will be our moniker for an interval on which the function has an integrable logarithm. We emphasize that our results concern functions with a spectrum which might vanish on an interval (commonly referred to as functions with a _spectral gap_), but for which the spectrum should be large on some other interval. We will use the following definition of the transform: \[\widehat{f}(\zeta):=\int_{\mathbb{R}}f(x)e^{-ix\zeta}\,d\lambda(x),\quad\zeta \in\mathbb{R}. \tag{1.1}\] Here \(d\lambda(x)=dx/\sqrt{2\pi}\) is a normalization of the Lebesgue measure \(dx\) on \(\mathbb{R}\). Then, the inversion formula is given by \[f(x):=\int_{\mathbb{R}}\widehat{f}(\zeta)e^{i\zeta x}\,d\lambda(\zeta),\quad x \in\mathbb{R}. \tag{1.2}\] For \(p>0\), let \(\mathcal{L}^{p}(\mathbb{R},dx)\) be the usual Lebesgue space of functions \(f\) for which \(|f|^{p}\) is integrable with respect to \(dx\). The formula (1.1) can be interpreted literally only for \(f\in\mathcal{L}^{1}(\mathbb{R},dx)\). It is interpreted in terms of Plancherel's theorem in the case \(f\in\mathcal{L}^{2}(\mathbb{R},dx)\), and in order to state our most general results we will later need to interpret the transform in the sense of distribution theory. The _spectrum_ \(\sigma(f)\) of a function \(f\) is the subset of \(\mathbb{R}\) on which \(\widehat{f}\) lives. Since \(\widehat{f}\in\mathcal{L}^{2}(\mathbb{R},dx)\) is defined only up to a set of Lebesgue measure zero, so is the spectrum \(\sigma(f)\) in this case.
If we accept making errors of measure zero (which we will), we may define the spectrum as \[\sigma(f):=\{\zeta\in\mathbb{R}:|\widehat{f}(\zeta)|>0\},\quad f\in\mathcal{L} ^{1}(\mathbb{R},dx)\cup\mathcal{L}^{2}(\mathbb{R},dx).\] Note specifically that our definition of \(\sigma(f)\) might not coincide with the usual notion of closed _support_ of the distribution \(\widehat{f}\). The uncertainty principle in Fourier analysis presents itself in plenty of ways, and the excellent monograph [6] of Havin and Joricke describes many of its most interesting interpretations. One of them is the following statement, well-known to function theorists. If \(f\in\mathcal{L}^{2}(\mathbb{R},dx)\) is non-zero and \(\mathbb{R}_{-}\) is the negative half-axis, then we have the implication \[f(x)\equiv 0\text{ on }\mathbb{R}_{-}\quad\Rightarrow\quad\int_{\mathbb{R}}\frac{ \log|\widehat{f}(\zeta)|}{1+\zeta^{2}}\,d\zeta>-\infty. \tag{1.3}\] Here the extreme decay (indeed, vanishing) of \(f\) on a half-axis implies global integrability of \(\log|\widehat{f}|\) against the Poisson measure \(d\zeta/(1+\zeta^{2})\). A fortiori, \(\log|\widehat{f}|\) is integrable on every interval \(I\) of \(\mathbb{R}\). Naturally, this is not typical. By Plancherel's theorem, every function in \(\mathcal{L}^{2}(\mathbb{R},dx)\) is the Fourier transform of some other function in the same space. So on the other extreme, plenty of functions \(f\in\mathcal{L}^{2}(\mathbb{R},dx)\) have a Fourier transform which lives on sparse sets containing no intervals. This forces the divergence of the logarithmic integral of \(\widehat{f}\) over any interval. In other words, plenty of functions admit no spectral clumps. The results of this note give conditions under which such clumps form. ### Condensation and sparseness of spectra and supports We shall introduce our results at first in the context of the Hilbert space \(\mathcal{L}^{2}(\mathbb{R},dx)\). Here we can prove a claim which is symmetric in \(f\) and \(\widehat{f}\), and we can also argue for near-optimality of the result. This is the content of Theorem \(A\) and Theorem \(B\). The more general distributional clumping result is presented in Theorem \(C\). **Theorem A**.: _If \(f\in\mathcal{L}^{2}(\mathbb{R},dx)\) satisfies the estimate_ \[\rho_{f}(x):=\int_{x}^{\infty}|f(t)|\,dt=\mathcal{O}\big{(}e^{-c\sqrt{x}} \big{)},\quad x>0 \tag{1.4}\] _for some constant \(c>0\), then there exists an open set \(U\) which coincides with \(\sigma(f)\) up to a set of Lebesgue measure zero, and for every \(x\in U\) there exists an interval \(I\) containing \(x\) such that_ \[\int_{I}\log|\widehat{f}(t)|\,dt>-\infty.\] In other words, the one-sided decay condition (1.4) implies that \(\widehat{f}\) lives on the union of the spectral clumps of \(f\). Since the Fourier transform is a unitary operation on \(\mathcal{L}^{2}(\mathbb{R},dx)\), the roles of \(f\) and \(\widehat{f}\) may obviously be interchanged in the statement of Theorem \(A\). Thus a one-sided spectral decay condition of \(f\) implies local integrability properties of \(\log|f|\) on the set where \(f\) lives. In this form, the result encourages us to extend it to tempered distributions. We shall do so in a moment. The integrand in (1.4) may seem a bit unnatural in the context of square-integrable functions \(f\). It is more natural in the context of functions of tempered growth appearing in Theorem \(C\).
Anyhow, we note that one can prove that an estimate of the form \(\int_{x}^{\infty}|f(t)|^{2}\,dt=\mathcal{O}\big{(}e^{-c\sqrt{x}}\big{)}\) in fact implies (1.4) for some slightly smaller \(c\). We can prove also that the condition (1.4) on the decay of \(\rho_{f}\) appearing in Theorem \(A\) is close to optimal. We do so by exhibiting a non-zero function with rapid one-sided decay but sparse spectrum. **Theorem B**.: _For every \(b>0\), there exists a compact set \(E\subset\mathbb{R}\) contained in \([0,b]\) which contains no intervals, and a non-zero entire function \(f\in\mathcal{L}^{2}(\mathbb{R},dx)\) which satisfies_ \[\rho_{f}(x)=\mathcal{O}\big{(}e^{-x^{a}}\big{)},\quad x>0\] _for every \(a\in(0,1/2)\), and such that \(\sigma(f)\) is contained within \(E\)._ After an initial reduction, the proof of this result follows ideas of Khrushchev from [8]. Note that the function \(f\) appearing in Theorem \(B\) is entire by the virtue of having a spectrum \(\sigma(f)\) of compact support. More importantly, the condition on \(E\) implies that \(I\setminus E\) has positive Lebesgue measure for every interval \(I\), so we obtain \[\int_{I}\log|\widehat{f}(t)|\,dt=-\infty\] for every interval \(I\subset\mathbb{R}\). This is in contrast to the conclusion of Theorem \(A\). It follows that the exponent \(a=1/2\) in estimates of the form \(\rho_{f}(x)=\mathcal{O}\big{(}e^{-x^{a}}\big{)}\) is critical for the spectral clumping phenomenon. As mentioned above, clumping statements make sense for objects in a class much wider than \(\mathcal{L}^{2}(\mathbb{R},dx)\). Here is our distributional result. **Theorem C**.: _Let \(f\) be a tempered distribution on \(\mathbb{R}\) which is a measurable function satisfying_ \[\int_{\mathbb{R}}\frac{|f(x)|}{(1+|x|)^{n}}\,dx<\infty\] _for some \(n>0\). If the distributional Fourier transform \(\widehat{f}\) is an integrable function on some interval \([A,\infty)\), and the estimate \(\rho_{\widehat{f}}(\zeta)=\mathcal{O}\big{(}e^{-c\sqrt{\zeta}}\big{)}\) holds for all sufficiently large positive \(\zeta\), then there exists an open set \(U\) such that \(f\) vanishes almost everywhere outside of \(U\), and for each \(x\in U\) there exists an interval \(I\) containing \(x\) satisfying_ \[\int_{I}\log|f(t)|\,dt>-\infty. \tag{1.5}\] For instance, the result shows that a function \(f\in\mathcal{L}^{1}(\mathbb{R},dx)\) which lives on a sparse set containing no intervals cannot satisfy even a one-sided spectral decay condition of the form \(\rho_{\widehat{f}}(\zeta)\lesssim e^{-c\sqrt{\zeta}}\). Note also that in this extended form, our result includes the trivial but important examples such as \(f=1\) and \(\widehat{f}=\delta_{0}\) (Dirac delta), the trigonometric functions and the polynomials. ### A converse result The Beurling-Malliavin theory implies a partial converse result. If \(f\) is a locally integrable function on \(\mathbb{R}\) which has a clump \(I\) as in (1.5), and a constant \(c>0\) is given, then a bounded multiplier \(m\) exists for which \(mf\) has a Fourier transform satisfying \(\rho_{\widehat{mf}}(\zeta)=\mathcal{O}\big{(}e^{-c\sqrt{\zeta}}\big{)}\) for \(\zeta>0\). To see this, recall that a smooth function \(g\) supported in \(I\) exists which satisfies the bilateral spectral decay \(|\widehat{g}(\zeta)|\leq e^{-c\sqrt{|\zeta|}}\), \(\zeta\in\mathbb{R}\) (this simpler version of the famous Beurling-Malliavin theorem is proved in [6, p. 276-277], and in fact we may ensure an even faster bilateral spectral decay of \(g\)).
There exists also a bounded function \(h\in\mathcal{L}^{1}(\mathbb{R},dx)\) which satisfies \(\sigma(h)\subset(0,\infty)\) and \(|h(x)|=\min(|f(x)|,1)\) on \(I\) (we use the assumption that \(I\) is a clump for \(f\) and construct \(h\) as in (2.7) below). Then an argument similar to the one used in the proof of Proposition 4.2 below shows that the function \(\overline{h}g\) will satisfy the desired one-sided spectral decay, and clearly \(\overline{h}g=mf\) for some bounded function \(m\) supported in \(I\). ### Clumping in other parts of analysis The motivation for the research presented in this note was a desire to produce a self-contained exposition of the clumping phenomenon which was observed in two other contexts, both somewhat more esoteric than Fourier analysis on the real line. The first of these is a polynomial approximation problem in the unit disk \(\mathbb{D}:=\{z\in\mathbb{C}:|z|<1\}\). Here we are presented with a measure \[d\mu=G(1-|z|)dA(z)+w(z)dm(z),\] where \(dA\) and \(dm\) are the area and arc-length measures on \(\mathbb{D}\) and \(\mathbb{T}:=\partial\mathbb{D}=\{z\in\mathbb{C}:|z|=1\}\). The functions \(G\) and \(w\) are non-negative weights, and one would like to understand under which conditions _splitting_ occurs. Namely, when is the weighted space \(\mathcal{L}^{2}(\mathbb{T},w\,dm)\) contained in the closure of analytic polynomials in the \(\mathcal{L}^{2}\)-norm induced by the measure \(\mu\)? In the case that \(G(1-|z|)\) decays exponentially as \(|z|\to 1^{-}\), the necessary and sufficient condition is that \(w\) has no clumps, or in other words that the integral of \(\log w\) diverges over any arc on \(\mathbb{T}\). The lack-of-clumping condition was conjectured by Kriete and MacCluer in [9] and confirmed in [11]. Some of the techniques used in the proofs of the results in the present note are adaptations of the ideas from [11]. The other context is a circle of ideas surrounding the _Aleksandrov-Clark measures_ appearing in spectral theory, and spaces \(\mathcal{H}(b)\) defined by de Branges and Rovnyak, well-known to operator theorists. To any positive finite Borel measure \(\nu\) on \(\mathbb{T}\) we may associate a so-called _Clark operator_\(\mathcal{C}_{\nu}\) which takes a function \(g\in\mathcal{L}^{2}(\mathbb{T},d\nu)\) to the analytic function in \(\mathbb{D}\) given by the formula \[\mathcal{C}_{\nu}g(z):=\frac{\int_{\mathbb{T}}\frac{g(x)}{1-\overline{x}z}d \nu(x)}{\int_{\mathbb{T}}\frac{1}{1-\overline{x}z}d\nu(x)},\quad z\in\mathbb{D}.\] The operator \(\mathcal{C}_{\nu}\) maps \(\mathcal{L}^{2}(\mathbb{T},d\nu)\) onto a space of analytic functions denoted by \(\mathcal{H}(b)\), the symbol function \(b:\mathbb{D}\to\mathbb{D}\) itself being related to \(\nu\) by the formula \[\frac{1}{1-b(z)}=\int_{\mathbb{T}}\frac{1}{1-\overline{x}z}d\nu(x),\quad z\in \mathbb{D}\] in the case that \(\nu\) is a probability measure, with a similar formula in the general case. For many choices of \(\nu\) (or equivalently, choices of \(b\)), the space \(\mathcal{H}(b)\) is somewhat mysterious, with the distinctive feature of containing very few functions extending analytically to a disk larger than \(\mathbb{D}\). This extension property is characterized by the exponential decay of the Taylor series of the function, and the clumping of the absolutely continuous part of \(\nu\) is decisive for existence and density of functions in \(\mathcal{H}(b)\) which have a Taylor series decaying just a bit slower than exponentially. Results of this nature are contained in [12]. 
In fact, a Fourier series version of Theorem \(A\) is a consequence of the results in [12]. ### Other forms of the uncertainty principle The implication (1.3) has a well-known Fourier series version. If a function \(f\) defined on the circle \(\mathbb{T}:=\{z\in\mathbb{C}:|z|=1\}\) is integrable with respect to arc-length \(ds\) on \(\mathbb{T}\), and the negative portion of the Fourier series of \(f\) vanishes, then \(\int_{\mathbb{T}}\log|f|ds>-\infty\), unless \(f\) is the zero function. Volberg derived the same conclusion from the weaker hypothesis of nearly-exponentially decaying negative portion of the Fourier series (see [15] and the exposition in [16]). Work of Borichev and Volberg [3] contains related results. The decay condition (1.4) on \(f\in\mathcal{L}^{2}(\mathbb{R},dx)\) prohibits \(\widehat{f}\) from living on a set \(S\) containing no intervals. Somewhat related are uniqueness statements in which one seeks to give examples of pairs of sets \((E,S)\) for which the following implication is valid: if \(f\) in a certain class lives on \(E\) and \(\widehat{f}\) lives on \(S\), then \(f\equiv 0\). One says that \((E,S)\) is then a _uniqueness pair_ for the corresponding class. A famous result of Benedicks presented in [2] (see also [1]) says that \((E,S)\) is a uniqueness pair for integrable \(f\) if both sets have finite Lebesgue measure, and the result holds not only for the real line \(\mathbb{R}\) but also for the \(d\)-dimensional Euclidean space \(\mathbb{R}^{d}\). Hedenmalm and Montes-Rodriguez worked with the hyperbola \(H=\{(x,y)\in\mathbb{R}^{2}:xy=1\}\) and the class of finite Borel measures \(\mu\) supported on \(H\) which are absolutely continuous with respect to arclength on \(H\). They proved in [7] that if \(\widehat{\mu}\) vanishes on certain types of discrete sets \(\Lambda\subset\mathbb{R}^{2}\), then \(\mu\equiv 0\), thus exhibiting interesting uniqueness pairs of the form \((H,\mathbb{R}^{2}\setminus\Lambda)\). Recent work of Radchenko and Viazovska on interpolation formulas for Schwartz functions in [13] gives examples of pairs of discrete subsets \(E\) and \(S\) of \(\mathbb{R}\) for which \((\mathbb{R}\setminus E,\mathbb{R}\setminus S)\) is a uniqueness pair for functions in the Schwartz class. Kulikov, Nazarov and Sodin exhibit similar interpolation formulas, and consequently new uniqueness pairs, in their recent work in [10]. ### Notation For a set \(E\subset\mathbb{R}\) and a measure \(\mu\) defined on \(\mathbb{R}\), the space \(\mathcal{L}^{p}(E,d\mu)\) denotes the usual Lebesgue space consisting of equivalence classes of functions living only on \(E\) and satisfying the integrability condition \(\int_{E}|f(x)|^{p}d\mu(x)<\infty\). The containment \(\mathcal{L}^{p}(E,d\mu)\subset\mathcal{L}^{p}(\mathbb{R},d\mu)\) is interpreted in the natural way. The symbols such as \(dx\), \(dt\) and \(d\zeta\) denote the usual Lebesgue measure of the real line, while \(d\lambda=dx/\sqrt{2\pi}\) will be the normalized version used in formulas involving Fourier transforms. If \(E\) is a subset of \(\mathbb{R}\), then \(|E|\) denotes its usual Lebesgue measure. The positive half-axis of \(\mathbb{R}\) is denoted by \(\mathbb{R}_{+}:=\{x\in\mathbb{R}:x\geq 0\}\), and we set also \(\mathbb{R}_{-}:=\mathbb{R}\setminus\mathbb{R}_{+}\). The notions of _almost everywhere_ and _of measure zero_ are always to be interpreted in the sense of Lebesgue measure on \(\mathbb{R}\). The indicator function of a measurable set \(E\) is denoted by \(\mathbb{1}_{E}\).
Finally, we put \(\log^{+}(x):=\max(\log(x),0)\). ## 2. Preliminaries Our proofs will use Hilbert space techniques and the complex method. In particular, we will use the complex interpretation of the Hardy classes of functions on the line with positive spectrum. In this section, we recall those basic facts of the theory of the Hardy classes \(\mathcal{H}^{1}(\mathbb{R}),\mathcal{H}^{2}(\mathbb{R})\) and \(\mathcal{H}^{\infty}(\mathbb{R})\) which will be important in the coming sections. We discuss also properties of the shift operators \(f(t)\mapsto e^{its}f(t)\) on weighted spaces on the real line, and their invariant subspaces. ### Hardy classes For \(p\) equal to \(1\) or \(2\), we denote by \(\mathcal{H}^{p}(\mathbb{R})\) the subspace of \(\mathcal{L}^{p}(\mathbb{R},dx)\) consisting of those functions \(f\) for which the Fourier transform \(\widehat{f}\) vanishes on the negative part of the real axis: \[\mathcal{H}^{p}(\mathbb{R}):=\{f\in\mathcal{L}^{p}(\mathbb{R},dx):\widehat{f} |\mathbb{R}_{-}\equiv 0\}.\] It is a well-known fact that functions in the Hardy classes \(\mathcal{H}^{1}(\mathbb{R})\) and \(\mathcal{H}^{2}(\mathbb{R})\) admit a type of analytic extension to the upper half-plane \[\mathbb{H}:=\{x+iy\in\mathbb{C}:y>0\}.\] We recall what exactly is meant by this extension and how it can be constructed. The Poisson kernel of the upper half-plane \[\mathcal{P}(t,x+iy):=\frac{1}{\pi}\frac{y}{(x-t)^{2}+y^{2}},\quad y>0,\] admits a decomposition \[\mathcal{P}(t,z)=\operatorname{Re}\Bigg{(}\frac{1}{\pi i(t-z)}\Bigg{)}=\frac {1}{2\pi i}\Bigg{(}\frac{1}{t-z}-\frac{1}{t-\overline{z}}\Bigg{)},\quad z=x+ iy\in\mathbb{H}. \tag{2.1}\] Since \[\frac{1}{t-\overline{z}}=-i\int_{0}^{\infty}e^{-i\overline{z}s}e^{its}\,ds \tag{2.2}\] we may use Fubini's theorem to compute, in the case \(f\in\mathcal{H}^{1}(\mathbb{R})\), that \[\int_{\mathbb{R}}\frac{f(t)}{t-\overline{z}}\,dt=-i\int_{0}^{\infty}\Bigg{(} \int_{\mathbb{R}}f(t)e^{its}\,dt\Bigg{)}e^{-i\overline{z}s}ds=0, \tag{2.3}\] where the vanishing of the integral follows from \[\int_{\mathbb{R}}f(t)e^{its}\,dt=\sqrt{2\pi}\,\widehat{f}(-s)=0,\quad s>0,\] which holds for any \(f\in\mathcal{H}^{1}(\mathbb{R})\) by the definition of the class. In the case \(f\in\mathcal{H}^{2}(\mathbb{R})\) this argument does not work, but what instead works is an application of Plancherel's theorem and Lemma 4.1 below to the first integral in (2.3), which again shows that this integral vanishes. Consequently, whenever \(f\in\mathcal{H}^{p}(\mathbb{R})\) for \(p=1,2\), the formula \[f(z):=\int_{\mathbb{R}}f(t)\mathcal{P}(t,z)\,dt=\int_{\mathbb{R}}\frac{f(t)}{ t-z}\frac{dt}{2\pi i},\quad z\in\mathbb{H}\] defines, by the second integral expression above, an analytic extension of \(f\) to \(\mathbb{H}\). By the first expression, and classical properties of the Poisson kernel (see [4, Chapter I]), this extension satisfies \[\lim_{y\to 0^{+}}f(x+iy)=f(x),\text{ for almost every }x\in\mathbb{R}. \tag{2.4}\] Moreover, we have \[\sup_{y>0}\int_{\mathbb{R}}|f(x+iy)|^{p}\,dx<\infty \tag{2.5}\] and \[\lim_{y\to 0^{+}}\int_{\mathbb{R}}|f(x+iy)-f(x)|^{p}\,dx=0. \tag{2.6}\] if \(f\in\mathcal{H}^{p}(\mathbb{R})\). The property (2.5) follows readily from the Poisson integral formula for the extension of \(f\) and Fubini's theorem. The property (2.6) is a bit tricky to establish, and is proved in [4, Chapter I, Theorem 3.1]. In fact, the above listed properties characterize the functions in the Hardy classes.
**Proposition 2.1**.: _For \(p=1\) and \(p=2\), a function \(f\in\mathcal{L}^{p}(\mathbb{R},dx)\) is a member of \(\mathcal{H}^{p}(\mathbb{R})\) if and only if there exists an analytic extension of \(f\) to \(\mathbb{H}\) which satisfies the three properties in (2.4), (2.5) and (2.6)._ The proposition is not hard to derive from Proposition 2.3 below. In any case, a careful proof can be found in [6, p. 172]. The following restriction on smallness of the modulus \(|f|\) of a function \(f\in\mathcal{H}^{1}(\mathbb{R})\) will be of crucial importance to us. **Proposition 2.2**.: _If \(f\in\mathcal{H}^{1}(\mathbb{R})\), then_ \[\int_{\mathbb{R}}\frac{\log|f(x)|}{1+x^{2}}dx>-\infty\] _unless \(f\) is the zero function._ A proof of the proposition can be found in [6, p. 35]. We shall also need to use the corresponding Hardy class of functions which are merely bounded on \(\mathbb{R}\), and not necessarily integrable or square-integrable on \(\mathbb{R}\). We use directly the complex interpretation of the class. Namely, we define \(\mathcal{H}^{\infty}(\mathbb{R})\) to consist of those functions \(f\in\mathcal{L}^{\infty}(\mathbb{R},dx)\) which can be realized as limits \[\lim_{y\to 0^{+}}f(x+iy):=f(x)\] for almost every \(x\in\mathbb{R}\), where \(f\) is bounded and analytic in \(\mathbb{H}\). It can be checked that such \(f\) has a distributional spectrum which vanishes on \(\mathbb{R}_{-}\). Another important point is that if \(f\in\mathcal{H}^{\infty}(\mathbb{R})\), then \[\frac{f(x)}{(i+x)^{2}}\in\mathcal{H}^{1}(\mathbb{R}),\] since we may apply Proposition 2.1 to the analytic function \[z\mapsto\frac{f(z)}{(i+z)^{2}},\quad z\in\mathbb{H}.\] A function \(h\in\mathcal{H}^{\infty}(\mathbb{R})\) of a given (bounded, measurable) modulus \(|h|=W\) on \(\mathbb{R}\) may be constructed by setting \[\log h(z):=\frac{1}{\pi i}\int_{\mathbb{R}}\Big{(}\frac{1}{t-z}-\frac{t}{1+t^ {2}}\Big{)}\log W(t)\,dt,\quad z\in\mathbb{H}, \tag{2.7}\] and \(h(z):=e^{\log h(z)}\). The integral above converges if \[\int_{\mathbb{R}}\frac{\log W(t)}{1+t^{2}}\,dt>-\infty\] which is a necessary condition for the construction to be possible. Then \[\log|h(z)|=\int_{\mathbb{R}}\mathcal{P}(t,z)\log W(t)\,dt,\] so that the equality \(\lim_{y\to 0^{+}}|h(x+iy)|=|h(x)|=W(x)\) for almost every \(x\in\mathbb{R}\) is a consequence of the well-known properties of the Poisson kernel.
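The construction (2.7) is easy to test numerically. The following minimal sketch (ours; the particular weight \(W\) is an arbitrary bounded choice with convergent logarithmic integral, made only for illustration) evaluates the integral by quadrature and checks that \(|h(x+iy)|\) is close to \(W(x)\) for small \(y>0\).

```python
import numpy as np

t = np.linspace(-500, 500, 2_000_001)
dt = t[1] - t[0]
W = (2.0 + np.cos(t)) / 4.0   # prescribed boundary modulus, bounded away from 0

def h(z):
    """Bounded analytic function with boundary modulus W, via (2.7)."""
    kernel = 1.0 / (t - z) - t / (1.0 + t**2)
    log_h = np.sum(kernel * np.log(W)) * dt / (np.pi * 1j)
    return np.exp(log_h)

x0 = 1.0
print(abs(h(x0 + 0.05j)))            # close to (2 + cos(1)) / 4 ~ 0.635
print((2.0 + np.cos(x0)) / 4.0)      # prescribed boundary value W(x0)
```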
### A formula and an estimate for the Fourier transform of a Hardy class function If \(f\in\mathcal{H}^{1}(\mathbb{R})\), then the values of \(\widehat{f}(\zeta)\) may be computed using a formula different from (1.1). To wit, denote by \(f(z)\) the extension of \(f\) to \(\mathbb{H}\) which was discussed in Section 2.1. The function \[G_{\zeta}(z):=f(z)e^{-iz\zeta}=f(z)e^{-ix\zeta+y\zeta},\quad z=x+iy\in\mathbb{H}\] is analytic in \(\mathbb{H}\), and for this reason Cauchy's integral theorem implies \[\int_{R(\epsilon,y,a)}G_{\zeta}(z)dz=0, \tag{2.8}\] where \(dz\) denotes the complex line integral, \(\epsilon,y,a\) are all positive numbers, \(\epsilon<y\), and \(R(\epsilon,y,a)\) denotes the rectangular contour having as corners the four points with coordinates \((-a,\epsilon)\), \((a,\epsilon)\), \((a,y)\), \((-a,y)\), oriented counter-clockwise. Fix \(y>0\) and let \(S_{y}\) denote the horizontal strip in \(\mathbb{H}\) consisting of all complex numbers with imaginary part between \(0\) and \(y\). Then it follows from Fubini's theorem and (2.5) that \[\int_{S_{y}}|G_{\zeta}(z)|dA(z)\leq e^{y|\zeta|}\int_{\mathbb{R}}\int_{0}^{y}|f(x+ is)|dsdx<\infty,\] where \(dA(z)\) denotes the area measure on the complex plane. This expresses the integrability on \(\mathbb{R}\) of the continuous function \[x\mapsto\int_{0}^{y}|G_{\zeta}(x+is)|ds.\] Hence there exists a positive sequence \(\{a_{n}\}_{n}\) which satisfies \[\lim_{n\to\infty}a_{n}=+\infty\] and for which \[\lim_{n\to\infty}\int_{0}^{y}|G_{\zeta}(a_{n}+is)|ds+\int_{0}^{y}|G_{\zeta}(-a _{n}+is)|ds=0.\] This means that \[0 =\lim_{n\to\infty}\int_{R(\epsilon,y,a_{n})}G_{\zeta}(z)\,dz\] \[=-\int_{\mathbb{R}}G_{\zeta}(x+iy)dx+\int_{\mathbb{R}}G_{\zeta}(x +i\epsilon)dx.\] Moreover, equation (2.6) quite easily implies \[\lim_{\epsilon\to 0^{+}}\int_{\mathbb{R}}G_{\zeta}(x+i\epsilon)dx=\int_{ \mathbb{R}}f(x)e^{-ix\zeta}dx=\sqrt{2\pi}\widehat{f}(\zeta).\] We have proven the following formula by combining the above two expressions. **Proposition 2.3**.: _For \(f\in\mathcal{H}^{1}(\mathbb{R})\) we may compute the Fourier transform \(\widehat{f}(\zeta)\) using the formula_ \[\widehat{f}(\zeta)=e^{y\zeta}\int_{\mathbb{R}}f(x+iy)e^{-ix\zeta}\,d\lambda(x)\] _for any choice of \(y>0\), where \(f(x+iy)\) denotes the values of the analytic extension of \(f\) to \(\mathbb{H}\)._ This formula has the following simple corollary which will be of critical importance below. **Corollary 2.4**.: _If \(h\in\mathcal{H}^{\infty}(\mathbb{R})\) has an analytic extension to \(\mathbb{H}\) which satisfies, for some constant \(c>0\), an estimate of the form_ \[\sup_{x\in\mathbb{R}}\,|h(x+iy)|\leq e^{c/y},\quad\text{ for all }y>0,\] _then the Fourier transform \(\widehat{h_{*}}\) of the function_ \[h_{*}(x):=\frac{h(x)}{(i+x)^{2}}\in\mathcal{H}^{1}(\mathbb{R})\] _satisfies_ \[|\widehat{h_{*}}(\zeta)|\leq\sqrt{\frac{\pi}{2}}e^{2\sqrt{c}\sqrt{\zeta}},\quad\zeta>0.\] Proof.: It was mentioned in Section 2.1 that \(h_{*}\in\mathcal{H}^{1}(\mathbb{R})\). Therefore, we may use Proposition 2.3 to estimate \[|\widehat{h_{*}}(\zeta)| \leq e^{y\zeta}\int_{\mathbb{R}}\frac{|h(x+iy)|}{|i+x+iy|^{2}}\,d \lambda(x)\] \[\leq e^{y\zeta}\int_{\mathbb{R}}\frac{e^{c/y}}{1+x^{2}}\,d \lambda(x)\] \[=\sqrt{\frac{\pi}{2}}e^{y\zeta+c/y}.\] Since \(y>0\) can be freely chosen, we may now set it to \(y=\sqrt{c/\zeta}\) to obtain the desired estimate.
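As a quick numerical sanity check of Proposition 2.3 (our sketch, not part of the paper), one can verify that the right-hand side of the formula is independent of \(y\) for a concrete function in \(\mathcal{H}^{1}(\mathbb{R})\cap\mathcal{H}^{2}(\mathbb{R})\), here \(f(x)=(x+i)^{-2}\), whose transform \(\widehat{f}(\zeta)=-\sqrt{2\pi}\,\zeta e^{-\zeta}\) for \(\zeta>0\) is computable by residues.

```python
import numpy as np

x = np.linspace(-2000, 2000, 2_000_001)
dx = x[1] - x[0]
zeta = 1.5

def rhs(y):
    """e^{y*zeta} * integral of f(x + iy) e^{-i x zeta} dlambda(x), by quadrature."""
    f = 1.0 / (x + 1j * (y + 1.0)) ** 2          # f(x + iy) for f(z) = (z + i)^{-2}
    integral = np.sum(f * np.exp(-1j * x * zeta)) * dx / np.sqrt(2 * np.pi)
    return np.exp(y * zeta) * integral

print(rhs(0.5), rhs(2.0))                         # agree: independent of y
print(-np.sqrt(2 * np.pi) * zeta * np.exp(-zeta)) # closed form, ~ -0.839
```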
### A semigroup of operators and its invariant subspaces If \(w\in\mathcal{L}^{1}(\mathbb{R},dx)\) and \(s\in\mathbb{R}\), the operator \(U^{s}:\mathcal{L}^{2}(\mathbb{R},w\,dx)\to\mathcal{L}^{2}(\mathbb{R},w\,dx)\) given by \[U^{s}f(x):=e^{isx}f(x)\] is unitary on \(\mathcal{L}^{2}(\mathbb{R},w\,dx)\). We shall be interested in subspaces of \(\mathcal{L}^{2}(\mathbb{R},w\,dx)\) which are invariant for the operators in the semigroup \(\{U^{s}\}_{s>0}\). Given any element \(f\in\mathcal{L}^{2}(\mathbb{R},w\,dx)\), we denote by \([f]_{w}\) the smallest closed linear subspace of \(\mathcal{L}^{2}(\mathbb{R},w\,dx)\) which contains \(f\) and also all the functions \(U^{s}f\), \(s>0\). **Proposition 2.5**.: _Let \(f\in\mathcal{L}^{2}(\mathbb{R},w\,dx)\) be a non-zero element which satisfies_ \[\int_{\mathbb{R}}\frac{\log\big{(}|f(x)|^{2}w(x)\big{)}}{1+x^{2}}dx=-\infty.\] _Then the subspace \([f]_{w}\) coincides with \(\mathcal{L}^{2}(E,w\,dx)\), where \(E=\{x\in\mathbb{R}:|f(x)|>0\}\)._ **Remark 2.6**.: As usual, the set \(E\) above is defined in a bit imprecise way. Since \(f\) is, strictly speaking, merely a representative of an equivalence class of measurable functions in \(\mathcal{L}^{2}(\mathbb{R},w\,dx)\), the set \(E\) is not well-defined pointwise. However, it is well-defined up to a set of Lebesgue measure zero, and so the initial choice of the representative is unimportant. Proof of Proposition 2.5.: Since the function \(f\) vanishes almost everywhere outside of the set \(E\), then so does \(U^{s}f\) for any \(s>0\). Consequently, \([f]_{w}\subset\mathcal{L}^{2}(E,w\,dx)\). Conversely, let us consider an element \(g\in\mathcal{L}^{2}(E,w\,dx)\) with the property that \[\int_{\mathbb{R}}U^{s}f(x)\overline{g(x)}w(x)dx=\int_{\mathbb{R}}e^{isx}f(x) \overline{g(x)}w(x)dx=0,\quad s>0.\] Setting \(h:=f\overline{g}w\in\mathcal{L}^{1}(\mathbb{R},dx)\), we note that the vanishing of the integrals above is equivalent to \(h\) being a member of the Hardy class \(\mathcal{H}^{1}(\mathbb{R})\). We note also that \[\int_{\mathbb{R}}\frac{\log|h(x)|}{1+x^{2}}dx=\frac{1}{2}\int_{\mathbb{R}} \frac{\log\big{(}|f(x)|^{2}w(x)\big{)}}{1+x^{2}}dx+\frac{1}{2}\int_{\mathbb{R }}\frac{\log\big{(}|g(x)|^{2}w(x)\big{)}}{1+x^{2}}dx.\] The above equality is to be interpreted in a generalized sense: the first integral on the right-hand side is divergent by our assumption, and so may the second, but their positive parts are certainly finite by the assumption that \(f,g\in\mathcal{L}^{2}(\mathbb{R},w\,dx)\). This implies that \[\int_{\mathbb{R}}\frac{\log|h(x)|}{1+x^{2}}\,dx=-\infty.\] Proposition 2.2 now shows that \(h=f\overline{g}w\) must be the zero function. Since \(|f(x)|w(x)>0\) on \(E\) and \(g\) vanishes outside of \(E\), this means that \(g\equiv 0\). So \([f]_{w}\) is a closed and dense subspace of \(\mathcal{L}^{2}(E,w\,dx)\), which means that the two spaces are equal. **Corollary 2.7**.: _If \(f\in\mathcal{L}^{2}(\mathbb{R},w\,dx)\) is also a member of \(\mathcal{L}^{p}(\mathbb{R},w\,dx)\) for some \(p>2\), and if_ \[\int_{\mathbb{R}}\frac{\log w(x)}{1+x^{2}}\,dx=-\infty,\] _then \([f]_{w}\) coincides with \(\mathcal{L}^{2}(E,w\,dx)\), where \(E=\{x\in\mathbb{R}:|f(x)|>0\}\)._ Proof.: To prove the corollary we need to verify the condition in Proposition 2.5. Note that, pointwise, we have \[\log\left(|f|^{2}w\right)=(2/p)\log\left(|f|^{p}w\right)+(1-2/p)\log w.\] The coefficients \(2/p\) and \(1-2/p\) are positive. The inequality \(\log(x)\leq x\) for \(x>0\) shows that \[\int_{\mathbb{R}}\frac{\log\left(|f(x)|^{p}w(x)\right)}{1+x^{2}}\,dx\leq\int_{ \mathbb{R}}|f(x)|^{p}w(x)dx<+\infty.\] Note that the integral on the left might very well be equal to \(-\infty\), but that is of no concern to us: we conclude from the assumption, and the pointwise inequality above, that \[\int_{\mathbb{R}}\frac{\log\left(|f(x)|^{2}w(x)\right)}{1+x^{2}}\,dx=-\infty\] and apply Proposition 2.5. ## 3. A product space and its Hardy subspace Let \(\rho:\mathbb{R}_{+}\to\mathbb{R}_{+}\) be a bounded, continuous, non-negative and decreasing function, and \(w\in\mathcal{L}^{1}(\mathbb{R},dx)\cap\mathcal{L}^{\infty}(\mathbb{R},dx)\) be a non-negative function. We consider the product space \(\mathcal{L}^{2}(\mathbb{R},w\,dx)\oplus\mathcal{L}^{2}(\mathbb{R}_{+},\rho\,dx)\).
Inside of this space we embed the linear manifold \(\mathcal{H}^{1}(\mathbb{R})\cap\mathcal{H}^{2}(\mathbb{R})\) in the following way: \[Jf:=(f,\widehat{f})\in\mathcal{L}^{2}(\mathbb{R},w\,dx)\oplus\mathcal{L}^{2}( \mathbb{R}_{+},\rho\,dx),\quad f\in\mathcal{H}^{1}(\mathbb{R})\cap\mathcal{H }^{2}(\mathbb{R}).\] The tuple \(Jf\) is well-defined as an element of the product space, since both \(f\) and \(\widehat{f}\) are members of \(\mathcal{L}^{2}(\mathbb{R},dx)\) and both \(\rho\) and \(w\) are bounded. We define the _Hardy subspace_\(\mathcal{H}(w,\rho)\) as the norm-closure of the linear manifold \[\{Jf:f\in\mathcal{H}^{1}(\mathbb{R})\cap\mathcal{H}^{2}(\mathbb{R})\}\] inside of the product space \(\mathcal{L}^{2}(\mathbb{R},w\,dx)\oplus\mathcal{L}^{2}(\mathbb{R}_{+},\rho\,dx)\). Thus each tuple \((h,k)\in\mathcal{H}(w,\rho)\) has the property that there exists some sequence \(\{f_{n}\}_{n}\) of functions in \(\mathcal{H}^{1}(\mathbb{R})\cap\mathcal{H}^{2}(\mathbb{R})\) such that \[h=\lim_{n\to\infty}f_{n}\] in the space \(\mathcal{L}^{2}(\mathbb{R},w\,dx)\), and simultaneously \[k=\lim_{n\to\infty}\widehat{f_{n}}\] in the space \(\mathcal{L}^{2}(\mathbb{R}_{+},\rho\,dx)\). We could have used a set of tuples \(Jf\) with \(f\in\mathcal{H}^{2}(\mathbb{R})\) in the definition of the Hardy subspace, and arrived at the same space. Indeed, we have the following proposition. **Proposition 3.1**.: _With \(w\) and \(\rho\) as above, the Hardy subspace \(\mathcal{H}(w,\rho)\) contains all tuples of the form \((f,\widehat{f})\), \(f\in\mathcal{H}^{2}(\mathbb{R})\). Moreover, tuples \(Jf\) where \(f\in\mathcal{H}^{1}(\mathbb{R})\cap\mathcal{H}^{2}(\mathbb{R})\) and \(f\) extends analytically to a half-space \(\{z=x+iy\in\mathbb{C}:y>-\delta\}\), \(\delta=\delta(f)>0\), are norm-dense in \(\mathcal{H}(w,\rho)\)._ Proof.: Fix \(f\in H^{2}(\mathbb{R})\), and consider the functions \(f_{\epsilon}(x)\) defined by the formula \[f_{\epsilon}(x):=\frac{if(x+i\epsilon)}{\epsilon x+i},\quad x\in\mathbb{R}, \epsilon>0.\] These functions are contained in \(\mathcal{H}^{1}(\mathbb{R})\cap\mathcal{H}^{2}(\mathbb{R})\) for each \(\epsilon>0\), and they are analytic in a half-space larger than \(\mathbb{H}\). Note that \[\left|\frac{i}{\epsilon x+i}\right|\leq 1,\quad x\in\mathbb{R}\] and that \[\lim_{\epsilon\to 0^{+}}\frac{i}{\epsilon x+i}=1.\] We readily see from Proposition 2.1 and the dominated convergence theorem that we have \[\lim_{\epsilon\to 0^{+}}\int_{\mathbb{R}}|f_{\epsilon}-f|^{2}dx=0.\] By Plancherel's theorem, we therefore also have \[\lim_{\epsilon\to 0^{+}}\int_{\mathbb{R}_{+}}|\widehat{f}_{\epsilon}-\widehat{f}|^{2 }d\zeta=0.\] A fortiori, we have \[\lim_{\epsilon\to 0^{+}}\int_{\mathbb{R}}|f_{\epsilon}-f|^{2}w\,dx=0\] and \[\lim_{\epsilon\to 0^{+}}\int_{\mathbb{R}_{+}}|\widehat{f}_{\epsilon}-\widehat{f}|^{ 2}\rho\,d\zeta=0.\] Thus, as \(\epsilon\to 0\), the tuples \(Jf_{\epsilon}\in\mathcal{H}(w,\rho)\) converge in the norm of the space to the tuple \((f,\widehat{f})\), which is therefore contained in \(\mathcal{H}(w,\rho)\). This proves the first statement of the proposition. The second has the same proof, we merely start with \(f\in\mathcal{H}^{1}(\mathbb{R})\cap\mathcal{H}^{2}(\mathbb{R})\) and run the same argument. The shift operators \[U^{s}f(x)=e^{ixs}f(x),\quad f\in\mathcal{L}^{2}(\mathbb{R},w\,dx)\] are unitary. 
Using the convention that \(g(x)\equiv 0\) for \(x<0\) and \(g\in\mathcal{L}^{2}(\mathbb{R}_{+},\rho\,dx)\), the translation operators \[\widehat{U^{s}}g(x):=g(x-s),\quad g\in\mathcal{L}^{2}(\mathbb{R}_{+},\rho\,dx)\] are contractions on \(\mathcal{L}^{2}(\mathbb{R}_{+},\rho\,dx)\), whenever \(s>0\). This fact is a consequence of the assumption that \(\rho\) is decreasing: \[\int_{\mathbb{R}_{+}}|g(x-s)|^{2}\rho(x)dx =\int_{s}^{\infty}|g(x-s)|^{2}\rho(x)dx\] \[\leq\int_{s}^{\infty}|g(x-s)|^{2}\rho(x-s)dx\] \[=\int_{\mathbb{R}_{+}}|g(x)|^{2}\rho(x)dx.\] We used that \(g(x-s)\) vanishes for \(x\in(0,s)\). Consequently, the operators \[U^{s}_{*}:=U^{s}\oplus\widehat{U^{s}},\quad s>0\] are bounded on the space \(\mathcal{L}^{2}(\mathbb{R},w\,dx)\oplus\mathcal{L}^{2}(\mathbb{R}_{+},\rho\,dx)\). Moreover, the Hardy subspace \(\mathcal{H}(w,\rho)\) is invariant for these operators. Indeed, if \(f\in\mathcal{H}^{1}(\mathbb{R})\cap\mathcal{H}^{2}(\mathbb{R})\), then by the well-known property of the Fourier transform \(\widehat{U^{s}f}=\widehat{U^{s}}\widehat{f}\), we obtain \[U^{s}_{*}Jf=(U^{s}f,\widehat{U^{s}}\widehat{f})=(U^{s}f,\widehat{U^{s}f})=JU^ {s}f.\] The function \(U^{s}f\) is contained in \(\mathcal{H}^{1}(\mathbb{R})\cap\mathcal{H}^{2}(\mathbb{R})\), and so the above relation shows that a dense subset of \(\mathcal{H}(w,\rho)\) is mapped into \(\mathcal{H}(w,\rho)\) under each of the bounded operators \(U^{s}_{*}\). The mentioned invariance follows. ## 4. Strategy of the proofs This section outlines the strategy of the proofs of Theorem \(A\) and Theorem \(B\). ### Two easy computations In our strategy, we will need to use the results of the following two computations. **Lemma 4.1**.: _Let_ \[\psi_{z}(x):=\frac{i}{\sqrt{2\pi}(x-\overline{z})},\quad z\in\mathbb{H}.\] _The Fourier transform \(\widehat{\psi_{z}}\) equals_ \[\widehat{\psi_{z}}(\zeta)=e^{-i\overline{z}\zeta}\mathbbm{1}_{\mathbb{R}_{+}} (\zeta),\] _and the Fourier transform \(\widehat{\overline{\psi_{z}}}\) of the conjugate of \(\psi_{z}\) equals_ \[\widehat{\overline{\psi_{z}}}(\zeta)=e^{-iz\zeta}\mathbbm{1}_{\mathbb{R}_{-}} (\zeta).\] Proof.: It is perhaps easiest to apply the Fourier inversion formula to the asserted formula for \(\widehat{\psi_{z}}\). We readily compute \[\int_{\mathbb{R}}\widehat{\psi_{z}}(\zeta)e^{i\zeta x}\,d\lambda( \zeta) =\frac{1}{\sqrt{2\pi}}\int_{0}^{\infty}e^{(-i\overline{z}+ix)\zeta }\,d\zeta\] \[=\psi_{z}(x).\] The other formula follows from \(\widehat{\overline{\psi_{z}}}(\zeta)=\overline{\widehat{\psi_{z}}(-\zeta)}\), which is an easily established property of the Fourier transform. **Proposition 4.2**.: _Assume that \(f\in\mathcal{L}^{2}(\mathbb{R},dx)\) satisfies_ \[\rho_{\widehat{f}}(x)=\int_{x}^{\infty}|\widehat{f}(\zeta)|d\zeta=\mathcal{O} \big{(}e^{-c\sqrt{x}}\big{)},\quad x>0\] _for some \(c>0\), and let_ \[s(x):=\overline{\psi_{i}(x)}=\frac{-i}{\sqrt{2\pi}(x-i)}.\] _Then_ \[\big{|}\widehat{fs}(\zeta)\big{|}=\mathcal{O}\big{(}e^{-c\sqrt{\zeta}}\big{)},\quad\zeta>0.\] Proof.: Note that \(fs\in\mathcal{L}^{1}(\mathbb{R},dx)\), and recall that the Fourier transform \(\widehat{fs}\) is thus a continuous function given by the convolution of the Fourier transforms of \(f\) and \(s\).
By Lemma 4.1, we obtain \[\widehat{s}(\zeta)=e^{\zeta}\mathbbm{1}_{\mathbb{R}_{-}}(\zeta),\] and so \[\widehat{fs}(\zeta)=\int_{\mathbb{R}}\widehat{f}(x)e^{\zeta-x}\mathbbm{1}_{ \mathbb{R}_{-}}(\zeta-x)\,d\lambda(x)=\int_{\zeta}^{\infty}\widehat{f}(x)e^{ \zeta-x}\,d\lambda(x).\] The exponential term in the last integral is bounded by \(1\). Therefore \(\big{|}\widehat{fs}(\zeta)\big{|}\leq\rho_{\widehat{f}}(\zeta)\), and the desired estimate follows from the decay assumption on \(\rho_{\widehat{f}}\). ### Strategy of the proof of Theorem \(A\) Given a function \(f\in\mathcal{L}^{2}(\mathbb{R},dx)\) we consider the set \[E:=\{x\in\mathbb{R}:|f(x)|>0\},\] which is well-defined up to a set of Lebesgue measure zero. Let \(\mathcal{F}\) denote the family of all finite open intervals \(I\) which satisfy \[\int_{I}\log|f(x)|\,dx>-\infty,\] and set \[U:=\cup_{I\in\mathcal{F}}I.\] Since \(\log|f|\equiv-\infty\) on \(I\setminus E\) for every interval \(I\), it follows that if \(\log|f|\) is integrable on \(I\), then the set difference \(I\setminus E\) must have measure zero. Consequently, since one can easily argue that we can express \(U\) as a _countable_ union of intervals \(I\) on which \(\log|f|\) is integrable, the Lebesgue measure of the set difference \(U\setminus E\) must be zero. However, the set difference \(E\setminus U\) might have positive measure. We set \[\operatorname{res}(f):=E\setminus U\] and call this set the _residual_ of \(f\). The residual is well-defined up to a set of Lebesgue measure zero. **Claim 1**.: Under the assumption that \(\rho_{\widehat{f}}(\zeta)=\mathcal{O}\big{(}e^{-c\sqrt{\zeta}}\big{)}\) for some \(c>0\), the set \(\operatorname{res}(f)\) has Lebesgue measure zero. Theorem \(A\) follows immediately from the above claim. Indeed, the roles of \(f\) and \(\widehat{f}\) may obviously be interchanged in the statement of Theorem \(A\), and the above claim implies that the open set \(U\) equals \(E\) up to an error of measure zero. Local integrability of \(\log|f|\) on the set \(U\) follows from its construction. We set \[w(x):=\min(|f(x)|^{2},1).\] Note that \(\operatorname{res}(w)=\operatorname{res}(f)\) and that \(w\in\mathcal{L}^{1}(\mathbb{R},dx)\). Our Claim 1 will follow from the next assertion. **Claim 2**.: Let \(\rho:\mathbb{R}_{+}\to\mathbb{R}_{+}\) be a bounded, continuous, non-negative and decreasing function which satisfies \(\rho(x)=\mathcal{O}\big{(}e^{-d\sqrt{x}}\big{)}\) for some \(d>0\) and \(x>0\). Then, every tuple of the form \[(h,0)\in\mathcal{L}^{2}(\mathbb{R},w\,dx)\oplus\mathcal{L}^{2}(\mathbb{R}_{+ },\rho\,dx),\] where \(h\) is any function in \(\mathcal{L}^{2}(\mathbb{R},dx)\) which lives only on the set \(\operatorname{res}(w)\), is contained in the Hardy subspace \(\mathcal{H}(w,\rho)\). To prove Claim 1 from Claim 2 we will use a trick involving Plancherel's theorem. We set \(\rho(x)=e^{-c\sqrt{x}}\), where \(c>0\) is the constant appearing in Claim 1. Let \(h\) be as in Claim 2, and \[s(x):=\overline{\psi_{i}}(x)=\frac{-i}{\sqrt{2\pi}(x-i)}\] be as in Proposition 4.2. We will show that \[\int_{\mathbb{R}}h\overline{fs}\,dx=0.\] This implies, by the generality of \(h\), that \(fs\) is zero on the set \(\operatorname{res}(w)=\operatorname{res}(f)\). Since \(s\) is non-zero everywhere on \(\mathbb{R}\), in fact \(f\) is zero on \(\operatorname{res}(f)\). Since \(\operatorname{res}(f)\subset E=\{x\in\mathbb{R}:|f(x)|>0\}\), it follows that the residual has Lebesgue measure zero. 
Thus establishing the vanishing of the above integral is sufficient to prove Claim 1 from Claim 2. We do so next. Because \((h,0)\in\mathcal{H}(w,\rho)\), there exists a sequence \(\{g_{n}\}\) of functions \(g_{n}\in\mathcal{H}^{1}(\mathbb{R})\cap\mathcal{H}^{2}(\mathbb{R})\) such that \(g_{n}\to h\) in the norm of \(\mathcal{L}^{2}(\mathbb{R},w\,dx)\) and \(\widehat{g_{n}}\to 0\) in the norm of \(\mathcal{L}^{2}(\mathbb{R}_{+},\rho\,dx)\). Consider the quantities \[\int_{\mathbb{R}}(h-g_{n})\overline{fs}\,d\lambda=\int_{E}(h-g_{n})\sqrt{w}\frac{\overline{fs}}{\sqrt{w}}d\lambda.\] We passed from the domain of integration \(\mathbb{R}\) to \(E\), since \(f\) vanishes outside of \(E\) anyway (note also that \(w>0\) almost everywhere on \(E\)). By the Cauchy-Schwarz inequality, we obtain \[\bigg{|}\int_{\mathbb{R}}(h-g_{n})\overline{fs}\,d\lambda\bigg{|}\leq\sqrt{\int_{\mathbb{R}}|h-g_{n}|^{2}w\,d\lambda}\sqrt{\int_{E}\frac{|f|^{2}}{w}|s|^{2}d\lambda}.\] Note that the first of the factors on the right-hand side of the inequality above converges to \(0\). The other factor is finite. Indeed, since \(|f|^{2}/w\equiv 1\) on the set where \(|f|<1\), and \(|f|^{2}/w=|f|^{2}\) on the set where \(|f|\geq 1\), we obtain \(\frac{|f|^{2}}{w}\leq|f|^{2}+1\), and consequently \[\frac{|f|^{2}}{w}|s|^{2}\leq|f|^{2}|s|^{2}+|s|^{2}\in\mathcal{L}^{1}(\mathbb{R},dx).\] This computation implies the formula \[\int_{\mathbb{R}}h\overline{fs}\,d\lambda=\lim_{n\to\infty}\int_{\mathbb{R}}g_{n}\overline{fs}\,d\lambda=\lim_{n\to\infty}\int_{\mathbb{R}_{+}}\widehat{g_{n}}\,\overline{\widehat{fs}}\,d\lambda,\] where we used Plancherel's theorem in the last step. Now, recall that by Proposition 4.2 we may estimate \[\left|\int_{\mathbb{R}_{+}}\widehat{g_{n}}\,\overline{\widehat{fs}}\,d\lambda\right|\leq A\int_{\mathbb{R}_{+}}|\widehat{g_{n}}(\zeta)|e^{-c\sqrt{\zeta}}\,d\lambda(\zeta)\] for some positive constant \(A\). By again using the Cauchy-Schwarz inequality, we obtain \[\left|\int_{\mathbb{R}_{+}}\widehat{g_{n}}\,\overline{\widehat{fs}}\,d\lambda\right|\leq A\sqrt{\int_{\mathbb{R}_{+}}|\widehat{g_{n}}(\zeta)|^{2}e^{-c\sqrt{\zeta}}\,d\lambda(\zeta)}\sqrt{\int_{\mathbb{R}_{+}}e^{-c\sqrt{\zeta}}\,d\lambda(\zeta)}.\] The second factor on the right-hand side is certainly finite. Since \(\rho(x)=e^{-c\sqrt{x}}\) and \(\widehat{g_{n}}\to 0\) in \(\mathcal{L}^{2}(\mathbb{R}_{+},\rho\,dx)\), the first factor above converges to \(0\), as \(n\to\infty\). All in all, we have obtained that \[\int_{\mathbb{R}}h\overline{fs}\,d\lambda=\lim_{n\to\infty}\int_{\mathbb{R}_{+}}\widehat{g_{n}}\,\overline{\widehat{fs}}\,d\lambda=0.\] By the earlier discussion, this is sufficient to establish Claim 1 from Claim 2. We need to prove Claim 2 in order to prove Theorem \(A\). We will do so in the coming sections. ### Strategy of the proof of Theorem \(B\) We will derive Theorem \(B\) from the following claim. **Claim 3**.: There exists a compact set \(E\subset\mathbb{R}\) of positive Lebesgue measure, and an increasing function \(M:\mathbb{R}_{+}\to\mathbb{R}_{+}\) which satisfies \[\lim_{x\to\infty}\frac{M(x)}{x^{a}}=\infty \tag{4.1}\] for every \(a\in(0,1/2)\), such that if \[\rho(x):=e^{-M(x)},\quad x>0,\] then the Hardy subspace \(\mathcal{H}(\mathbb{1}_{E},\rho)\) is properly contained in \(\mathcal{L}^{2}(E,\mathbb{1}_{E}\,dx)\oplus\mathcal{L}^{2}(\mathbb{R}_{+},\rho\,dx)\). We proceed to show how one proves Theorem \(B\) from this claim. 
Let \(E\) and \(\rho=e^{-M}\) be as in Claim 3, and assume that the non-zero tuple \((h,k)\in\mathcal{L}^{2}(E,\mathbb{1}_{E}\,dx)\oplus\mathcal{L}^{2}(\mathbb{R}_{+},\rho\,dx)\) is orthogonal to \(\mathcal{H}(\mathbb{1}_{E},\rho)\). We shall soon see that \(h\) is in fact non-zero. The function \(h\) lives only on the set \(E\), and we will show that it has the required spectral decay. Recall Lemma 4.1, let \(\psi_{z}\in\mathcal{H}^{2}(\mathbb{R})\) be as in that lemma, and let \(g\) be the inverse Fourier transform of \(k\rho\), so that \(\widehat{g}=k\rho\in\mathcal{L}^{2}(\mathbb{R}_{+},dx)\). In fact \(g\in\mathcal{H}^{2}(\mathbb{R})\), since its spectrum is positive. The orthogonality means that \[0 =\int_{E}h\overline{\psi_{z}}\,d\lambda+\int_{\mathbb{R}_{+}}k\overline{\widehat{\psi_{z}}}\rho\,d\lambda\] \[=\int_{E}h\overline{\psi_{z}}\,d\lambda+\int_{\mathbb{R}}g\overline{\psi_{z}}\,d\lambda.\] We used Plancherel's theorem in the second equality. The above relation shows that the function \(G:=h+g\in\mathcal{L}^{2}(\mathbb{R},dx)\) is orthogonal in \(\mathcal{L}^{2}(\mathbb{R},dx)\) to each of the functions \(\psi_{z}\). Let \(P:\mathcal{L}^{2}(\mathbb{R},dx)\to\mathcal{H}^{2}(\mathbb{R})\) be the orthogonal projection. In terms of Fourier transforms, we have \[\widehat{Ph}(\zeta)=\widehat{h}(\zeta)\mathbb{1}_{\mathbb{R}_{+}}(\zeta).\] Then \[G_{0}:=PG=Ph+g\] is orthogonal not only to \(\psi_{z}\), but also to \(\overline{\psi_{z}}\), since by Lemma 4.1 the functions \(\overline{\psi_{z}}\) have spectrum contained in \(\mathbb{R}_{-}\). But then the decomposition formula for the Poisson kernel in (2.1) shows that \[\int_{\mathbb{R}}G_{0}(t)\mathcal{P}(t,z)\,dt=0\] for each \(z\in\mathbb{H}\), and it is an elementary fact about the Poisson kernel that we must, in this case, have \(G_{0}\equiv 0\). So \(Ph=-g\). We can now argue that \(h\neq 0\). Indeed, if \(h=0\), then \(g=0\). Since \(\widehat{g}=k\rho\), that would mean \(k=0\), contradicting that the tuple \((h,k)\) is non-zero. Having established that \(h\neq 0\), we proceed by taking Fourier transforms to obtain \[\widehat{h}\mathbb{1}_{\mathbb{R}_{+}}=\widehat{Ph}=-\widehat{g}=-k\rho.\] Using the Cauchy-Schwarz inequality, we may now estimate \[\rho_{\widehat{h}}(x) =\int_{x}^{\infty}|k|\rho\,d\zeta\] \[\leq\sqrt{\int_{x}^{\infty}\rho\,d\zeta}\sqrt{\int_{x}^{\infty}|k|^{2}\rho\,d\zeta},\quad x>0.\] The second factor is finite, and the growth of \(M\) asserted in Claim 3 implies that for every fixed \(a\in(0,1/2)\) there exists a constant \(C(a)>0\) for which we have \[\rho(\zeta)=e^{-M(\zeta)}\leq e^{-C(a)\zeta^{a}},\quad\zeta>0.\] It follows that the integral inside the square root of the first factor above satisfies \[\int_{x}^{\infty}\rho\,d\zeta\leq\int_{x}^{\infty}e^{-C(a)\zeta^{a}}\,d\zeta=\mathcal{O}\big{(}e^{-C(a)x^{a}}\big{)},\quad x>0.\] Since \(a\in(0,1/2)\) was arbitrary, we conclude that \(\rho_{\widehat{h}}(x)=\mathcal{O}\big{(}e^{-x^{a}}\big{)}\) for every \(a\in(0,1/2)\). This easily implies Theorem \(B\). It follows that Theorem \(B\) is implied by Claim 3. ## 5. Proof of Theorem \(A\) Our proof is an adaptation to the half-plane setting of the authors' technique from [11], and in fact the two proofs are very similar. The problem studied in [11] is different, but in both the present work and in the reference, the main trick consists of constructing a highly oscillating sequence of functions which simultaneously obey appropriate spectral bounds. 
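Before turning to the construction, it may help to record where spectral bounds of the shape \(e^{2\sqrt{c}\sqrt{\zeta}}\), used in the next subsection, come from. The following is only a sketch of the mechanism presumably behind Corollary 2.4, under the assumed bound \(|g(x+iy)|\leq C_{0}\frac{e^{c/y}}{1+x^{2}}\) on each horizontal line \(\operatorname{Im}z=y\) (the constants \(C_{0}\) and \(C_{1}\) are assumptions made for illustration). The formula of Proposition 2.3 then gives \[|\widehat{g}(\zeta)|\leq e^{y\zeta}\int_{\mathbb{R}}|g(x+iy)|\,d\lambda(x)\leq C_{1}e^{y\zeta+c/y},\quad\zeta>0,\;y>0,\] and optimizing the exponent over \(y>0\), \[\min_{y>0}\Big{(}y\zeta+\frac{c}{y}\Big{)}=2\sqrt{c\zeta},\quad\text{attained at }y=\sqrt{c/\zeta},\] yields \(|\widehat{g}(\zeta)|\leq C_{1}e^{2\sqrt{c}\sqrt{\zeta}}\).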
### A sufficient construction We start by reducing our task to the construction of a certain sequence of bounded functions. Recall that \(w(x)=\min(|f(x)|^{2},1)\in\mathcal{L}^{1}(\mathbb{R},dx)\) and that \(\rho\) has the decay \(\rho(\zeta)=\mathcal{O}\big{(}e^{-d\sqrt{\zeta}}\big{)}\) for \(\zeta>0\) and some \(d>0\). Note that we may assume throughout that \[\int_{\mathbb{R}}\frac{\log w(x)}{1+x^{2}}\,dx=-\infty.\] Indeed, if on the contrary this integral converges, then \(\operatorname{res}(w)=\operatorname{res}(f)\) is empty, and both Claim 2 of Section 4 and Theorem \(A\) (with \(f\) and \(\widehat{f}\) playing opposite roles) hold trivially. We may decompose \(\operatorname{res}(w)\) as \[\operatorname{res}(w)=\cup_{m\geq 1}F_{m}\] where \[F_{m}:=[-m,m]\cap\{x\in\mathbb{R}:w(x)>1/m\}\cap\operatorname{res}(w). \tag{5.1}\] The set equality above holds up to an error of measure zero. The sets \(F_{m}\) are bounded, and on each of them \(w\) is bounded from below. **Proposition 5.1**.: _In order to establish Claim \(2\), it suffices to construct, for any fixed \(m\in\mathbb{N}\) and \(c>0\), a sequence of functions \(\{h_{n}\}_{n}\) in \(\mathcal{H}^{\infty}(\mathbb{R})\) which has the following properties._ 1. _The analytic extensions of the functions_ \(h_{n}\) _to_ \(\mathbb{H}\) _satisfy the bound_ \(|h_{n}(x+iy)|\leq e^{\frac{c}{y}}\) _for_ \(y>0\)_,_ 2. \(\lim_{n\to\infty}h_{n}(x)=0\) _for almost every_ \(x\in F_{m}\)_,_ 3. \(\lim_{n\to\infty}h_{n}(z)=1\) _for every_ \(z\in\mathbb{H}\)_,_ 4. _there exist_ \(A>0\) _and_ \(p>2\) _such that_ \(|h_{n}(x)|^{p}w(x)<A\) _for almost every_ \(x\in\mathbb{R}\) _and every_ \(n\)_._ Proof.: Property \((i)\), together with Corollary 2.4, implies that the functions \(g_{n}(x)=\frac{h_{n}(x)}{(i+x)^{2}}\in\mathcal{H}^{1}(\mathbb{R})\cap\mathcal{H}^{2}(\mathbb{R})\) obey the spectral bound \(|\widehat{g_{n}}(\zeta)|\leq\sqrt{\frac{\pi}{2}}e^{2\sqrt{c}\sqrt{\zeta}},\ \zeta>0\). Since \(\rho(\zeta)=\mathcal{O}\big{(}e^{-d\sqrt{\zeta}}\big{)}\) for some \(d>0\), the spectral bound implies that \[\sup_{n}\int_{\mathbb{R}_{+}}|\widehat{g_{n}}|^{2}\rho\,d\zeta<\infty\] if \(c\) is small enough. Together with \((iv)\), we see that \(Jg_{n}=(g_{n},\widehat{g_{n}})\) forms a bounded subset of the Hardy subspace of the product space \(\mathcal{L}^{2}(\mathbb{R},w\,dx)\oplus\mathcal{L}^{2}(\mathbb{R}_{+},\rho\,dx)\). We may thus assume, by passing to a subsequence, that \(\{Jg_{n}\}_{n}\) tends weakly in \(\mathcal{H}(w,\rho)\) to some tuple \((h,k)\). In fact, \((iv)\) implies that \(\{g_{n}\}_{n}\) is a sequence bounded in \(\mathcal{L}^{p}(\mathbb{R},w\,dx)\), so we have that \(h\in\mathcal{L}^{p}(\mathbb{R},w\,dx)\) for some \(p>2\). The fact that \(h\equiv 0\) on \(F_{m}\) is a consequence of the weak convergence of \(g_{n}\) to \(h\) and the condition \((ii)\), which implies \(\lim_{n\to\infty}g_{n}(x)=0\) for almost every \(x\in F_{m}\). Moreover, by the formula in Proposition 2.3, we have \[\widehat{g_{n}}(\zeta)=e^{y\zeta}\int_{\mathbb{R}}\frac{h_{n}(x+iy)}{(i+x+iy)^{2}}e^{-ix\zeta}d\lambda(x)\] for any \(y>0\). 
The integrand converges pointwise to \(\frac{e^{-ix\zeta}}{(i+x+iy)^{2}}\) by \((iii)\), and it is dominated pointwise by the integrable function \[x\mapsto\frac{e^{c/y}}{1+x^{2}},\quad x\in\mathbb{R}.\] The dominated convergence theorem implies that \[\lim_{n\to\infty}\widehat{g_{n}}(\zeta)=e^{y\zeta}\int_{\mathbb{R}}\frac{1}{(i+x+iy)^{2}}e^{-ix\zeta}\,d\lambda(x)=\int_{\mathbb{R}}\frac{1}{(i+x)^{2}}e^{-ix\zeta}\,d\lambda(x)=\widehat{\Psi}(\zeta),\] where \(\Psi(x):=\frac{1}{(i+x)^{2}}\in\mathcal{H}^{1}(\mathbb{R})\cap\mathcal{H}^{2}(\mathbb{R})\). In the next-to-last equality we used Proposition 2.3 backwards. Weak and pointwise convergence implies, as previously, that \(k=\widehat{\Psi}\). Since \(J\Psi\in\mathcal{H}(w,\rho)\), we have that \((h,k)-J\Psi=(h-\Psi,0)\in\mathcal{H}(w,\rho)\). The function \(h-\Psi\) is non-zero almost everywhere on \(F_{m}\). Indeed, \(h\) vanishes on \(F_{m}\), and \(\Psi(x)\) is non-zero everywhere on \(F_{m}\). Also, since \(h\in\mathcal{L}^{p}(\mathbb{R},w\,dx)\), we have \(h-\Psi\in\mathcal{L}^{p}(\mathbb{R},w\,dx)\). The conditions to apply Corollary 2.7 are thus satisfied, and by the invariance of \(\mathcal{H}(w,\rho)\) under the operators \(U^{s}_{*}\) defined in Section 3, we conclude that \[\mathcal{L}^{2}(E,w\,dx)\oplus\{0\}\subset\mathcal{H}(w,\rho),\] where \(F_{m}\subset E:=\{x\in\mathbb{R}:|h(x)-\Psi(x)|>0\}\). Since \(m\) is arbitrary, we conclude that \[\mathcal{L}^{2}(\cup_{m}F_{m},w\,dx)\oplus\{0\}=\mathcal{L}^{2}(\operatorname{res}(w),w\,dx)\oplus\{0\}\subset\mathcal{H}(w,\rho).\] This is sufficient to conclude the validity of Claim 2. ### An estimate for Poisson integrals Let \(\mu\) be a finite real-valued measure on \(\mathbb{R}\). The _Poisson integral_ of \(\mu\) is the harmonic function \(\mathcal{P}_{\mu}:\mathbb{H}\to\mathbb{R}\) given by the formula \[\mathcal{P}_{\mu}(z):=\int_{\mathbb{R}}\mathcal{P}(t,z)\,d\mu(t)=\frac{1}{\pi}\int_{\mathbb{R}}\frac{y}{(x-t)^{2}+y^{2}}\,d\mu(t),\quad z=x+iy\in\mathbb{H}.\] By the triangle inequality, and an estimation of \(\mathcal{P}(t,z)\) by its supremum \(\frac{1}{\pi y}\) over \(\mathbb{R}\), we easily obtain the inequality \[\big{|}\mathcal{P}_{\mu}(z)\big{|}\leq\frac{|\mu|(\mathbb{R})}{\pi y},\quad z=x+iy\in\mathbb{H},\] where \(|\mu|\) denotes the usual variation of the measure \(\mu\). We obtain a much better inequality for measures \(\mu\) which are oscillating rapidly. The following lemma is the half-plane version of an estimate in [11, Lemma 3.2]. **Lemma 5.2**.: _Let \(\mu\) be a finite real-valued measure on \(\mathbb{R}\) which has the following structure: there exists a finite sequence of disjoint intervals \(\{I_{j}\}_{j}\) of \(\mathbb{R}\), and a decomposition \(\mu=\sum_{j}\mu_{j}\), where \(\mu_{j}\) is a real-valued measure supported inside \(I_{j}\), \(\mu_{j}(I_{j})=0\), and \(|\mu_{j}|(I_{j})\leq C\) for some \(C>0\) which is independent of \(j\). Then_ \[\big{|}\mathcal{P}_{\mu}(x+iy)\big{|}\leq\frac{C}{\pi y},\quad x+iy\in\mathbb{H}.\] Proof.: Since \(-\mu\) satisfies the same conditions as \(\mu\), and so does any translation of \(\mu\), it suffices to prove that \(\mathcal{P}_{\mu}(y)\leq\frac{C}{\pi y}\) for any \(y>0\). 
We have \[\mathcal{P}_{\mu}(y)=\sum_{j}\frac{1}{\pi}\int_{\mathbb{R}}\frac{y}{t^{2}+y^{2 }}d\mu_{j}(t).\] If \(\mu_{j}=\mu_{j}^{+}-\mu_{j}^{-}\) is the decomposition of \(\mu_{j}\) into its positive and negative parts, then we have the estimate \[\mathcal{P}_{\mu}(y) \leq\frac{1}{\pi}\sum_{j}\sup_{t\in I_{j}}\frac{y}{t^{2}+y^{2}} \cdot\mu_{j}^{+}(I_{j})-\inf_{t\in I_{j}}\frac{y}{t^{2}+y^{2}}\cdot\mu_{j}^{-} (I_{j})\] \[\leq\frac{C}{2\pi}\sum_{j}\sup_{t\in I_{j}}\frac{y}{t^{2}+y^{2}} -\inf_{t\in I_{j}}\frac{y}{t^{2}+y^{2}}\] \[:=\frac{C}{2\pi}S\] In the second step we used that \(\mu_{j}(I_{j})=\mu_{j}^{+}(I_{j})-\mu_{j}^{-}(I_{j})=0\) and that \(|\mu_{j}|(I_{j})=\mu_{j}^{+}(I_{j})+\mu_{j}^{-}(I_{j})\leq C\). Since the intervals \(\{I_{j}\}_{j}\) are disjoint, and the function \(t\mapsto\frac{y}{t^{2}+y^{2}}\) is increasing for \(t<0\), decreasing for \(t>0\), and attains a maximum value of \(1/y\) at \(t=0\), the sum \(S\) in the above estimate cannot be larger than \(\frac{2}{y}\) (it is easily seen to be bounded by twice the height of the graph of \(t\mapsto\frac{y}{t^{2}+y^{2}}\)). The estimate follows. ### The construction In accordance with the earlier discussion in Section 5.1, we recall the decomposition (5.1) and assume below that \(F:=F_{m}\) is a bounded subset of \(\operatorname{res}(w)\cap\{x\in\mathbb{R}:w(x)>\delta\}\) for some \(\delta>0\). The set \(F\) inherits the following property from \(\operatorname{res}(w)\): if \(I\) is an interval, and \(|I\cap F|>0\), then \(\int_{I}\log w\,dx=-\infty\). **Lemma 5.3**.: _If \(|I\cap F|>0\) for some finite interval \(I\), then given any \(c>0\) and any \(p>0\), there exists \(D>0\) and a measurable subset \(E_{I}\subset I\) disjoint from \(F\) for which we have_ \[\int_{E_{I}}\min\left(p^{-1}\log^{+}(1/w),D\right)dx=c. \tag{5.2}\] Proof.: On the set \(F\), \(\log(w)\) is bounded from below by \(\log\delta\). Hence \(\int_{I\setminus F}\log w\,dx=-\infty\). Consequently, \[\lim_{D\to+\infty}\int_{I\setminus F}\min\left(p^{-1}\log^{+}(1/w),D\right)dx =\int_{I\setminus F}p^{-1}\log^{+}(1/w)\,dx=+\infty.\] So for \(D\) sufficiently large we will have \(\int_{I\setminus F}\min\left(p^{-1}\log^{+}(1/w),D\right)dx>c\), and then by the absolute continuity of the finite positive measure \(\min\left(p^{-1}\log^{+}(1/w),D\right)dx\) we may choose a set \(E_{I}\subset I\setminus F\) for which (5.2) holds. We will now construct the sequence in Proposition 5.1. Let \(p>2\), \(\{c_{n}\}_{n\geq 1}\) be some sequence of positive numbers which tends to \(0\) slowly enough so that \(c_{n}2^{n}\) tends to \(+\infty\), and let \(F\) be as above. Fix some integer \(n>0\), cover \(\mathbb{R}\) by a sequence of (say, half-open) disjoint intervals of length \(2^{-n}\) and let \(\{\ell_{k}\}_{k}\) be those intervals for which \(|\ell_{k}\cap F|>0\). Apply Lemma 5.3 with \(c=c_{n}\) to each of the intervals \(\ell_{k}\) to obtain a corresponding constant \(D_{k}>0\) and a set \(E_{k}:=E_{\ell_{k}}\subset\ell_{k}\) for which (5.2) holds. We set \(d\mu(t)=\log W(t)\,dt\), where \[\log W(t)=\sum_{k}\min\left(p^{-1}\log^{+}(1/w(t)),D_{k}\right)\mathbb{1}_{E_{ k}}(t)-\frac{c_{n}}{|\ell_{k}\cap F|}\mathbb{1}_{\ell_{k}\cap F}(t).\] Then \(\mu\) is an absolutely continuous real-valued measure with bounded density \(\log W\), \(\mu(\ell_{k})=0\) and \(|\mu|(\ell_{k})=2c_{n}\). We construct \(h_{n}\in\mathcal{H}^{\infty}(\mathbb{R})\) by letting the logarithm \(\log h_{n}(z)\) of its extension to \(\mathbb{H}\) be given by the right-hand side of the formula (2.7). 
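As a brief aside before Lemma 5.2 is applied to this measure, its estimate can also be checked numerically. The following sketch builds an oscillating density of the above type on unit intervals and verifies the bound \(|\mathcal{P}_{\mu}(x+iy)|\leq C/(\pi y)\); the interval system, the constant \(C\), the evaluation points, and the grid sizes are assumptions made for illustration, not part of the proof.

```python
import numpy as np

# Numerical sanity check of Lemma 5.2 (all concrete values are assumed).
# mu has density +C on the left half and -C on the right half of each unit
# interval I_j = [j, j+1), so that mu_j(I_j) = 0 and |mu_j|(I_j) = C.
C, y = 1.0, 0.05
xs = np.linspace(0.0, 50.0, 500_001)          # quadrature grid on supp(mu)
dx = xs[1] - xs[0]
density = np.where((xs % 1.0) < 0.5, C, -C)   # oscillating density of mu

for x0 in (0.0, 10.3, 25.0):                  # evaluation points x0 + i*y
    kernel = (1.0 / np.pi) * y / ((x0 - xs) ** 2 + y ** 2)
    P_mu = np.sum(kernel * density) * dx      # Poisson integral of mu
    assert abs(P_mu) <= C / (np.pi * y)       # the bound of Lemma 5.2
```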
Then, by Lemma 5.2, we have the inequalities \[|\mathcal{P}_{\mu}(x+iy)|\leq\frac{2c_{n}}{\pi y},\quad z=x+iy\in\mathbb{H},\] and consequently, since \(|h_{n}(z)|=e^{\mathcal{P}_{\mu}(z)}\), we have the bounds \[e^{-2c_{n}/\pi y}\leq|h_{n}(z)|\leq e^{2c_{n}/\pi y},\quad z=x+iy\in\mathbb{H}. \tag{5.3}\] Since \(c_{n}\to 0\), by multiplying \(h_{n}\) by an appropriate unimodular constant, we may assume by (5.3) that \[\lim_{n\to\infty}h_{n}(z)=1 \tag{5.4}\] for every \(z\in\mathbb{H}\) (possibly after passing to a subsequence). Also, for almost every \(x\in F\), we have \[|h_{n}(x)|=e^{-c_{n}/|\ell_{k}\cap F|}\leq e^{-c_{n}2^{n}}. \tag{5.5}\] Our assumption on \(c_{n}\) then implies that \(\lim_{n\to\infty}h_{n}(x)=0\) for almost every \(x\in F\). For almost every \(x\in\mathbb{R}\setminus F\), we have instead \[|h_{n}(x)|^{p}w(x)=e^{p\log W(x)}w(x)\leq e^{\log^{+}(1/w(x))}w(x)\leq 1. \tag{5.6}\] The equations (5.3), (5.4), (5.5) and (5.6) show that the sequence \(\{h_{n}\}_{n}\) satisfies the conditions stated in Proposition 5.1. Thus Claim 2 holds, and consequently the proof of Theorem \(A\) is complete. ## 6. Proof of Theorem \(B\) ### A bit of concave analysis Let \(M:\mathbb{R}_{+}\to\mathbb{R}_{+}\) be an increasing and concave function which is differentiable for \(x>0\). We assume that \(M(0)=0\), and that \[M(x)\leq\sqrt{x},\quad x>0. \tag{6.1}\] For a function \(M\) with the above properties, the integrals \[I_{M}(y):=\int_{0}^{\infty}e^{M(x)-2yx}\,dx \tag{6.2}\] converge for every \(y>0\), and an estimation of the growth of \(I_{M}(y)\) as \(y\to 0^{+}\) will be of importance in the proof of Theorem \(B\). To estimate \(I_{M}\), we define a function \(M_{*}\) in the following way. Since \(M\) is increasing, concave and differentiable, the derivative \(M^{\prime}(x)\) is defined for \(x>0\), and it is a positive and decreasing function. The concavity of \(M\) implies that \[M^{\prime}(x)\leq M(x)/x, \tag{6.3}\] from which it follows by (6.1) that \[\lim_{x\to\infty}M^{\prime}(x)=0.\] It is only the asymptotic behaviour of \(M\) as \(x\to\infty\) that concerns us, so we will also assume for convenience that \(\lim_{x\to 0^{+}}M^{\prime}(x)=+\infty\). In this case, the inverse function \[K(y):=(M^{\prime})^{-1}(y),\quad y\in(0,\infty), \tag{6.4}\] is well-defined and positive. It is decreasing, and satisfies \(\lim_{y\to 0^{+}}K(y)=+\infty\). We set \[M_{*}(y):=M(K(y)),\quad y\in(0,\infty). \tag{6.5}\] The function \(M_{*}\) is decreasing, and satisfies \(\lim_{y\to 0^{+}}M_{*}(y)=+\infty\). The integrals \(I_{M}(y)\) can be estimated in terms of \(M_{*}\). **Proposition 6.1**.: _For \(M\) as above, we have_ \[I_{M}(y)\leq\frac{2e^{M_{*}(y)}}{y^{2}}\] _for all sufficiently small \(y>0\)._ Proof.: We will need the following observation. The inequality (6.3) together with (6.1) implies that \(M^{\prime}(x)\leq\frac{1}{\sqrt{x}}\) if \(x>0\) is sufficiently large. We want to show the inequality \[K(y)\leq\frac{1}{y^{2}}\] for sufficiently small \(y>0\). Set \(K(y)=x\) and \(\frac{1}{y^{2}}=x_{*}\). Then \[M^{\prime}(x)=y=\frac{1}{\sqrt{x_{*}}}\geq M^{\prime}(x_{*})\] if \(y>0\) is sufficiently small (and consequently \(x_{*}\) is sufficiently large). Since \(M^{\prime}\) is a decreasing function, the above inequality shows that \(x_{*}\geq x\), which is the same as the desired inequality. We split the integral (6.2) at \(x=K(y)\). Both pieces can be estimated very crudely. 
For the first piece, we have \[\int_{0}^{K(y)}e^{M(x)-2yx}\,dx \leq\int_{0}^{K(y)}e^{M(K(y))}\,dx\] \[=K(y)e^{M_{*}(y)}\] \[\leq\frac{e^{M_{*}(y)}}{y^{2}}.\] We used our initial observation in the last step. For the second piece, we note that \[\sup_{x>0}M(x)-yx=M(K(y))-yK(y)\leq M_{*}(y).\] Indeed, the supremum is attained at the point \(x\) where \(M^{\prime}(x)=y\), which by definition is \(x=K(y)\). Thus \[\int_{K(y)}^{\infty}e^{M(x)-2yx}\,dx \leq\int_{K(y)}^{\infty}e^{M_{*}(y)-xy}\,dx\] \[\leq e^{M_{*}(y)}\int_{0}^{\infty}e^{-yx}\,dx\] \[= \frac{e^{M_{*}(y)}}{y}\] \[\leq \frac{e^{M_{*}(y)}}{y^{2}}.\] In the last inequality we require that \(y\in(0,1)\). We obtain the desired estimate by combining the estimates for the two pieces of the integral. In the proof of Theorem \(B\), a point will come up where we will need integrability of \(M_{*}\) near the origin. The next proposition describes the functions \(M\) corresponding to \(M_{*}\) which are integrable in this way. **Proposition 6.2**.: _For \(M\) as above, the following two statements are equivalent._ 1. \(\int_{0}^{\delta}M_{*}(y)\,dy<\infty\) _for some_ \(\delta>0\)_._ 2. \(\int_{1}^{\infty}(M^{\prime}(x))^{2}\,dx<\infty\)_._ Proof.: We start with the integral in \((i)\) above and implement the change of variable \(y=M^{\prime}(x)\). Then \(dy=M^{\prime\prime}(x)\,dx\) and \(M_{*}(y)=M(x)\), and by next using integration by parts, we obtain \[\int_{0}^{\delta}M_{*}(y)\,dy =-\int_{K(\delta)}^{\infty}M(x)M^{\prime\prime}(x)\,dx\] \[=\lim_{R\to+\infty}-M(R)M^{\prime}(R)+M(K(\delta))M^{\prime}(K( \delta))\] \[+\lim_{R\to+\infty}\int_{K(\delta)}^{R}(M^{\prime}(x))^{2}\,dx.\] The assumptions (6.1) and (6.3) together imply that the quantity \(M^{\prime}(R)M(R)\) stays bounded, as \(R\to+\infty\). Thus the desired equivalence follows from the above computation. **Remark 6.3**.: We should point out that a concave function indeed exists which satisfies all of our conditions. For instance, note that any concave function \(M\) which for large positive \(x\) coincides with \[M(x):=\frac{\sqrt{x}}{\log x}\] satisfies the equivalent conditions of the above proposition. Indeed, one verifies by differentiation that the right-hand side is concave on some interval \([A,\infty)\) and that the integral in \((ii)\) of Proposition 6.2 converges. Such a function \(M\) can be easily chosen to satisfy all assumptions made in this section. Moreover, clearly it satisfies (4.1). ### A growth estimate for Hardy class functions **Proposition 6.4**.: _Let \(f\in\mathcal{H}^{1}(\mathbb{R})\cap\mathcal{H}^{2}(\mathbb{R})\) have a Fourier transform \(\widehat{f}\) satisfying_ \[\int_{\mathbb{R}_{+}}|\widehat{f}|^{2}\rho\,d\zeta\leq C\] _for some constant \(C>0\) and \(\rho(\zeta)=e^{-M(\zeta)}\). Then there exist a positive constant \(\delta\) such that the analytic extension of \(f\) to \(\mathbb{H}\) satisfies the estimate_ \[|f(x+iy)|\leq\sqrt{2C}\frac{e^{M_{*}(y)}}{y},\quad y\in(0,\delta).\] Proof.: By developments of Section 2.1, we have \[f(z)=\int_{\mathbb{R}}\frac{f(t)}{t-z}\frac{dt}{2\pi i}=\int_{\mathbb{R}}f(t) \overline{\psi_{z}(t)}d\lambda(t),\quad z=x+iy\in\mathbb{H},\] and where \(\psi_{z}\in\mathcal{H}^{2}(\mathbb{R})\) is as in Lemma 4.1. 
By Plancherel's theorem and the lemma, we obtain \[f(z) =\int_{\mathbb{R}_{+}}\widehat{f}(\zeta)e^{iz\zeta}\,d\lambda(\zeta)\] \[=\int_{\mathbb{R}_{+}}\widehat{f}(\zeta)\sqrt{\rho(\zeta)}\frac{ e^{-y\zeta+ix\zeta}}{\sqrt{\rho(\zeta)}}\,d\lambda(\zeta)\] An application of Cauchy-Schwarz inequality leads to \[|f(z)|\leq\sqrt{C}\sqrt{\int_{0}^{\infty}e^{M(\zeta)-2y\zeta}\,d\zeta}. \tag{6.6}\] Now Proposition 6.1 applies to obtain the desired estimate. ### Construction of the compact set If \(M\) satisfies the equivalent conditions of Proposition 6.2, then the logarithm of the right-hand side in the inequality of Proposition 6.4, namely \[H(y):=\frac{\log(2C)}{2}+M_{*}(y)-\log y,\quad y\in(0,\delta), \tag{6.7}\] is, for small enough \(\delta>0\), positive and integrable over the interval \(y\in(0,\delta)\). To \(H\) and any \(A>0\) we will associate a Cantor-type compact set \(E\) contained in \([0,A]\) which contains no intervals and for which the integral \[\int_{[0,A]\setminus E}H(\operatorname{dist}(x,E))\,dx \tag{6.8}\] converges. Here \(\operatorname{dist}(x,E)\) denotes the distance from the point \(x\in[0,A]\) to the closed set \(E\). Let \(\mathcal{U}=\{\ell\}\) be the system of maximal disjoint open intervals, union of which constitutes the complement of \(E\) within \((0,A)\). The convergence of the integral above is easily seen to be equivalent to the convergence of the sum \[\sum_{\ell\in\mathcal{U}}\int_{0}^{|\ell|}H(x)\,dx \tag{6.9}\] where \(|\ell|\) denotes the length of the interval \(\ell\). To construct \(E\), we choose a sequence of numbers \(\{L_{n}\}_{n\geq 1}\) which is so quickly decreasing that \[\sum_{n=1}^{\infty}2^{n}\int_{0}^{L_{n}}H(x)\,dx<\infty \tag{6.10}\] and \[\sum_{n=1}^{\infty}2^{n}L_{n}\leq A/2. \tag{6.11}\] From such a sequence, we construct \(E\) as in the classical Cantor set construction. We set \(E_{0}=[0,A]\), and recursively define a compact set \(E_{n+1}\) contained in \(E_{n}\). The set \(E_{n+1}\) consists of \(2^{n+1}\) closed intervals in \([0,A]\) which we obtain by removing from the \(2^{n}\) closed intervals \(\{E_{n,i}\}_{i=1}^{2^{n}}\) constituting \(E_{n}\) an open interval of length \(L_{n}\) lying in the middle of \(E_{n,i}\). Thus splitting each \(E_{n,i}\) into two new closed intervals. The above summation condition (6.11) ensures that \(|E_{n}|>A/2\), and so \(E:=\cap_{n=0}^{\infty}E_{n}\) has positive Lebesgue measure which is not less than \(A/2\). The integral condition (6.8) holds by its equivalence to (6.9) and by (6.10). Clearly \(E\) contains no intervals. ### Collapse of the Fourier transforms We are now ready to prove Claim 3. We will do so by showing that \(\mathcal{H}(\mathbb{1}_{E},\rho)\) does not contain any non-zero tuple of the form \((0,k)\), \(k\in\mathcal{L}^{2}(\mathbb{R}_{+},\rho\,dx)\), where \(M\) is as in Remark 6.3, for instance, and where \(E\) is as in Section 6.3. We set \(\rho=e^{-M}\). **Lemma 6.5**.: _Let \(E\), \(M\)and \(\rho\) be chosen as above. Assume that \(\{f_{n}\}_{n}\) is a sequence of functions in \(\mathcal{H}^{1}(\mathbb{R})\cap\mathcal{H}^{2}(\mathbb{R})\), each of which has an analytic extension to a half-space larger than \(\mathbb{H}\). 
If \(\lim_{n\to\infty}f_{n}=0\) in the norm of \(\mathcal{L}^{2}(E,\mathbb{1}_{E}\,dx)\) and the sequence of Fourier transforms \(\widehat{f_{n}}\) satisfies_ \[\sup_{n}\int_{\mathbb{R}_{+}}|\widehat{f_{n}}|^{2}\rho\,d\zeta<\infty,\] _then we have_ \[\lim_{n\to\infty}f_{n}(z)=0,\quad z\in\mathbb{H}.\] _The convergence is uniform on compact subsets of \(\mathbb{H}\)._ In the proof of Lemma 6.5 given below we will use a technique of Khrushchev from [8] for estimating harmonic measures on certain domains. For general background on the theory of harmonic measures, see [5] or [14]. Let \(\mathcal{U}=\{\ell\}\) be the collection of finite open intervals complementary to \(E\), and let \[T_{\ell}:=\{x+iy\in\mathbb{H}:x\in\ell,y\leq\operatorname{dist}(x,E)\}\] be a triangle with base at \(\ell\). We define \(\Omega=R\setminus\big{(}\cup_{\ell\in\mathcal{U}}T_{\ell}\big{)}\) to be the bounded domain in \(\mathbb{H}\) which consists of a rectangle \(R\), with a base being the shortest closed interval containing the set \(E\), with the triangles \(T_{\ell}\) removed from \(R\). See Figure 1. An observation that Khrushchev made regarding this type of domains is the following property of their harmonic measure. **Lemma 6.6**.: _Let \(E\), \(M\) and \(\rho\) be chosen as above, and let \(H\) be given by (6.7). Let \(\Omega\) be the domain described above. If \(\omega_{z}\) is the harmonic measure of the domain \(\Omega\) at any point \(z\in\Omega\), then_ \[\int_{\partial\Omega\cap\mathbb{H}}H(\operatorname{Im}t)\,d\omega_{z}(t)<\infty.\] We emphasize that \(\partial\Omega\cap\mathbb{H}\) equals \(\partial\Omega\setminus\mathbb{R}\). Proof.: The proof is very similar to the one given by Khrushchev in [8], only minor details differ. If \(\ell=(a,b)\) is one of the finite intervals complementary to \(E\), and \(T_{\ell}\) is the triangle standing on top of it, then we denote by \(A(s)\) the part of the boundary of \(T_{\ell}\) which lies above the interval \((a,a+s)\subset\mathbb{R}\), \(0<s<|\ell|/2\). If \(u\) is the harmonic measure in \(\mathbb{H}\) of the interval \((a,a+s)\), then it is easy to see from the explicit formula \[u(x+iy)=\frac{1}{\pi}\int_{a}^{a+s}\frac{y}{(x-t)^{2}+y^{2}}\,dt,\quad x+iy\in \mathbb{H}\] that \(u(x+iy)\geq\frac{1}{2\pi}\) for \(x+iy\in A(s)\). Since \(u\) is harmonic and continuous in the closure of \(\Omega\) except possibly at the two points \(a\) and \(a+s\), the reproducing formula \(\int_{\partial\Omega}u\,d\omega_{z}=u(z)\) holds, and so \[\omega_{z}(A(s))=\int_{A(s)}d\omega_{z}\leq\int_{\partial\Omega}2\pi u\,d \omega_{z}=2\pi u(z)\leq\frac{2s}{y},\quad z=x+iy\in\mathbb{H}.\] We have used the positivity of \(u\) and \(\omega_{z}\) in the first inequality, and the second one is an easy consequence of the explicit formula for \(u\) above. Set \(A_{1}:=A(|\ell|/2)\), which is the left side of the boundary of \(T_{\ell}\), and further set \(A_{n}:=A(|\ell|/2^{n})\), \(n\geq 1\). Then, since \(H\) is decreasing, \[\int_{\partial T_{\ell}\cap\mathbb{H}}H(\operatorname{Im}t)d \omega_{z}(t) =2\int_{A_{1}}H(\operatorname{Im}t)\,d\omega_{z}(t)\] \[\leq 2\sum_{n=1}^{\infty}\int_{A_{n}-A_{n+1}}H(|\ell|/2^{n+1})\, d\omega_{z}(t)\] \[\leq 2\sum_{n=1}^{\infty}H(|\ell|/2^{n+1})\frac{2|\ell|}{2^{n}y}\] \[\leq\frac{16}{y}\int_{0}^{|\ell|}H(t)\,dt.\] Now the desired claim follows from (6.9). Proof of Lemma 6.5.: Note that it is sufficient to establish the claim that the sequence \(\{f_{n}\}_{n}\) contains a subsequence which converges pointwise in \(\Omega\) to \(0\). 
Indeed, the proof of Proposition 6.4 shows that our assumption on the Fourier transforms \(\widehat{f_{n}}\) implies pointwise boundedness of the sequence \(\{f_{n}\}_{n}\) on each half-plane \(\{x+iy\in\mathbb{H}:y>\delta\}\), \(\delta>0\). Hence the sequence \(\{f_{n}\}_{n}\) forms a normal family on \(\mathbb{H}\). If we establish the above claim, then every subsequence of \(\{f_{n}\}_{n}\) contains a further subsequence convergent to \(0\) in \(\mathbb{H}\). This is equivalent to convergence of the entire initial sequence \(\{f_{n}\}_{n}\) to \(0\). Fix \(z\in\Omega\). Since \(\log|f_{n}|\) is a subharmonic function and \(\max(-N,\log|f_{n}|)\) is a bounded continuous function on \(\partial\Omega\), we obtain by the maximum principle for subharmonic functions that \[\log|f_{n}(z)|\leq\int_{\partial\Omega}\max(-N,\log|f_{n}(t)|)\,d\omega_{z}(t).\] Figure 1. The domain \(\Omega\) in the proof of Lemma 6.5. There is a triangular tent between \(\Omega\) and each complementary interval of \(E\), and \(E\) lives on \(\mathbb{R}\) in between the tents. We let \(N\to+\infty\) and, by the monotone convergence theorem, obtain \[\log|f_{n}(z)| \leq\int_{\partial\Omega}\log|f_{n}(t)|\,d\omega_{z}(t)\] \[=\int_{\partial\Omega\cap\mathbb{H}}\log|f_{n}(t)|\,d\omega_{z}(t)+\int_{E}\log|f_{n}(t)|\,d\omega_{z}(t).\] The assumption in the lemma, Proposition 6.4, the definition of \(H\) in (6.7) and Lemma 6.6 show that \[\int_{\partial\Omega\cap\mathbb{H}}\log|f_{n}(t)|\,d\omega_{z}(t)\leq\int_{\partial\Omega\cap\mathbb{H}}H(\operatorname{Im}t)d\omega_{z}(t)<A,\] where \(A\) is some positive constant which is independent of \(n\). By Egorov's theorem, we may pass to a subsequence (the same subsequence for each \(z\in\Omega\)) and assume that \(f_{n}\) converge uniformly to \(0\) on some subset \(E^{\prime}\) of \(E\) which is of positive Lebesgue measure. On \(E\setminus E^{\prime}\) we have the estimate \[\int_{E\setminus E^{\prime}}\log|f_{n}(t)|\,d\omega_{z}(t)\leq\int_{E\setminus E^{\prime}}|f_{n}(t)|^{2}\,d\omega_{z}(t)\leq\frac{1}{\pi\operatorname{Im}z}\int_{E\setminus E^{\prime}}|f_{n}(t)|^{2}\,dx.\] The last inequality follows from monotonicity of the harmonic measure with respect to domains (see [14, Corollary 4.3.9]), which applied to \(\Omega\subset\mathbb{H}\) leads to the inequality \[\omega_{z}(B)\leq\int_{B}\mathcal{P}(t,z)\,dt\leq\frac{|B|}{\pi\operatorname{Im}z}\] for any Borel subset \(B\) of \(E\). Thus \(d\omega_{z}\leq\frac{dx}{\pi\operatorname{Im}z}\), which establishes the integral inequality above. By the convergence of \(f_{n}\) to \(0\) in the norm of \(\mathcal{L}^{2}(E,\mathbb{1}_{E}\,dx)\), the integrals \(\int_{E\setminus E^{\prime}}|f_{n}|^{2}dx\) are uniformly bounded by some constant \(C>0\), and so the above inequalities give \[\log|f_{n}(z)|\leq A+C+\int_{E^{\prime}}\log|f_{n}(t)|d\omega_{z}(t).\] But \(\omega_{z}(E^{\prime})>0\), since the harmonic measure and the arc-length measure on the rectifiable curve \(\partial\Omega\) are mutually absolutely continuous (see [5, Theorem 1.2 of Chapter VI]), so since \(|f_{n}|\) converge uniformly to \(0\) on \(E^{\prime}\), the integral on the right-hand side above converges to \(-\infty\) as \(n\to\infty\). Thus \(|f_{n}(z)|\to 0\), and since \(z\in\Omega\) was arbitrary, the desired claim follows. The following proposition implies Claim 3 of Section 4, and so it also implies Theorem \(B\). **Proposition 6.7**.: _Let \(E\), \(M\), and \(\rho\) be chosen as above. 
If a tuple of the form \((0,k)\in\mathcal{L}^{2}(E,\mathbb{1}_{E}\,dx)\oplus\mathcal{L}^{2}(\mathbb{R}_ {+},\rho)\) is contained in the Hardy subspace \(\mathcal{H}(\mathbb{1}_{E},\rho)\), then \(k\equiv 0\)._ Proof.: By the containment \((0,k)\in\mathcal{H}(\mathbb{1}_{E},\rho)\) and Proposition 3.1, there exists a sequence \(\{f_{n}\}_{n}\) of functions in \(\mathcal{H}^{1}(\mathbb{R})\cap\mathcal{H}^{2}(\mathbb{R})\) extending analytically across \(\mathbb{R}\) and for which the tuples \(Jf_{n}=(f_{n},\widehat{f}_{n})\) converge in the norm of the product space \(\mathcal{L}^{2}(\mathbb{R},\mathbb{1}_{E}\,dx)\oplus\mathcal{L}^{2}(\mathbb{ R}_{+},\rho\,dx)\) to \((0,k)\). By passing to a subsequence, we may assume that the Fourier transforms \(\widehat{f}_{n}\) converge pointwise almost everywhere on \(\mathbb{R}_{+}\) to \(k\). One might attempt to prove the proposition by using the formula in Proposition 2.3, and observing that \[k(\zeta)=\lim_{n\to\infty}\widehat{f_{n}}(\zeta)=\lim_{n\to\infty}e^{y\zeta} \int_{\mathbb{R}}f_{n}(x+iy)e^{-ix\zeta}\,d\lambda(x)\] holds for almost every \(\zeta\in\mathbb{R}_{+}\) and for any \(y>0\). By Lemma 6.5 the integrand converges pointwise to \(0\). However, an appeal to the usual convergence theorems for integrals is not justified, and we have to proceed more carefully. Note that since \(k\in\mathcal{L}^{2}(\mathbb{R}_{+},\rho\,dx)\) and \(\rho\) is bounded from below on compact subsets of \(\mathbb{R}_{+}\), in fact \(k\) is locally integrable on \(\mathbb{R}\). It follows that we can interpret \(k\) as a distribution on \(\mathbb{R}_{+}\). Thus to show that \(k\equiv 0\), it suffices to establish that \(\int_{\mathbb{R}_{+}}k\phi\,d\lambda=0\) for every smooth function \(\phi\) which is compactly supported in \(\mathbb{R}_{+}\). Let \(\phi\) be as above. Since we have that \(\widehat{f_{n}}\to k\) in \(\mathcal{L}^{2}(\mathbb{R}_{+},\rho\,dx)\) and \(\rho\) is bounded from below on compact subsets of \(\mathbb{R}_{+}\), we obtain \[\int_{\mathbb{R}_{+}}k\phi\,d\lambda=\lim_{n\to\infty}\int_{\mathbb{R}_{+}} \widehat{f_{n}}\phi\,d\lambda. \tag{6.12}\] Fix some small \(y>0\). By Proposition 2.3, we get that \[\widehat{f_{n}}(\zeta)=e^{y\zeta}\int_{\mathbb{R}}f_{n}(x+iy)e^{-ix\zeta}\,d \lambda(x).\] Plugging this formula into (6.12) and noting that the use of Fubini's theorem is permitted, we obtain \[\int_{\mathbb{R}_{+}}k\phi\,d\lambda =\lim_{n\to\infty}\int_{\mathbb{R}}\Big{(}\int_{\mathbb{R}_{+}} \phi(\zeta)e^{y\zeta}e^{-ix\zeta}d\lambda(\zeta)\Big{)}f_{n}(x+iy)\,d\lambda(x)\] \[=\lim_{n\to\infty}\int_{\mathbb{R}}D(x)f_{n}(x+iy)\,d\lambda(x) \tag{6.13}\] where \[D(x):=\int_{\mathbb{R}_{+}}\phi(\zeta)e^{y\zeta}e^{-ix\zeta}d\lambda(\zeta)\] is the Fourier transform of the compactly supported smooth function \(\zeta\mapsto\phi(\zeta)e^{y\zeta}\). As such, \(D\) is certainly integrable on \(\mathbb{R}\). By Lemma 6.5, we have \(\lim_{n\to\infty}f_{n}(x+iy)=0\), and \[\sup_{n}\,\sup_{x\in\mathbb{R}}|f_{n}(x+iy)|<\infty\] holds by Proposition 6.4. Therefore, this time, the dominated convergence theorem applies to (6.13), and we conclude that \[\int_{\mathbb{R}_{+}}k\phi\,d\lambda=0.\] Thus \(k\) is the zero distribution on \(\mathbb{R}_{+}\), and therefore \(k\equiv 0\). ## 7. Clumping for tempered distributions In this last section, we indicate how one can derive Theorem \(C\) from Theorem \(A\). We will skip most of the details of the necessary computations, which are in any case standard. 
Let \(f\) be a function which satisfies \[\int_{\mathbb{R}}\frac{|f(x)|}{(1+|x|)^{n}}\,dx<\infty \tag{7.1}\] for some positive integer \(n\). Then \(f\) can be interpreted as a tempered distribution on \(\mathbb{R}\) in the usual way, and so \(f\) has a distributional Fourier transform \(\widehat{f}\). Our hypothesis is that \(\widehat{f}\) is an integrable function on some half-axis \([\zeta_{0},\infty)\) and that \[\rho_{\widehat{f}}(\zeta)=\mathcal{O}\big{(}e^{-c\sqrt{\zeta}}\big{)},\quad\zeta>\zeta_{0}. \tag{7.2}\] We may assume that \(\zeta_{0}=0\). In order to prove Theorem \(C\), we will construct an appropriate multiplier \(m:\mathbb{R}\to\mathbb{C}\) with the property that \(mf\) is a function to which Theorem \(A\) applies. In particular, the following properties will be satisfied by \(m\): 1. \(m(x)\) is a bounded function of \(x\in\mathbb{R}\) which is non-zero for almost every \(x\in\mathbb{R}\), 2. \(mf\in\mathcal{L}^{2}(\mathbb{R},dx)\), 3. \(\rho_{\widehat{mf}}(\zeta)=\mathcal{O}\big{(}e^{-c\sqrt{\zeta}}\big{)}\) for some \(c>0\) and \(\zeta>0\), 4. \(\int_{I}\log|m|\,dx>-\infty\) for every interval \(I\subset\mathbb{R}\). If we construct such a multiplier \(m\), then \((ii)\), \((iii)\) and Theorem \(A\) imply that \(\log|mf|\) is locally integrable on an open set \(U\) which coincides, up to a set of measure zero, with \(\{x\in\mathbb{R}:|f(x)m(x)|>0\}\). By \((i)\), \(U\) differs from \(\{x\in\mathbb{R}:|f(x)|>0\}\) at most by a set of measure zero. Moreover, the formula \(\log|f|=\log|fm|-\log|m|\) and \((iv)\) show that \(\log|f|\) is locally integrable on \(U\). This proves Theorem \(C\), as a consequence of the existence of a multiplier satisfying the above conditions. We now show how to construct such a multiplier. We set \[\Phi(x):=\frac{(-i)^{n+1}n!}{\sqrt{2\pi}(x-i)^{n+1}}\] and let \(h\) be defined by equation (2.7), with \[\log|h(x)|=\log\min(1,|f(x)|^{-1}),\quad x\in\mathbb{R}.\] The condition (7.1) ensures that \(h\) is well-defined, and it is a member of \(\mathcal{H}^{\infty}(\mathbb{R})\). We put \[h_{*}(x):=\frac{h(x)}{(x+i)^{2}},\quad x\in\mathbb{R},\] and finally \[m(x):=\Phi(x)\overline{h_{*}(x)},\quad x\in\mathbb{R}.\] Clearly, \(m\) is bounded. Since \(|f|\) is locally integrable on \(\mathbb{R}\), the set \(\{x\in\mathbb{R}:|f(x)|=\infty\}\) has measure zero. Consequently, \(|h_{*}|>0\) almost everywhere in \(\mathbb{R}\), and so the desired property \((i)\) of \(m\) holds. The choice of \(h_{*}\) and \(\Phi\) ensures that \(mf\) is both bounded and integrable on \(\mathbb{R}\), implying \(mf\in\mathcal{L}^{2}(\mathbb{R},dx)\), so that \((ii)\) above holds. Property \((iv)\) holds by Proposition 2.2, since \(h_{*}\in\mathcal{H}^{1}(\mathbb{R})\). So the critical property left to be verified is the spectral estimate of \(mf\) in \((iii)\) above. **Lemma 7.1**.: _With notation and definitions as above, the Fourier transform \(\widehat{mf}\) satisfies_ \[\rho_{\widehat{mf}}(\zeta)=\mathcal{O}\big{(}e^{-c\sqrt{\zeta}}\big{)},\quad\zeta>0\] _for some \(c>0\)._ Proof.: A standard argument shows that \(\widehat{f\Phi}\) must coincide on \(\mathbb{R}_{+}\) with the convolution \(\widehat{f}*\widehat{\Phi}\) (which, note, is a function on \(\mathbb{R}_{+}\)). Indeed, let \(s\) be a Schwartz function which has a Fourier transform \(\widehat{s}\) supported on some compact interval \([a,b]\), \(0<a<b\). Note that the function \(\overline{\Phi}s\) is also of Schwartz class. 
It follows immediately from the integral definition of the Fourier transform (1.1) that \(\widehat{\overline{\Phi}s}=\widehat{\overline{\Phi}}*\widehat{s}\), and that \(\widehat{\overline{\Phi}s}\) is supported on the interval \([a,\infty)\). Hence, by the definition of the distributional Fourier transform, we obtain \[\int_{\mathbb{R}_{+}}\widehat{f\Phi}\,\overline{\widehat{s}}\,d\lambda=\int_{\mathbb{R}}f\Phi\overline{s}\,d\lambda=\int_{\mathbb{R}}f\overline{\overline{\Phi}s}\,d\lambda=\int_{\mathbb{R}_{+}}\widehat{f}\,\overline{\big{(}\widehat{\overline{\Phi}}*\widehat{s}\big{)}}\,d\lambda.\] Fubini's theorem and the computational rule \(\widehat{\overline{\Phi}}(x)=\overline{\widehat{\Phi}(-x)}\) show that the last integral above equals \[\int_{\mathbb{R}_{+}}(\widehat{f}*\widehat{\Phi})\,\overline{\widehat{s}}\,d\lambda,\] proving our claim about the structure of \(\widehat{f\Phi}\) on \(\mathbb{R}_{+}\). Hence \(\widehat{f\Phi}\) is a bounded continuous function which coincides with \[\widehat{f\Phi}(\zeta)=\int_{\mathbb{R}}\widehat{f}(x)\widehat{\Phi}(\zeta-x)\,d\lambda(x)\] for \(\zeta>0\). By a computation similar to the one in the proof of Lemma 4.1 one sees that \(\Phi\) has the Fourier transform \[\widehat{\Phi}(\zeta)=|\zeta|^{n}e^{\zeta}\mathbb{1}_{\mathbb{R}_{-}}(\zeta).\] For \(\zeta>0\), we estimate \[\big{|}\widehat{f\Phi}(\zeta)\big{|} \leq\int_{\mathbb{R}}|\widehat{f}(x)||\zeta-x|^{n}e^{\zeta-x}\mathbb{1}_{\mathbb{R}_{-}}(\zeta-x)\,d\lambda(x)\] \[=\int_{\zeta}^{\infty}|\widehat{f}(x)||\zeta-x|^{n}e^{\zeta-x}\,d\lambda(x)\] \[=\sum_{k=0}^{\infty}\int_{\zeta 2^{k}}^{\zeta 2^{k+1}}|\widehat{f}(x)||\zeta-x|^{n}e^{\zeta-x}\,d\lambda(x).\] We now make the rather rough estimate \[|\zeta-x|^{n}e^{\zeta-x}\leq(\zeta 2^{k+1})^{n},\quad x\in[\zeta 2^{k},\zeta 2^{k+1}],\] which gives \[\left|\widehat{f\Phi}(\zeta)\right|\leq\sum_{k=0}^{\infty}\rho_{\widehat{f}}(\zeta 2^{k})(\zeta 2^{k+1})^{n}\leq A\sum_{k=0}^{\infty}e^{-c\sqrt{\zeta}\sqrt{2}^{\,k}}(\zeta 2^{k+1})^{n}\] for some \(A>0\). The above sum can be readily estimated to be of order \(\mathcal{O}\big{(}e^{-d\sqrt{\zeta}}\big{)}\) for some \(d>0\) slightly smaller than \(c\). Since \(fm\) is the product of the two integrable functions \(f\Phi\) and \(\overline{h_{*}}\), we have \[\widehat{fm}=\widehat{f\Phi}\ast\widehat{\overline{h_{*}}},\] where \(\widehat{\overline{h_{*}}}(\zeta)=\overline{\widehat{h_{*}}(-\zeta)}\) is non-zero only for \(\zeta<0\). Note that \(h_{*}\) is integrable on \(\mathbb{R}\), and so \(\widehat{h_{*}}\) is bounded. We obtain \[|\widehat{fm}(\zeta)| \leq\int_{\mathbb{R}}\left|\widehat{f\Phi}(x)\overline{\widehat{h_{*}}(x-\zeta)}\right|d\lambda(x)\] \[\leq B\int_{\zeta}^{\infty}e^{-d\sqrt{x}}\,d\lambda(x)=\mathcal{O}\big{(}e^{-d^{\prime}\sqrt{\zeta}}\big{)}\] for some positive constant \(B\) and any \(0<d^{\prime}<d\). The desired estimate on \(\rho_{\widehat{fm}}\) follows readily from this estimate. By the above discussion, we have proved Theorem \(C\).
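As a quick numerical illustration of the dyadic-sum estimate above (the values of \(c\), \(n\), and \(d\) below are assumptions chosen only for illustration), the ratio of the sum to \(e^{-d\sqrt{\zeta}}\) can be tabulated and seen to remain bounded:

```python
import numpy as np

# Sketch: tabulate S(zeta) = sum_k exp(-c*sqrt(zeta*2**k)) * (zeta*2**(k+1))**n
# and check that S(zeta) = O(exp(-d*sqrt(zeta))) for d somewhat smaller than c.
c, n, d = 1.0, 1, 0.5            # assumed toy values, not from the paper
ks = np.arange(60)

def S(zeta: float) -> float:
    terms = np.exp(-c * np.sqrt(zeta * 2.0 ** ks)) * (zeta * 2.0 ** (ks + 1)) ** n
    return float(terms.sum())

for zeta in (1.0, 16.0, 100.0, 1000.0):
    # the ratio S(zeta) * exp(d*sqrt(zeta)) peaks at moderate zeta, then decays
    print(zeta, S(zeta) * np.exp(d * np.sqrt(zeta)))
```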
2309.08767
A Control Approach for Nonlinear Stochastic State Uncertain Systems with Probabilistic Safety Guarantees
This paper presents an algorithm to apply nonlinear control design approaches in the case of stochastic systems with partial state observation. Deterministic nonlinear control approaches are formulated under the assumption of full state access and, often, relative degree one. We propose a control design approach that first generates a control policy for nonlinear deterministic models with full state observation. The resulting control policy is then used to build an importance-like probability distribution over the space of control sequences which are to be evaluated for the true stochastic and state-uncertain dynamics. This distribution serves in the sampling step within a random search control optimization procedure, to focus the exploration effort on certain regions of the control space. The sampled control sequences are assigned costs determined by a prescribed finite-horizon performance and safety measure, which is based on the stochastic dynamics. This sampling algorithm is parallelizable and shown to have computational complexity indifferent to the state dimension, and to be able to guarantee safety over the prescribed prediction horizon. A numerical simulation is provided to test the applicability and effectiveness of the presented approach and compare it to a certainty equivalence controller.
Mohammad S. Ramadan, Mohammad Alsuwaidan, Ahmed Atallah, Sylvia Herbert
2023-09-15T21:16:39Z
http://arxiv.org/abs/2309.08767v1
A Control Approach for Nonlinear Stochastic State Uncertain Systems with Probabilistic Safety Guarantees ###### Abstract This paper presents an algorithm to apply nonlinear control design approaches in the case of stochastic systems with partial state observation. Deterministic nonlinear control approaches are formulated under the assumption of full state access and, often, relative degree one. We propose a control design approach that first generates a control policy for nonlinear deterministic models with full state observation. The resulting control policy is then used to build an importance-like probability distribution over the space of control sequences which are to be evaluated for the true stochastic and state-uncertain dynamics. This distribution serves in the sampling step within a random search control optimization procedure, to focus the exploration effort on certain regions of the control space. The sampled control sequences are assigned costs determined by a prescribed finite-horizon performance and safety measure, which is based on the stochastic dynamics. This sampling algorithm is parallelizable and shown to have computational complexity indifferent to the state dimension, and to be able to guarantee safety over the prescribed prediction horizon. A numerical simulation is provided to test the applicability and effectiveness of the presented approach and compare it to a certainty equivalence controller. ## I Introduction The challenge with output feedback control in stochastic systems is the interaction between control and estimation. Stochastic control deals with this interaction by introducing the concept of the information state [12, 19], which addresses the propagation of uncertainty together with the state estimate, instead of regarding this estimate as the true state value as in certainty equivalence control. The introduction of the information state results in a significant increase in the complexity and dimensionality of the control problem and leads to algorithms which are prohibitive computationally. On the other hand, to maintain computational tractability, nonlinear control design approaches are typically formulated for deterministic and full-state feedback systems [11]. This difference in the problem formulation makes it difficult to use nonlinear design approaches in stochastic settings. In this paper, we propose a random search algorithm to solve the stochastic nonlinear optimal control problem. For computational efficiency, we first employ a nonlinear controller designed for a surrogate deterministic dynamic model. The nonlinear controller and the surrogate dynamics are used as a constructor of an importance-like distribution which reduces the searching space and guides the effort of sampling finite-horizon control sequences. These sequences are then evaluated with respect to a prescribed finite-horizon performance and safety measure when applied to the true stochastic dynamics. The first control action, of the sequence with the highest performance and safety measure is applied at the current time in a receding-horizon fashion. Out of the many nonlinear design approaches to test the proposed algorithm, this paper uses control barrier functions (CBF) to generate the deterministic control policy. CBFs are considered the "dual" of control Lyapunov functions, but they enforce safety rather than stability. Existing literature about CBFs has limited answers to system uncertainties or output feedback, two characteristics that often exist in real-world systems. 
A method addressing stochastic systems within the CBF framework has been developed in [7], but for the full state-feedback case. For the case of partial state observation, or output feedback, CBF-based approaches are typically limited, by their assumptions, to linear systems [20]. Furthermore, time-discretization, and possibly state- and input-space discretization, are employed while relying on guarantees achieved in continuous settings. This weakens the validity of the inherited guarantees and leads to conservativeness, as the safe sets must be adjusted by additional safety margins [3]. The key idea of this paper is to use the policy generated by the deterministic CBF controller to guide a random search optimization for the stochastic nonlinear control problem. This bears resemblance to importance sampling, an important technique in the field of Monte Carlo integration [13]. This technique approximates an integral by sampling from the importance distribution: a chosen probability distribution that is ideally concentrated in regions of high contribution to the integral at hand. The analogy between our proposed algorithm and importance sampling is in constructing a distribution in the control space that is concentrated in regions likely to be of low cost and guaranteed safety. This is similar to sampling methods which rely on an information-theoretic construction of the importance distribution [21] for full state-feedback systems. However, our approach does not require full state-feedback. Moreover, in contrast to sampling approaches based on Monte Carlo integration, our approach does not rely on the laws of large numbers; a finite number of samples/scenarios is enough to guarantee safety with high probability. ## II Problem Formulation Consider the following discrete-time state-space model \[x_{k+1} =f(x_{k},u_{k},w_{k}), \tag{1a}\] \[y_{k} =g(x_{k},v_{k}), \tag{1b}\] where the state is \(x_{k}\in\mathbb{R}^{r_{x}}\), the control input is \(u_{k}\in\mathbb{R}^{r_{u}}\), and \(w_{k}\) and \(v_{k}\) are exogenous disturbances. The initial state \(x_{0}\) has distribution \(p_{0}\), represented by a set of \(L\) equiprobable particles \(\Xi=\{x_{0|0,j}\mid j=1,\ldots,L\}\). The stochastic disturbance processes \(\{w_{k}\}\) and \(\{v_{k}\}\) are white, possess known densities \(\mathcal{W}\) and \(\mathcal{V}\), and are independent of each other and of \(x_{0}\). We assume we are given the cost function \[J(p_{0},\{u_{k}\}_{k=0}^{N-1})=\mathbb{E}\left(\gamma^{N}\ell_{N}(x_{N})+\sum_{k=0}^{N-1}\gamma^{k}\ell_{k}(x_{k},u_{k})\right);\] the objective is then to find the control sequence \(\{u_{k}\}_{k=0}^{N-1}\) that solves the following optimization problem: \[\min_{\{u_{k}\}_{k=0}^{N-1}}J(p_{0},\{u_{k}\}_{k=0}^{N-1}), \tag{2}\] subject to, for all \(k\), \[\mathbb{P}(x_{k}\in\mathcal{C})\geq 1-\epsilon,\quad u_{k}\in\mathbb{U},\] where: \(\gamma\in(0,1]\) is the discount factor; \(N\) is the prediction horizon; the expectation \(\mathbb{E}\) is with respect to a probability space of elements \((x_{0},\{w_{k}\}_{k=0}^{N-1})\), equipped with the suitable product Borel \(\sigma\)-field and the probability measure \(\mathbb{P}\); the \(\ell_{k}\) are the stage cost functions, with \(\mathbb{E}\left|\ell_{k}(x_{k},u_{k})\right|<\infty\)¹ for all \(u_{k}\); \(\epsilon\in[0,1)\) is the acceptable constraint-violation probability and is typically small, \(\epsilon\approx 0\); and the set \(\mathcal{C}\) can be seen as the safe set, to be maintained with probability no less than \(1-\epsilon\). 
Footnote 1: Possible assumptions to satisfy this: the \(\ell_{k}\) are bounded; or, for each \(u_{k}\), \(\ell_{k}(\cdot,u_{k})\) is positive and bounded by a quadratic function in \(x_{k}\), the covariances of \(w_{k}\) and \(x_{0}\) are finite, and \(f\) is uniformly Lipschitz. In the case of partial state observation as in (1b), where in general \(y_{k}\neq x_{k}\), the concept of the information state appears [12], which is the conditional state density. Designing a control law over the space of all possible densities was concluded very early in the control literature to be prohibitive [19]. One of the common approaches is to ignore the state uncertainty and use an estimate of the state as the true state in a feedback control law. This approach is valid only in simple examples like linear-quadratic-Gaussian (LQG) control, where the separation principle holds [2]. In this paper, our approach to solving (2) takes into account the state uncertainty and the disturbances, and provides probabilistic guarantees for the original system (1). ## III Prerequisites Our methodology consists of two major components: a CBF controller (for the surrogate deterministic dynamics) and a particle filter to track the conditional state density. This section gives a brief introduction to each, before we introduce the algorithm in the next section. ### _Control Barrier Functions_ Although the dynamics used in the CBF and safety literature are mostly in continuous time, discrete-time dynamics can be obtained by some integrator or simply a zero-order hold. Thus, we first assume deterministic dynamics and full state accessibility to derive a CBF controller through a surrogate continuous-time formulation. Then we apply a zero-order hold, accounting for partial observations and exogenous disturbances, to recover the formulation (1). The following continuous-time nonlinear dynamic system conforms to the standard assumptions concerning the existence and uniqueness of its solutions: \[\dot{x}=F(x,u), \tag{3}\] where \(x\in\mathbb{R}^{r_{x}}\) is the state and \(u\in\mathbb{U}\subset\mathbb{R}^{r_{u}}\) is the control input. This system is to be made safe, in the sense of remaining in a control-invariant set \(\mathcal{C}\subset\mathbb{R}^{r_{x}}\) that is a subset of the state constraints. Control barrier functions are real-valued functions over the state space that encode both the state constraints and the long-term effects of the dynamic system. A CBF has two key properties relevant to safety: its value at a given state provides a measure of safety, and its gradient informs the set of control actions that will preserve safety [1, 6]. Let \(h(x)\) be a smooth real-valued function such that \(\mathcal{C}\) is the superlevel set of \(h\), that is, \(\mathcal{C}=\{x\in\mathbb{R}^{r_{x}}\mid h(x)\geq 0\}\), and let \(\alpha:\mathbb{R}\rightarrow\mathbb{R}\) be a class \(\mathcal{K}\) function [11, p. 144]. Then, the function \(h\) is called a CBF if \[\exists u\in\mathbb{U}\text{ such that }\frac{\partial h}{\partial x}\cdot F(x,u)=\dot{h}\geq-\alpha(h),\quad\forall x. \tag{4}\] The above condition enforces that the system remains within the control-invariant set \(\mathcal{C}\), and therefore preserves non-negative safety values. The CBF is typically paired with a background performance controller that is designed according to a different, safety-blind objective [4, 6]. 
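As a concrete illustration of the condition (4) and the filtering idea (which is formalized as the quadratic program (5) below), consider a minimal sketch: the scalar system, barrier function, and class-\(\mathcal{K}\) gain here are assumptions chosen for illustration, not the systems studied later in the paper. For a scalar control-affine system with unconstrained input set, the minimal modification of the nominal input reduces to a one-sided clamp:

```python
# Minimal sketch (assumed example): scalar system xdot = u, barrier
# h(x) = 1 - x**2 (safe set C = [-1, 1]), alpha(h) = k*h, and U = R.
# Condition (4) becomes (dh/dx)*u = -2*x*u >= -k*h(x), so the filter
# (the QP (5) below) reduces to a closed-form clamp of the nominal input.
def safe_filter(x: float, u_nom: float, k: float = 1.0) -> float:
    h = 1.0 - x ** 2          # barrier value: current safety margin
    a = -2.0 * x              # dh/dx at x
    b = -k * h                # linear constraint: a * u >= b
    if a > 0.0:
        return max(u_nom, b / a)   # minimally increase u_nom if needed
    if a < 0.0:
        return min(u_nom, b / a)   # minimally decrease u_nom if needed
    return u_nom              # x = 0: h = 1 > 0, so (4) holds for any u

# Near the boundary x = 0.99, an outward nominal push is scaled back:
print(safe_filter(0.99, u_nom=1.0))   # ~0.01, keeping h nonnegative
```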
Online, the CBF acts as a safety filter that minimally modifies the performance control in order to maintain safety: \[u^{\star}(x)=\operatorname*{arg\,min}_{u\in\mathbb{U}}\|u-u_{0}\|^{2},\quad\text{subject to }u\text{ satisfying the inequality in (4)}. \tag{5}\] ### _Particle Filter_ The particle filter tracks the conditional state density by a set of particles, updated recursively in two steps: 1. The time update step, in which the set of particles \(\{x_{k|k,j}\}_{j=1}^{L}\) is propagated through the state equation (1a), with a chosen \(u_{k}\), using \(L\) realizations of the disturbance \(w_{k,j}\sim\mathcal{W}\). The resulting particles are denoted \(\{x_{k+1|k,j}\}_{j=1}^{L}\). 2. The measurement update step, in which, once \(y_{k+1}\) becomes available, the importance weights \(\{\Omega_{k+1,j}\}_{j=1}^{L}\) are computed by: \[\Omega_{k+1,j}=\frac{p(y_{k+1}\mid x_{k+1|k,j})}{\sum_{j=1}^{L}p(y_{k+1}\mid x_{k+1|k,j})},\quad j=1,\ldots,L.\] In this paper, we implement a resampling step after Step (2), at every time step. The resulting equiprobable particles are denoted \(\{x_{k+1|k+1,j}\}_{j=1}^{L}\). The conditional mean at any time \(k\) can be found by evaluating the sample average \[\mathbb{E}\left[x_{k}\mid y_{0},\ldots,y_{k}\right]\approx\frac{1}{L}\sum_{j=1}^{L}x_{k|k,j}.\] ## IV Control importance-like distribution for random search Our random search approach to solving (2) is influenced by importance sampling in the Monte Carlo literature, which prioritizes sampling regions of the integration space with higher contribution. We construct an importance-like distribution \(\mathcal{U}\) of control sequences \(\{u_{k}\}_{k=0}^{N-1}\) that is ideally concentrated at regions with high confidence to contain feasible, and possibly optimal, solutions of (2). Sampling the distribution \(\mathcal{U}\) is done through a simulation-based algorithm. We first simulate the sequence \(\{x_{k}^{\prime}\}_{k}\), characterized by an initial state \(x_{0}^{\prime}\sim p_{0}\) and a CBF controller \(u_{k}^{*}=\pi(x_{k}^{\prime})\), obtained from (5) and applied in feedback in \[x_{k+1}^{\prime}=f(x_{k}^{\prime},\pi(x_{k}^{\prime}),w_{k}^{\prime}),\quad x_{0}^{\prime}\sim p_{0},\quad w_{k}^{\prime}\sim\mathcal{W}, \tag{6}\] where the disturbance \(\{w_{k}^{\prime}\}_{k}\) is an i.i.d. realization of \(\{w_{k}\}_{k}\). This sampling procedure defines a distribution of the state sequence \(\{x_{k}^{\prime}\}_{k=0}^{N-1}\), which in turn defines a distribution of the control sequence \(\{u_{k}^{*}\}_{k=0}^{N-1}=\{\pi(x_{k}^{\prime})\}_{k=0}^{N-1}\). We call this the importance distribution \(\mathcal{U}\). Notice that this distribution is a function of: the original dynamics (1a), the control law \(\pi(\cdot)\) (which is based on the chosen surrogate dynamics (3)), and the initial state density \(p_{0}\). Our Control Importance Distribution Algorithm (CIDA), shown in Algorithm 1, outlines³ the random search procedure we propose to solve (2). Footnote 3: The function \(1(\cdot)\), used in Algorithm 1, is the indicator function: it returns \(1\) if its argument is true and \(0\) if false. ``` Input: \(p_{0}\): particle filter density at the current time \(k=0\), described by the equiprobable particles \(\Xi\) \(\pi\): CBF safe controller (5) of the surrogate deterministic dynamics defined in (3) \(M\): number of simulations per control sequence \(\alpha\): acceptable statistical violation rate, \(\alpha\in[0,\epsilon)\) \(R\): number of rollouts, i.e. 
## IV Control importance-like distribution for random search

Our random search approach to solve (2) is influenced by importance sampling in the Monte Carlo literature, which prioritizes sampling regions of the integration space with higher contribution. We construct an importance-like distribution \(\mathcal{U}\) of control sequences \(\{u_{k}\}_{k=0}^{N-1}\), ideally concentrated on regions with high confidence of containing feasible, and possibly optimal, solutions of (2). Sampling the distribution \(\mathcal{U}\) is done through a simulation-based algorithm. We first simulate the sequence \(\{x_{k}^{\prime}\}_{k}\) characterized by an initial state \(x_{0}\sim p_{0}\) and a CBF controller applied in feedback, \(u_{k}^{*}=\pi(x_{k}^{\prime})\), obtained from (5), in \[x_{k+1}^{\prime}=f(x_{k}^{\prime},\pi(x_{k}^{\prime}),w_{k}^{\prime}),\,x_{0}^{\prime}\sim p_{0},\,w_{k}^{\prime}\sim\mathcal{W}, \tag{6}\] where the disturbance \(\{w_{k}^{\prime}\}_{k}\) is an i.i.d. realization of \(\{w_{k}\}_{k}\). This sampling procedure defines a distribution of the state sequence \(\{x_{k}^{\prime}\}_{k=0}^{N-1}\), which in turn defines a distribution of the control sequence \(\{u_{k}^{*}\}_{k=0}^{N-1}=\{\pi(x_{k}^{\prime})\}_{k=0}^{N-1}\). We call this the importance distribution \(\mathcal{U}\). Notice that this distribution is a function of: the original dynamics (1a), the control law \(\pi(\cdot)\) (which is based on the chosen surrogate dynamics (3)), and the initial state density \(p_{0}\). Our Control Importance Distribution Algorithm (CIDA), shown in Algorithm 1, outlines3 this random search procedure we propose to solve (2). Footnote 3: The function \(1(\cdot)\), used in Algorithm 1, is the indicator function and returns \(1\) if \(\cdot\) is true and \(0\) if false.

```
Input:
  p_0: current-time (k = 0) particle-filter density, described by equiprobable particles \(\Xi\)
  \(\pi\): CBF safe controller (5) of the surrogate deterministic dynamics defined in (3)
  M: number of simulations per control sequence
  \(\alpha\): acceptable statistical violation rate, \(\alpha\in[0,\epsilon)\)
  R: rollouts number, i.e. the number of control sequence random search trials
Output:
  \(u^{\#}\): current-time control input
for i = 1, 2, ..., R do
  Using \(p_{0}\) and \(\pi\), sample \(\{x_{k}^{\prime}\}_{k=0}^{N-1}\) using (6), then record \(\{u_{k}^{*}\}_{k=0}^{N-1}=\{\pi(x_{k}^{\prime})\}_{k=0}^{N-1}\);
  Using \(\{u_{k}^{*}\}_{k=0}^{N-1}\), \(M\) i.i.d. simulations of \(\{w_{k}\}_{k=0}^{N-1}\), denoted \((\{w_{k,q}\}_{k=0}^{N-1})_{q=1}^{M}\), and the state equation (1a), record the \(M\) trajectories \((\{x_{k,q}\}_{k=0}^{N})_{q=1}^{M}\) such that \(x_{k+1,q}=f(x_{k,q},u_{k}^{*},w_{k,q})\);
  if  \(a_{k}=\frac{1}{M}\sum_{q=1}^{M}1(x_{k,q}\in\mathcal{C})\geq 1-\alpha,\ \forall k\)   (7)
  then \(\{u_{k}^{*}\}_{k=0}^{N-1}\) is feasible, and calculate
    \(J_{i}=\frac{1}{M}\sum_{q=1}^{M}\left(\gamma^{N}\ell_{N}(x_{N,q})+\sum_{k=0}^{N-1}\gamma^{k}\ell_{k}(x_{k,q},u_{k}^{*})\right)\);
  else \(\{u_{k}^{*}\}_{k=0}^{N-1}\) is infeasible, and \(J_{i}\leftarrow\infty\);
  endif
endfor
If finite, find \(\min_{i}J_{i}\) and its corresponding control sequence \(\{u_{k}^{*}\}_{k=0}^{N-1}\), then set \(u^{\#}\gets u_{0}^{*}\);
Propagate the particle filter to the next time step and reset \(k\gets 0\);
```
**Algorithm 1** Control Importance Distribution Algorithm (CIDA)

### _Computational complexity of CIDA_

Notice that in the original problem formulation (2), the safety condition \(\{x_{k}\in\mathcal{C}\}\) has to be satisfied with probability \(\geq 1-\epsilon\). However, Algorithm 1 imposes the safety condition statistically over the \(M\) samples, with rate \(\geq 1-\alpha\). Relying only on the law of large numbers for guarantees would demand \(M\rightarrow\infty\). Similar to [17, 14], and the randomized sample/scenario framework [5], we can provide acceptable safety guarantees with a relatively small4 number of samples. Footnote 4: In the numerical simulations section we use \(M=150\).

**Lemma 1**.: _(**Hoeffding's inequality [10]**). For independent random variables \(Z_{q},\,q=1,\ldots,\bar{M}\), \(\mathbb{P}(Z_{q}\in[a_{q},b_{q}])=1\), \(a_{q}\leq b_{q}\), for all \(t\geq 0\)_ \[\mathbb{P}\left(\sum_{q=1}^{\bar{M}}(Z_{q}-\mathbb{E}\ Z_{q})\geq t\bar{M}\right)\leq\exp\left(-\frac{2\bar{M}^{2}t^{2}}{\sum_{q=1}^{\bar{M}}(b_{q}-a_{q})^{2}}\right)\]

**Theorem 1**.: _For any \(\delta\in(0,1)\), if_ \[M\geq\frac{1}{2(\epsilon-\alpha)^{2}}\log\left(\frac{1}{\delta}\right), \tag{8}\] _then a control sequence \(\{u_{k}\}_{k=0}^{N-1}\) that is feasible w.r.t. Algorithm 1, i.e. satisfies the inequalities (7), is feasible with respect to the original optimization problem (2), with probability \(1-\delta\)._

Proof.: Given an initial density \(p_{0}\), fix a control sequence \(\{u_{k}\}_{k=0}^{N-1}\), where \(u_{k}\in\mathbb{U}\), and let \(\{x_{k}\}_{k=0}^{N}\) be the resulting process characterized by (1a). Let \((\{x_{k,q}\}_{k=0}^{N-1})_{q=1}^{M}\) be defined as in Algorithm 1, using this \(\{u_{k}\}_{k=0}^{N-1}\). Suppose \(\{u_{k}\}_{k=0}^{N-1}\) is infeasible with respect to (2). Then there exists \(l\in\{1,\ldots,N\}\), a minimizer of \(G_{k}=\mathbb{P}(x_{k}\in\mathcal{C})\), such that \(G_{l}=\mathbb{P}(x_{l}\in\mathcal{C})<1-\epsilon\). Define \(Y_{q}=1(x_{l,q}\in\mathcal{C})\). Then \[\mathbb{E}\,Y_{q}=\mathbb{P}(x_{l,q}\in\mathcal{C})=\mathbb{P}(x_{l}\in\mathcal{C})<1-\epsilon,\] since \(x_{l,q}\) is an i.i.d. sample of \(x_{l}\).
Now we check the statistical feasibility of this control sequence according to Algorithm 1: \[\mathbb{P}\left(\left\{\frac{1}{M}\sum_{q=1}^{M}1(x_{k,q}\in\mathcal{C})\geq 1-\alpha,\,\forall k\right\}\right)\] \[\leq\mathbb{P}\left(\left\{\frac{1}{M}\sum_{q=1}^{M}1(x_{l,q}\in\mathcal{C})\geq 1-\alpha\right\}\right),\] \[\leq\mathbb{P}\left(\frac{1}{M}\sum_{q=1}^{M}\left(Y_{q}-\mathbb{E}\,Y_{q}\right)\geq-1+\epsilon+1-\alpha\right),\] \[\leq\mathbb{P}\left(\sum_{q=1}^{M}\left(Y_{q}-\mathbb{E}\,Y_{q}\right)\geq M(\epsilon-\alpha)\right),\] \[\leq\exp\left(-2M(\epsilon-\alpha)^{2}\right),\] where the second inequality follows from the definition of \(Y_{q}\), and the last from Hoeffding's inequality (Lemma 1). If we pick \(\delta\in(0,1)\) and an \(M\) that satisfies (8) with strict inequality, we get \[\mathbb{P}\left(\left\{\frac{1}{M}\sum_{q=1}^{M}1(x_{k,q}\in\mathcal{C})\geq 1-\alpha,\,\forall k\right\}\right)\leq\exp\left(-2M(\epsilon-\alpha)^{2}\right)<\delta.\] Therefore, if \(\{u_{k}\}_{k=0}^{N-1}\) is infeasible w.r.t. (2), then it is infeasible w.r.t. Algorithm 1 with probability \(\geq 1-\delta\). The contrapositive of this statement proves the theorem.

Algorithm 1 has a computational complexity of \(\mathcal{O}(MR)\) per time step. By choosing \(R\) to be of \(\mathcal{O}(L)\), where \(L\) represents the number of particles in the particle filter, the overall complexity of the algorithm, combined with the particle filter, becomes \(\mathcal{O}(L\log L)\) per time step. This algorithm is parallelizable across two dimensions: \(i\), the random search trial number; and \(q\), the index of the \(M\) resulting trajectories. Using a graphics processing unit (GPU) can therefore substantially reduce the required computation time.

**Remark 1**.: _If \(x_{0}^{\prime}=\mathbb{E}\,x_{0}\) and \(w_{k}^{\prime}=0\) are used in (6), the resulting control sequence, call it \(\{\bar{u}_{k}\}_{k}\), is the certainty equivalence control [2]. Simply augmenting this sequence as one of the rollouts in Algorithm 1 guarantees that this algorithm is as good as or better than the typically used certainty equivalence control, in performance and safety over the prediction horizon._

**Remark 2**.: _The distribution of \(w_{k}^{\prime}\) determines the "breadth" of the search space around the certainty equivalence control. Although \(w_{k}^{\prime}\) is presented as an i.i.d. sample of \(w_{k}\), it can be sampled according to a different distribution. When there is no modeling mismatch, i.e. \(w_{k}=0\) and \(x_{0}\) is known, we expect the certainty equivalence control (with \(w_{k}^{\prime}=0\)) to be optimal w.r.t. (2). The larger the uncertainties and modeling mismatches, the further the optimal solution can depart from the certainty equivalence one, and the more breadth we need in the distribution of \(w_{k}^{\prime}\) so that the optimal solution remains in the support of \(\mathcal{U}\)._
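As a quick check of Theorem 1, the bound (8) evaluated at the tolerance levels used later in Section V (\(\epsilon=15\%\), \(\alpha=5\%\), \(\delta=5\%\)) reproduces the sample size \(M=150\); a minimal sketch, together with the statistical feasibility test (7):

```python
import numpy as np

# Sample bound (8) at the levels used in the numerical section.
eps, alpha, delta = 0.15, 0.05, 0.05
M = int(np.ceil(np.log(1.0 / delta) / (2.0 * (eps - alpha) ** 2)))
print(M)   # -> 150, matching the M used in Section V

# Statistical feasibility test (7): traj[q, k] is True when the q-th
# simulated trajectory is inside the safe set C at time step k.
def is_feasible(traj, alpha=0.05):
    a_k = traj.mean(axis=0)          # empirical safety rate per time step
    return bool(np.all(a_k >= 1.0 - alpha))
```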
## V Numerical simulation: autonomous vehicle navigation and obstacle avoidance

An autonomous vehicle is modelled as a discrete-time stochastic unicycle model, with partial state observation: \[\xi_{k+1}=f(\xi_{k},\omega_{k},w_{k})=\xi_{k}+\tau\begin{bmatrix}V\operatorname{sinc}(\frac{\omega_{k}\tau}{2})\cos(\theta_{k}+\frac{\omega_{k}\tau}{2})\\ V\operatorname{sinc}(\frac{\omega_{k}\tau}{2})\sin(\theta_{k}+\frac{\omega_{k}\tau}{2})\\ \omega_{k}\end{bmatrix}+w_{k}, \tag{9}\] \[z_{k}=g(\xi_{k},v_{k})=\begin{bmatrix}x_{k}\\ y_{k}\end{bmatrix}+v_{k}, \tag{10}\] where \(\tau=0.2\,s\) is the time interval of one step, \(\operatorname{sinc}(\cdot)=\sin(\cdot)/\cdot\) is the sinc function, \(\xi_{k}=(x_{k},y_{k},\theta_{k})^{T}\) is the state vector, \(\theta_{k}\) is the heading angle of the vehicle, measured counter-clockwise from the positive \(x\)-axis, and \(\omega_{k}\in\mathbb{U}=[-\pi,\pi]\,s^{-1}\) is the control input, which is the rate of change of the heading angle \(\theta_{k}\). We assume a constant vehicle speed \(V=5\,m/s\). The measurement can be seen as a noisy position measurement. The random variables \(\xi_{0},w_{k}\) and \(v_{k}\) follow the same assumptions as in (1), with \(\mathcal{W}=\mathcal{N}(0,\text{diag}(.2,.,1))\), \(\mathcal{V}=\mathcal{N}(0,\text{diag}(.1,.,1.))\), and the particles \(\Xi\) representing \(p_{0}\), the density of \(\xi_{0}\), are sampled according to the density \(\mathcal{N}((10,0,-\pi/2)^{T},\text{diag}(.2,.,2.))\)5. Footnote 5: The notation \(\mathcal{N}(\mu,\Sigma)\) represents a multivariate Gaussian with mean \(\mu\) and covariance \(\Sigma\); \(\text{diag}(e)\) is the square diagonal matrix with diagonal elements \(e\). The objective is to follow a circular orbit while avoiding multiple obstacles with probability \(1-\epsilon\), where \(\epsilon=15\%\). A CBF controller is well-suited to this objective for a deterministic system. However, the stochastic nature of the dynamics and the partial access to the states hinder the immediate implementation of a CBF-based control. Instead, we pick a surrogate deterministic dynamics model \[\dot{p}_{x}=V_{x},\,\dot{p}_{y}=V_{y}, \tag{11}\] where \(p_{x}=p_{x}(t)\) and \(p_{y}=p_{y}(t)\) are the coordinates of the vehicle, and \(V_{x}\) and \(V_{y}\) are the velocity inputs.

### _Level 1: baseline control \(u_{0}\) for asymptotic behavior_

Suppose we use a baseline safety-agnostic controller, \(u_{0}\), based on a vector-field navigation approach [15]. The vectors of this field represent the desired heading angle to follow an orbit. In this example, the path is a clockwise circular orbit with radius \(r=10\,m\) centered at the origin. The baseline controller is then derived according to this vector field. The desired heading angle is defined as \[\theta^{d}=\gamma-\frac{\pi}{2}-\tan^{-1}\left(k(d-r)\right), \tag{12}\] where \(d\) and \(\gamma\) are, respectively, the distance and the angular position of the vehicle with respect to the orbit's center, \(\gamma=\text{atan}2(p_{y},p_{x})\). The angular position is measured from the positive \(x\)-axis in the counter-clockwise direction. We pick a gain \(k=0.3\). The resulting vector field is visualized in Figure 1. Fig. 1: The vector field defined by a constant magnitude (speed) and an angle given by (12). The red curve is a circular orbit of \(10\,m\) radius.
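A minimal simulation sketch of one step of (9) together with the vector-field heading (12); the disturbance covariance below is an illustrative stand-in, since the exact diagonal entries are garbled in this copy of the text:

```python
import numpy as np

tau, V, k_gain, r = 0.2, 5.0, 0.3, 10.0
rng = np.random.default_rng(0)

def sinc(z):
    # np.sinc(x) is sin(pi x)/(pi x); rescale so sinc(z) = sin(z)/z.
    return np.sinc(z / np.pi)

def unicycle_step(xi, omega, w):
    # One step of (9); xi = (x, y, theta), omega = turn-rate input, w = noise.
    x, y, th = xi
    half = omega * tau / 2.0
    return np.array([x + tau * V * sinc(half) * np.cos(th + half),
                     y + tau * V * sinc(half) * np.sin(th + half),
                     th + tau * omega]) + w

def theta_desired(x, y):
    # Vector-field heading (12) for a clockwise circular orbit of radius r.
    d, gamma = np.hypot(x, y), np.arctan2(y, x)
    return gamma - np.pi / 2.0 - np.arctan(k_gain * (d - r))

xi = np.array([10.0, 0.0, -np.pi / 2.0])
w = rng.multivariate_normal(np.zeros(3), np.diag([0.2, 0.2, 0.1]))  # assumed W
print(unicycle_step(xi, omega=0.1, w=w), theta_desired(*xi[:2]))
```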
The baseline control can then be defined as \(u_{0}=(V\cos(\theta^{d}),V\sin(\theta^{d}))\), where \(V=5\,m/s\) and \(\theta^{d}\) is as in (12).

### _Level 2: CBF control \(u^{\star}\) for obstacle avoidance_

While following the orbit, the vehicle has to avoid three obstacles, represented as circular objects with centers \(\{(9,-5),(-10,-9),(-7,10)\}\,m\) and radii \(\{3,4,3\}\,m\), respectively. For the multiple obstacles, we rely on the concept of the Boolean compositional nonsmooth barrier function [9]. The quadratic program, analogous to the one in Proposition 3 therein, is \[\begin{split} u^{\star}(p_{x},p_{y})=\underset{u}{\arg\min}&\|u-u_{0}\|^{2},\\ \text{s.t.:}&\frac{\partial h_{m}(p_{x},p_{y})}{\partial(p_{x},p_{y})}^{T}u\geq-\alpha(h_{m}(p_{x},p_{y})),\,m=1,2,3.\end{split} \tag{13}\] The functions \(h_{m}(p_{x},p_{y})\) correspond to each obstacle, and are defined as \[h_{m}(p_{x},p_{y})=(p_{x}-p_{x}^{m})^{2}+(p_{y}-p_{y}^{m})^{2}-(r^{m})^{2},\] where \((p_{x}^{m},p_{y}^{m}),r^{m}\) are the coordinates and radius of obstacle \(m\). We pick the class \(\mathcal{K}\) function \(\alpha(h_{m})=0.05h_{m}\) to induce a gradual variation in the vector field around these obstacles. The quadratic program is then solved over a grid on \([-15,15]\times[-15,15]\) in the state space. The resulting \(u^{\star}=(u^{\star}(1),u^{\star}(2))^{T}\) is mapped back to a desired heading angle, which we denote \(\theta^{\star}=\text{atan}2(u^{\star}(2),u^{\star}(1))\). The resulting vector field is shown in Figure 2. This angle, being acquired using (13), is a function of the vehicle's position, i.e. \(\theta^{\star}=\theta^{\star}(p_{x},p_{y})\). Fig. 2: The vector field defined by \(\theta^{\star}\), induced from the quadratic program (13). The red curve is a circular orbit of \(10\,m\) radius, and the blue disks are the obstacles to be avoided by the vehicle.
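For concreteness, a sketch of the Level-2 safety filter (13) for the three obstacles above; the generic NLP solver used here is our choice, since the paper does not specify one:

```python
import numpy as np
from scipy.optimize import minimize

V = 5.0
obstacles = [((9.0, -5.0), 3.0), ((-10.0, -9.0), 4.0), ((-7.0, 10.0), 3.0)]

def h(m, p):                                 # h_m(p_x, p_y) for obstacle m
    (cx, cy), rad = obstacles[m]
    return (p[0] - cx) ** 2 + (p[1] - cy) ** 2 - rad ** 2

def grad_h(m, p):
    (cx, cy), _ = obstacles[m]
    return np.array([2.0 * (p[0] - cx), 2.0 * (p[1] - cy)])

def safety_filter(p, u0):
    # Quadratic program (13): minimally modify u0 subject to the three
    # CBF constraints, with alpha(h) = 0.05 h as in the text.
    cons = [{"type": "ineq",
             "fun": lambda u, m=m: grad_h(m, p) @ u + 0.05 * h(m, p)}
            for m in range(3)]
    return minimize(lambda u: np.sum((u - u0) ** 2), x0=u0,
                    constraints=cons).x

p = np.array([8.0, -1.0])                    # a position near obstacle 1
theta_d = -np.pi / 2.0                       # some desired heading
u0 = V * np.array([np.cos(theta_d), np.sin(theta_d)])
u_star = safety_filter(p, u0)
print(u0, "->", u_star, "theta*:", np.arctan2(u_star[1], u_star[0]))
```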
### _Level 3: CIDA control \(u^{\#}\) for the stochastic system_

For the state-space system (9)-(10), let \(\hat{\xi}_{k}=(\hat{x}_{k},\hat{y}_{k},\hat{\theta}_{k})^{T}\) denote the sample average of the particle-filtered density at time \(k\). The number of particles in the filter we use is \(L=1000\). We define the _certainty equivalence controller_ (CE), using the error signal between \(\theta^{\star}(\hat{x}_{k},\hat{y}_{k})\) and \(\hat{\theta}_{k}\), by \(\omega_{k}^{\text{CE}}=\text{sat}\left(5(\theta^{\star}-\hat{\theta}_{k}),-\pi,\pi\right)\). This function saturates the first argument above at \(\pi\) and below at \(-\pi\), so that \(\omega_{k}^{\text{CE}}\in\mathbb{U}=[-\pi,\pi],\,\forall k\). The result of applying this controller is depicted in Figure 3, which shows the resulting true simulated trajectory; it violates the constraints \(54\) times over \(750\) time-steps. Fig. 3: The true simulated vehicle's position for \(750\) time-steps, when the certainty equivalence control is applied in feedback. Next, we apply our algorithm CIDA with: \(\omega^{\#}=\pi(\xi_{0}^{\prime})\), where \(\pi(\xi_{k}^{\prime})=\text{sat}\left(5(\theta^{\star}(x_{k}^{\prime},y_{k}^{\prime})-\theta_{k}^{\prime}),-\pi,\pi\right)\), \(M=R=150,\,N=10,\,\gamma=1\), and statistical violation tolerance \(\alpha=5\%\). According to Theorem 1, these values enforce a probabilistic violation tolerance \(\leq\epsilon=15\%\), with confidence of at least \(1-\delta=95\%\). To avoid running into infeasibility issues, hard constraints are replaced by soft ones, and the control sequence \(\{u_{k}^{\star}\}_{k}\) achieving \(\max_{k}a_{k}\) is chosen. Figure 4 illustrates a simulated trajectory using \(\omega_{k}^{\#}\) in feedback. Compared to the certainty equivalence control (Figure 3), CIDA takes into account the original dynamics (1) and their stochastic nature, and hence is more capable of enforcing safety. With CIDA, the true simulated state violated the constraints \(15\) times, compared to \(54\) with CE, resulting in a controller that is about \(3.5\) times safer than the standard method.6 Footnote 6: The associated code: [https://github.com/msramada/Control-Importance-Distribution-Algorithm-CIDA-](https://github.com/msramada/Control-Importance-Distribution-Algorithm-CIDA-) The simulations were implemented using Python, an interpreted programming language, on an M1-chip 2021 MacBook Pro with 16.00 GB of RAM. The average running time for each time-step of CIDA is \(\approx 7\) seconds, compared to \(0.07\) seconds for CE. This can be vastly reduced via optimized parallel computing.

## VI Conclusion

This paper presents a method for generating probabilistically safe control for nonlinear stochastic state-uncertain systems. Though the problem is generally intractable, we show that by limiting the search space based on a surrogate deterministic controller, a safe controller over a prediction horizon can be efficiently generated and guaranteed by a predefined margin \(1-\epsilon\). We demonstrate the tradeoff between computational speed and safety compared with a standard certainty equivalence method. Algorithm 1 (CIDA) bears a passing resemblance to both model reference adaptive control [16] and importance sampling from the Monte Carlo integration literature [8]. This is due to the use of focused sampling of the control space, guided by a surrogate model that is both "close enough" to the original stochastic dynamics and "simple enough" to admit nonlinear control design such as a CBF. However, these notions of closeness/simplicity have to be further investigated and, possibly, made quantifiable. Moreover, the recursive feasibility/safety condition is not discussed here; this is challenging in general for scenario/sampling-based methods, even with the linearity and convexity assumptions adopted therein [5]. More investigation of this condition is still required to extend the guarantees offered by these methods.
2309.07454
Quantum vacuum effects in non-relativistic quantum field theory
Nonlinearities in the dispersion relations associated with different interaction designs, boundary conditions and the existence of a physical cut-off scale can alter the quantum vacuum energy of a nonrelativistic system nontrivially. As a material realization of this, we consider a 1D-periodic rotating, interacting non-relativistic setup. The quantum vacuum energy of such a system is expected to comprise two contributions: a fluctuation-induced quantum contribution and a repulsive centrifugal-like term. We analyze the problem in detail within a complex Schroedinger quantum field theory with a quartic interaction potential and perform the calculations non-perturbatively in the interaction strength by exploiting the nonlinear structure of the associated nonlinear Schroedinger equation. Calculations are done both in zeta-regularization and by introducing a cut-off scale. We find a generic, regularization-independent behavior, where the competition between the interaction and rotation can be balanced at some critical ring size, where the quantum vacuum energy has a maximum and the force changes sign. The inclusion of a cut-off smooths out the vacuum energy at small distances but leaves the long-distance behavior unaltered. We discuss how this behavior can be tested with ultracold atoms.
Matthew Edmonds, Antonino Flachi, Marco Pasini
2023-09-14T06:27:16Z
http://arxiv.org/abs/2309.07454v1
# Quantum vacuum effects in non-relativistic quantum field theory ###### Abstract Nonlinearities in the dispersion relations associated with different interaction designs, boundary conditions and the existence of a physical cut-off scale can alter the quantum vacuum energy of a nonrelativistic system nontrivially. As a material realization of this, we consider a 1D-periodic rotating, interacting non-relativistic setup. The quantum vacuum energy of such a system is expected to comprise two contributions: a fluctuation-induced quantum contribution and a repulsive _centrifugal-like_ term. We analyze the problem in detail within a complex Schrodinger quantum field theory with a quartic interaction potential and perform the calculations non-perturbatively in the interaction strength by exploiting the nonlinear structure of the associated nonlinear Schrodinger equation. Calculations are done both in zeta-regularization and by introducing a cut-off scale. We find a generic, regularization-independent behavior, where the competition between the interaction and rotation can be balanced at some critical ring size, where the quantum vacuum energy has a maximum and the force changes sign. The inclusion of a cut-off smooths out the vacuum energy at small distances but leaves the long-distance behavior unaltered. We discuss how this behavior can be tested with ultracold atoms.

## I Introduction

In quantum field theory, the canonical quantization scheme does not fix the order of non-commuting operators in the Hamiltonian, leaving a residual divergent "_zero-point energy_" contribution to the energy density (in natural units): \[\mathscr{E}=\frac{1}{2}\sum_{n}\omega_{n}, \tag{1}\] with \(\omega_{n}\) representing the frequencies of the quantum fluctuations. Wick's normal ordering is then used to enforce a specific order of operator products, resulting in the subtraction of this infinite shift from the vacuum expectation value (vev) of the Hamiltonian, which then vanishes. This has the consequence that the quantum vacuum, _so defined_, does not carry energy, nor linear or angular momentum. Such a procedure is usually justified by saying that a constant shift in the energy cannot be measured, although this view is not entirely tenable, as any finite energy is, in principle, measurable due to its gravitational effect. In relativistic quantum field theory, a better justification follows from the fact that the expectation value of the Hamiltonian in the _noninteracting_ vacuum (i.e., in the absence of external fields or interactions) must vanish for the Hamiltonian, a generator of the Poincare group, to satisfy the correct commutation rules. Then the usual notion of a noninteracting vacuum as a state devoid of energy follows, justifying the use of normal ordering [1; 2]. Even without calling gravity into question [3], a variety of quantum vacuum phenomena, most notably the Casimir effect [4], clearly demonstrates some level of inadequacy of the above definition of an _empty_ physical vacuum _tout court_. In the original version of the Casimir effect, for example, this was evident owing to the imposition of boundary conditions on the quantum fluctuations of the electromagnetic field in the presence of perfectly conducting, parallel plates, resulting in an attractive force between the plates.
More general (and realistic) situations are no different, as boundary conditions result from quantum fields existing in interaction with other fields, and modify the spectrum of the quantum fluctuations, thus changing the zero-point energy. These arguments converge into Casimir's definition of the energy of the quantum vacuum \(E_{vac}\) as the difference between the zero-point energies in the presence, \(E\left[\partial B\right]\), and in the absence, \(E\left[\emptyset\right]\), of boundaries: \[E_{vac}=E\left[\partial B\right]-E\left[\emptyset\right]. \tag{2}\] Such a definition is compatible with the vanishing of the vev of the Hamiltonian in the noninteracting vacuum (i.e., no boundary) and gives a calculable recipe (within any regularization scheme) for the quantum vacuum energy in response to changes in external conditions [2; 5; 6]. This view of the complexity of the vacuum has been vindicated during the past quarter of a century by many successful experiments, starting with [7; 8] (see also Ref. [9] for a recent additional list of examples of applications to nanophotonics, nanomechanics, and chemistry). A less explored question concerns the quantum vacuum energy in non-relativistic systems (see, for some discussions, Refs. [10; 11; 12; 13; 14; 15]). The answer might seem simple, since in a non-relativistic context there is no issue associated with antiparticles or the ordering of the operators, suggesting that the zero-point energy can be safely ignored. However, this is not the case in general. Even from the vantage point of the original Casimir effect, the story remains subtle, because the quantum vacuum energy emerges from deformations of the electromagnetic quantum fluctuations, and no simple non-relativistic limit can be taken: the photon is massless and propagates at the speed of light. However, in a non-relativistic set-up one can imagine emergent degrees of freedom, constrained by boundaries, giving rise to non-trivial quantum vacuum phenomena, and a number of works have explored such questions, particularly in the context of quantum liquids and Bose-Einstein condensation, where (1) contributes to the zero-temperature thermodynamic potential (on top of the classical ground-state contribution); see, for example, Refs. [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26]. There are at least two reasons why in a non-relativistic setting the situation is far from obvious. The first is that any time we are in the presence of interactions and non-trivial boundary conditions, the frequencies \(\omega_{n}\) in (1) develop a non-trivial dependence on the ground state of the system. This can be seen using the background field method (see Ref. [11]), although computing the frequencies within this framework becomes a hard task. Earlier calculations relying on a perturbative expansion around small coupling exist [27; 28; 29] and, more recently, Ref. [30] has developed a way to compute the quantum vacuum energy for a relativistic \(1+1\) dimensional scalar field theory without relying on expansions in powers of the interaction strength (see also Refs. [31; 32; 33; 34]). The second reason has to do with the regularization. In the relativistic case, the quantum vacuum energy emerges from the summation of the entire spectrum as in (1); this summation is divergent and must be regularized. A subtlety here is the possible existence of a physical cut-off that may alter the spectral sum in (1). Within a lattice approach this should be possible (see Ref.
[12]), however it is not at all obvious how to do this within an effective field theory approach. It is certainly an interesting question to ask whether any remnant of the quantum vacuum energy remains in the non-relativistic limit. When the model under consideration is nonlinear, the difference in the dispersion relation due to the presence of interactions, the presence of external forcing (e.g., rotation), a physical cut-off scale and boundary conditions are all factors that together conjure to induce intricate behaviors in the quantum vacuum energy. Here we look at the above questions within the paradigmatic nonlinear Schrodinger equation. Our approach to compute the quantum vacuum energy exploits the integrability structure of the nonlinear Schrodinger equation associated with our problem. The calculations are done both using zeta-regularization including the contribution from the whole spectrum, as well as a more physical regularization scheme where the spectral sums are modulated by a frequency dependent window-function that suppresses the contribution of the high-energy modes, leaving a dependence on a physical cross-over scale. As we shall see the two methods lead to compatible results, with the only expected consequence of the cut-off being that of regularizing the vacuum energy at short distance. In conclusion, we will describe how our predictions can, in principle, be measured experimentally with cold-atom rings. ## II Non-relativistic Schrodinger model We shall consider a system of non-relativistic interacting bosons, described by a complex Schrodinger quantum field \(\Phi=(\phi_{1}+i\phi_{2})/\sqrt{2}\), with \(\phi_{1},\phi_{2}\in\mathbb{R}\), confined to a 1D ring of radius \(R\), rotating with constant angular velocity \(\Omega\). We assume that the periodicity of the ring is externally broken by the presence of a barrier that we describe by imposing Dirichlet boundary conditions at one point on the ring. The Lagrangian density is \[\begin{split}&\mathcal{L}=\frac{i}{2}\left(\Phi^{\dagger}\dot{ \Phi}-\Phi\dot{\Phi}^{\dagger}\right)+\frac{i}{2}\Omega\left(\Phi^{\dagger} \Phi^{\prime}-\Phi\Phi^{\dagger\prime}\right)-\\ &\frac{1}{2mR^{2}}\Phi^{\dagger\prime}\Phi^{\prime}-\frac{\lambda }{4}\left(\Phi^{\dagger}\Phi\right)^{2},\end{split} \tag{3}\] where \(0\leq\varphi\leq 2\pi\), \(x=R\varphi\), \(\dot{}=d/dt\) and \({}^{\prime}=d/d\varphi\). We adopt units of \(\hbar=1\). Expression (3) represents the Lagrangian density of an observer co-rotating with the ring. In this reference frame, boundary conditions for the co-rotating observer are time-independent [35; 36; 37]. The following nonlinear Schrodinger equation can be derived from (3): \[i\dot{\Phi}=-i\Omega\Phi^{\prime}-\frac{1}{2mR^{2}}\Phi^{\prime\prime}+\frac{ \lambda}{2}\left|\Phi\right|^{2}\Phi. \tag{4}\] The normal mode decomposition can be carried out by looking for stationary solutions of the form \[\Phi\left(t,\varphi\right)=e^{-i\omega_{p}t}f_{p}\left(\varphi\right). \tag{5}\] This allows us to write the original equation (4) as \[0=\frac{1}{2mR^{2}}f_{p}^{\prime\prime}+i\Omega f_{p}^{\prime}-\left(\frac{ \lambda}{2}\left|f_{p}\right|^{2}-\omega_{p}\right)f_{p}. 
\tag{6}\] To solve (6) we decompose \(f_{p}\) as \[f_{p}\left(\varphi\right)=\rho\left(\varphi\right)e^{i\alpha\left(\varphi \right)},\qquad\text{with }\rho\left(\varphi\right),\alpha\left(\varphi\right)\in\mathbb{R}, \tag{7}\] that leads to \[0 = \frac{\rho^{\prime\prime}}{2mR^{2}}+\left(\omega_{p}-\frac{\alpha^{ \prime\,2}}{2mR^{2}}-\Omega\alpha^{\prime}\right)\rho-\frac{\lambda}{2}\rho^{3}, \tag{8}\] \[0 = \frac{\alpha^{\prime\prime}\rho}{2mR^{2}}+\frac{1}{mR^{2}}\alpha^ {\prime}\rho^{\prime}+\Omega\rho^{\prime}. \tag{9}\] The above system of equations can be solved analytically, first obtaining \(\alpha^{\prime}\) in terms of \(\rho\) from Eq. (9), \[\alpha^{\prime}=\beta=\frac{C}{\rho^{2}}-mR^{2}\Omega, \tag{10}\] (\(C\) is an integration constant) and then substituting \(\alpha^{\prime}\) in Eq. (8); this gives rise to a cubic nonlinear equation in \(\rho\) that can be solved in terms of Jacobi elliptic functions. Imposition of the boundary conditions selects the solution as a Jacobi sn function and leads to the following quantization conditions for the eigenfrequencies. The procedure is straightforward but lengthy. For completeness we give all the details in the Appendix and refer the reader to Refs. [38; 39; 40; 41; 42] for further details on elliptic equations. The solution can be written as \[\Phi\left(t,\varphi\right)=A_{n}\,e^{-i\omega_{n}t}e^{-i\left(mR^{2}\Omega \varphi-\pi/4\right)}\mathbf{sn}\left(q_{n}\varphi,k_{n}\right), \tag{11}\] with the normalization factor \(A_{n}\) expressed in terms of elliptic integrals of first and second kind, \(K(z)\) and \(E(z)\) respectively, \[A_{n}^{2}=\frac{k_{n}^{2}}{2\pi R\left(1-E\left(k_{n}\right)/K\left(k_{n} \right)\right)}. \tag{12}\] The momentum \(q_{n}\) and the elliptic modulus \(k_{n}\) are quantized according to the following relations \[q_{n} = \frac{n}{\pi}K\left(k_{n}\right),\quad n\in\mathbb{N}, \tag{13}\] \[\lambda mR\frac{\pi}{4n^{2}} = K\left(k_{n}\right)\left(K\left(k_{n}\right)-E\left(k_{n}\right) \right), \tag{14}\] where (13) comes from the periodicity of the solution and (14) is derived from the first integral of the equation of motion. Finally, the eigenfrequencies are given by \[\omega_{n}=\left(1+k_{n}^{2}\right)q_{n}^{2}/(2mR^{2})-mR^{2}\Omega^{2}/2. \tag{15}\] Details on how to derive Eqs. (11), (12), (13), (14) and (15) are given in the Appendix. ## III Quantum vacuum energy and spectral asymptotics In the following, we illustrate how to compute the quantity (1) for the present case. A non-renormalized expression for the quantum vacuum energy can be written as follows (see Ref. [10; 43]): \[\mathscr{E}_{r}(s)=\frac{\mu^{s}}{2}\sum_{n}\omega_{n}^{1-s}, \tag{16}\] where \(s\in\mathbb{C}\) is a complex-valued regularization parameter and \(\mu\) is a renormalization scale with dimension of energy. The index \(r\) is a reminder that (16) refers to the co-rotating frame. The eigenvalues \(\omega_{n}\) are given in terms of the nonlinear, coupled algebraic equations (13), (14) and (15). The regularization of (16) is done by finding a representation that converges in some region of the complex-\(s\) plane, followed by analytical continuation to the physical value \(s\to 0\). Here, we use the spectral asymptotics of the eigenvalues and express (16) as \[\mathscr{E}_{r}(s)=\Delta+\bar{\mathscr{E}}_{r}(s), \tag{17}\] where \[\bar{\mathscr{E}}_{r}(s)=\frac{\mu^{s}}{2}\sum_{n}\left(\omega_{n}^{(a)} \right)^{1-s}, \tag{18}\] and \[\Delta=\frac{1}{2}\sum_{n}\left(\omega_{n}-\omega_{n}^{(a)}\right). 
\tag{19}\] The quantity \(\omega_{n}^{(a)}\) represents the asymptotic expansion of the eigenvalues \(\omega_{n}\) as a function of the quantum number \(n\). If the asymptotic expansion includes all terms up to \(O(1/n^{2})\), as we shall do here, then \(\Delta\) in (19) is \(O(1/n^{2})\) and thus converges for \(s\to 0\) (in formula (19) we have already set \(s\to 0\)). Such a procedure simply confines the divergences to \(\bar{\mathscr{E}}_{r}(s)\), which will need explicit regularization. The first step of the process is to obtain the asymptotic behavior of the eigenvalues. This can be obtained numerically, but it is not difficult to find its analytical form. Since the left hand side of Eq. (14) converges to zero for \(n\to\infty\), while the right hand side, as a function of \(k_{n}\), goes to zero only in the limit \(k_{n}\to 0\), while decreasing monotonically for increasing \(k_{n}>0\), the right hand side of (14) for small \(k_{n}\) gives the relevant limit to capture the large \(n\) asymptotic behavior \(k_{n}^{2}\approx 2\lambda mR/(\pi n^{2})\). This result used in conjunction with (13) and (15) allows us to readily extract the leading asymptotic behavior of \(\omega_{n}\): \[\omega_{n}^{(a)}=n^{2}/\eta^{2}+\rho^{2}+O\left(1/n^{2}\right), \tag{20}\] where \(\eta^{2}=8mR^{2}\) and \(\rho^{2}=3\lambda/(8\pi R)-mR^{2}\Omega^{2}/2\). Fig. 1 shows a comparison between the eigenvalues computed numerically and their asymptotic counterpart. Figure 1: (color online) Comparison between (15) and the eigenvalues computed numerically. Panel (a) shows the absolute difference between the asymptotic and exact nonlinear eigenvalues, while (b) shows individual datasets for fixed \(R\). Coloured data in (b) were obtained numerically, while the grey dashed lines were calculated from Eq. (15). Here \(\Omega Rml/\hbar=0.5\) and \(\lambda ml/\hbar^{2}=0.5\) throughout. The large-\(n\) scaling of the eigenvalues is consistent with Weyl's law, which in the present case predicts a leading large-\(n\) behavior of the \(\omega_{n}\) scaling as \(n^{2}\) and independent of \(\lambda\)[44]. Using (20), \[\bar{\mathscr{E}}_{r}(s)=\frac{\left(\mu\eta^{2}\right)^{s}}{2\eta^{2}}\sum_{n}\left(n^{2}+\eta^{2}\rho^{2}\right)^{1-s}, \tag{21}\] and the Chowla-Selberg representation (see Refs. [45; 46]), \[\sum_{n=1}^{\infty}\left(n^{2}+\gamma^{2}\right)^{-z}=-\frac{\gamma^{-2z}}{2}+\frac{\sqrt{\pi}}{2}\frac{\Gamma(z-1/2)}{\Gamma(z)}\gamma^{1-2z} \tag{22}\] \[+\frac{2\pi^{z}}{\Gamma(z)}\gamma^{-z+1/2}\sum_{p=1}^{\infty}p^{z-1/2}K_{z-1/2}\left(2\pi p\gamma\right),\] from which the limit \(s\to 0\) can be taken to arrive at the following regularized expression \[\bar{\mathscr{E}}_{r}=\lim_{s\to 0}\bar{\mathscr{E}}_{r}(s)=-\frac{1}{4}\left(\frac{3\lambda}{8\pi R}-\frac{mR^{2}\Omega^{2}}{2}\right). \tag{23}\] Thus, the total quantum vacuum energy in the co-rotating frame is given by \(E_{r}=\bar{\mathscr{E}}_{r}+\Delta\). To get the energy in the laboratory frame \(E_{s}\) one can use \(E_{s}-E_{r}=\Omega L\), where \(L=-\partial E_{r}/\partial\Omega\) is the angular momentum [31; 35; 36; 47]: \[E_{s}=\Delta-\Omega\frac{\partial\Delta}{\partial\Omega}-\frac{3\lambda}{32\pi R}-\frac{mR^{2}\Omega^{2}}{8}. \tag{24}\] The resulting force \(F_{s}=-\partial E_{s}/\partial R\) is \[F_{s}=-\frac{\partial\Delta}{\partial R}+\Omega\frac{\partial^{2}\Delta}{\partial\Omega\partial R}-\frac{3\lambda}{32\pi R^{2}}+\frac{mR\Omega^{2}}{4}.
\tag{25}\] Ignoring for the time being the contributions from \(\Delta\), the above expression comprises a contribution proportional to \(-\lambda/R\) that vanishes for \(\lambda\to 0\) and scales as the inverse of the ring size: this is an attractive "Casimir-like" contribution. The other contribution, \(E_{\mathcal{I}}=\frac{1}{2}\mathcal{I}_{R}\Omega^{2}\), is proportional to the moment of inertia \(\mathcal{I}_{R}=mR^{2}\) of the ring with radius \(R\). The vanishing behavior for \(\lambda\to 0\) and \(\Omega\to 0\) is consistent with the fact that the quantum vacuum energy should vanish in the absence of interactions and boundary conditions. The angular velocity appears as the square of \(\Omega\), and this is again consistent with the fact that our model does not include parity-breaking terms; thus the energy should be symmetric under \(\Omega\leftrightarrow-\Omega\). The force vanishes at the critical radius \[R_{crit}\approx\sqrt[3]{\frac{3\lambda}{8\pi m\Omega^{2}}}, \tag{26}\] with its sign changing from negative-attractive for \(R<R_{crit}\) to positive-repulsive for \(R>R_{crit}\). Interestingly, the way the force scales with the ring size also changes with the angular velocity: it scales linearly in the regime of fast rotation, while it scales as the inverse square of the ring size for slow rotation. The symbol "\(\approx\)" in (26) indicates that the contribution of \(\Delta\) has been ignored. Units of \(\hbar\) are restored in the numerics. Fig. 2 shows the quantum vacuum energy (panels a and c) as a function of radius \(R\) and rotation strength \(\Omega\), respectively, while the two lower panels (b and d) show the corresponding force associated with each dataset from (a) and (c). The grey shaded region shows the parameter regime where the force is repulsive. Figure 2: (color online) Quantum vacuum energy and force. (a) and (c) show the quantum vacuum energy, Eq. (24), for the same \(\lambda\) (colour groups) and varying \(\Omega\) (panel a) or \(R\) (panel c) (see text labels). (b) and (d) show the corresponding force, Eq. (25), for each data set. The light grey shading indicates the parameter regions where the force changes sign from attractive to repulsive. Throughout the paper, the quantity \(l\) represents a generic unit-scale. Fig. 3 shows heatmaps of Eq. (26) in the \((R,\Omega)\) and \((R,\lambda)\) parameter spaces, (a) and (b) respectively. In panel (a) the interaction strength is \(\lambda ml/\hbar^{2}=10\), while the rotation strength is \(\Omega ml^{2}/\hbar=5\) in (b). The solid blue lines in both panels show the border between the repulsive regime and the causality limit defined by \(\Omega Rml/\hbar=1\). The red dashed line indicates where the force changes sign, obtained from Eq. (26). The red data point in each panel corresponds to the point \((R_{\circ},\Omega_{\circ})\) in (a) and \((R_{\circ},\lambda_{\circ})\) in (b) where Eq. (26) and the causality limit coincide, and \[\left(R_{\circ},\Omega_{\circ}\right)=\left(\frac{3}{8\pi}\frac{\lambda ml^{2}}{\hbar^{2}},\frac{8\pi}{3}\frac{\hbar^{2}}{m^{2}l^{3}}\frac{1}{\lambda}\right)\!. \tag{27}\] The point defined by Eq. (27) in Fig. 3 shows the maximum rotation strength where repulsive solutions are obtained; the model of Eq. (4) is then expected to support a causal repulsive force in the region \(\Omega_{c}<\Omega<R^{-1}(ml/\hbar)\) and \(R>R_{\circ}\). Likewise, for panel (b) the causal repulsive regime is defined between \(0<\lambda<\lambda_{c}\) and \(0<R<R_{\circ}\).
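The quantization conditions (13)-(14), the eigenfrequencies (15), and the critical radius (26) can all be checked numerically; a minimal sketch in units \(\hbar=1\), with illustrative parameter values of our choosing (note that SciPy's elliptic integrals take the parameter \(m_{par}=k^{2}\)):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import ellipk, ellipe

# Units hbar = 1; illustrative parameter values (not from the paper).
m, R, Omega, lam = 1.0, 2.0, 0.1, 0.5

def k_n(n):
    # Solve (14): K(k)(K(k) - E(k)) = lam*m*R*pi/(4 n^2) for the modulus k.
    target = lam * m * R * np.pi / (4.0 * n ** 2)
    g = lambda k: ellipk(k**2) * (ellipk(k**2) - ellipe(k**2)) - target
    return brentq(g, 1e-12, 1.0 - 1e-12)     # RHS is monotone in k, unique root

def omega_n(n):
    k = k_n(n)
    q = n * ellipk(k**2) / np.pi             # quantization condition (13)
    return (1.0 + k**2) * q**2 / (2.0 * m * R**2) - m * R**2 * Omega**2 / 2.0

print([round(omega_n(n), 4) for n in range(1, 5)])

# Critical radius (26), where the force (25) changes sign (Delta ignored):
F_s = lambda r: -3*lam/(32*np.pi*r**2) + m*r*Omega**2/4
R_crit = (3*lam/(8*np.pi*m*Omega**2))**(1/3)
print(R_crit, F_s(0.9*R_crit) < 0 < F_s(1.1*R_crit))  # attractive -> repulsive
```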
An analysis, qualitatively similar to Fig. 3(b), can be done for the \((R,\Omega)\) parameter space with constant \(\lambda\). A subtle point has to do with how the above results will change in the presence of a cut-off scale associated with a minimal length scale (e.g., the inter-atomic separation scale). We address this question by modifying the regularization procedure to include a frequency dependent _window function_. This is implemented by defining \[\bar{\mathscr{E}}_{r}=\frac{1}{2}\sum_{n}\omega_{n}^{(a)}\sigma_{n}(\ell_{c}), \tag{28}\] and the residual \(\Delta\) as \[\Delta=\frac{1}{2}\sum_{n}\left(\omega_{n}-\omega_{n}^{(a)}\right)\sigma_{n}(\ell_{c}). \tag{29}\] Here, we choose the window function as follows: \[\sigma_{n}(\ell_{c})=\exp\left(-\ell_{c}n^{2}/(8mR^{2})\right), \tag{30}\] with the argument of the exponential set by the leading large-\(n\) asymptotics of the spectrum. The cut-off scale \(\ell_{c}\) determines how high-frequency modes are suppressed. The limit \(\ell_{c}\to 0\) of (28)-(29) returns the non-regularized expression for \(\bar{\mathscr{E}}_{r}\) discussed earlier. While the choice of \(\sigma_{n}(\ell_{c})\) is arbitrary, (30) allows us to write (28) as \[\bar{\mathscr{E}}_{r}=-\frac{\rho^{2}}{4}+\frac{\rho^{2}}{4}\theta_{3}\left(\frac{\ell_{c}}{\pi\eta^{2}}\right)-\frac{1}{4\pi\eta^{2}}\theta_{3}^{\prime}\left(\frac{\ell_{c}}{\pi\eta^{2}}\right), \tag{31}\] where \(\theta_{3}\) is the following Jacobi _thetanull_ function [48]: \[\theta_{3}(x)=\sum_{n=-\infty}^{\infty}e^{-\pi xn^{2}}. \tag{32}\] This choice of regularization has the advantage that the first term of (31) corresponds to the fully resummed result, and the effect of the cut-off is encoded in the latter two terms of (31). Proving the consistency of the two approaches, with and without the cut-off, requires care, since in the limit \(\ell_{c}\to 0\) the theta function diverges and requires regularization. The theta function can be regularized by requiring that the cut-off dependent contribution in (31) vanishes in the limit \(\ell_{c}\to 0\), corresponding to a subtraction of the divergent contribution. To compute the finite \(\ell_{c}\to 0\) limit of (31) we use the modular transformation \[\theta_{3}\left(x\right)=\sqrt{1/x}\ \theta_{3}\left(1/x\right) \tag{33}\] along with the small-\(x\) expansion of the theta function [48], leading to \[\theta_{3}\left(x\right)\approx 1/\sqrt{x}+O(\exp(-\pi/x)/\sqrt{x}). \tag{34}\] Using this expression in (31) and removing the divergent part, consistently with the regularization of the theta function, gives the expected fully resummed result. The corrections due to the cut-off near \(R\sim R_{crit}\) to the fully resummed result can be estimated assuming \(\ell_{c}\ll R_{crit}\), and can be computed including higher-order corrections in the expansion of the theta function: \[\theta_{3}\left(x\right)-1/\sqrt{x}\approx 2\sigma/\sqrt{x}+2\sigma^{4}/\sqrt{x}+O\left(\sigma^{9}/\sqrt{x}\right), \tag{35}\] where \(\sigma=\exp\left(-\pi/x\right)\). Using (35) in (31) implies that corrections to the fully resummed result are exponentially small; that is, the behavior of the vacuum energy is robust against the inclusion of a cut-off smaller than the critical radius for large enough ring size. Figure 3: (color online) Quantum fluctuation-induced force heatmaps. (a) shows Eq. (25) in the limit \(\Delta=0\) in the \((\Omega,R)\) parameter space for fixed \(\lambda ml/\hbar^{2}=10\). The solid blue line indicates the border between the repulsive and noncausal regions, while the dashed red line indicates the point at which the force changes sign. (b) shows the magnitude of the force in the \((\lambda,R)\) parameter space for fixed \(\Omega ml^{2}/\hbar=5\).
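The properties (32)-(34) used above are straightforward to verify numerically; a small sketch:

```python
import numpy as np

def theta3(x, n_max=200):
    # Jacobi thetanull (32): sum over n of exp(-pi * x * n^2).
    n = np.arange(-n_max, n_max + 1)
    return np.sum(np.exp(-np.pi * x * n ** 2))

# Modular transformation (33): theta3(x) = theta3(1/x) / sqrt(x),
# and small-x behaviour (34): theta3(x) ~ 1/sqrt(x).
for x in [0.05, 0.5, 2.0]:
    print(x, theta3(x), theta3(1.0 / x) / np.sqrt(x), 1.0 / np.sqrt(x))
```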
For small \(R\), we expect the cut-off to regulate the diverging \(1/R\) behavior of the leading term in Eq. (31). Expanding the theta functions in (31) for large \(\ell_{c}/R^{2}\) gives, at leading order, \[\bar{\mathscr{E}}_{r}\approx\frac{\rho^{2}-\eta^{-2}}{2}e^{-\ell_{c}/\eta^{2}}\xrightarrow[R\to 0]{}0, \tag{36}\] which can be contrasted with the \(\rho^{2}\sim R^{-2}\) behavior of the vacuum energy obtained by full resummation.

## IV Conclusions

The behavior of the quantum vacuum energy of an interacting non-relativistic system is far from trivial. Here we have looked at an example of this using a nonlinear Schrodinger quantum field theory and computed the quantum vacuum energy and force without resorting to any perturbative expansion in the coupling constant, relying simply on the exact integrability of the nonlinear problem. The novel results are summarized in the "phase diagram" of Fig. 3, which shows how the fluctuation-induced force, as a function of rotation and interaction strength, separates into a noncausal region plus an attractive-repulsive region. This behavior arises from the competition between an attractive Casimir-like component and a repulsive centrifugal one. An interesting potential connection seems evident between our quantum field theoretical set-up and the area of ultracold atoms. A possibly relevant example is the set-up of Ref. [49], which consists of a \({}^{23}\)Na BEC confined in a ring of size \(R\sim 20\mu\)m. Considering a quasi-1D approximation, the interaction strength \(\lambda\) can be expressed in terms of the scattering length \(a_{s}\) and the transverse length scale \(l\) as \(\lambda=g/(\pi l^{2})\) (here \(g=4\pi\hbar^{2}a_{s}/m\) defines the atomic interaction, with \(a_{s}=50a_{0}\) for \({}^{23}\)Na) [50]. Taking \(l\sim 2\mu\)m to ensure that \(l\ll R\) and assuming a condensate of \(N=2\times 10^{3}\) atoms (species other than \({}^{23}\)Na, e.g. \({}^{87}\)Rb, can have larger atom numbers and different scattering lengths) allows us to arrive at a dimensionless interaction strength \(\lambda ml/\hbar^{2}\sim 4a_{s}N/l\), which, using the above parameter values, gives \(4a_{s}N/l\sim 10\), a value close to that used in Fig. 3(a). The force \(F_{s}\) can also be estimated in a similar manner; using Eq. (25) and the above definitions we obtain the dimensionless force \[F_{s}ml^{3}/\hbar^{2}=-(3a_{s}Nl/8\pi R^{2})+m^{2}l^{3}R\Omega^{2}/4\hbar^{2}. \tag{37}\] Using a rotation speed of \(\Omega\sim 2\pi\times 25\,\)Hz from the experiment of Ref. [51], we obtain \(F_{s}ml^{3}/\hbar^{2}\sim 0.1\), modest but potentially large enough to be observable in a future experiment. Using these values, a ring of size \(R\sim 20\mu\)m would fall in the causal repulsive region of Fig. 3(a), with \(\Omega ml^{2}/\hbar\sim 0.2\) favoring lower rotation frequencies. In this work Dirichlet boundary conditions have been used, which could be simulated using a weak link as realized in the BEC ring experiments of Refs. [52; 53]. The physical system described in this work has potential applications in atomtronics [54], facilitating an additional opportunity to explore the fundamental physics associated with the quantum vacuum.
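As a quick arithmetic check of the estimates above (standard SI constants; the \({}^{23}\)Na mass below is an approximate value of ours):

```python
import numpy as np

hbar, a0 = 1.054571817e-34, 5.29177210903e-11      # SI: hbar, Bohr radius
m_Na = 23 * 1.66053906660e-27                      # 23Na mass (approximate)
a_s, N, l = 50 * a0, 2e3, 2e-6                     # values quoted above
R, Omega = 20e-6, 2 * np.pi * 25                   # ring radius, rotation

print("lambda m l / hbar^2 ~", 4 * a_s * N / l)    # ~ 10, as in Fig. 3(a)

# Dimensionless force (37):
F = (-3 * a_s * N * l / (8 * np.pi * R**2)
     + m_Na**2 * l**3 * R * Omega**2 / (4 * hbar**2))
print("F_s m l^3 / hbar^2 ~", F)                   # ~ 0.1, as quoted above
```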
Extensions to systems of fermions [55] or with multiply-connected geometries [56] offers additional avenues to explore the effects described in this work in uncharted scenarios. ## Acknowledgements A.F.'s research was supported by the Japanese Society for the Promotion of Science Grant-in-Aid for Scientific Research (KAKENHI, Grant No. 21K03540). M.E.'s research was supported by the Australian Research Council Centre of Excellence in Future Low-Energy Electronics Technologies (Project No. CE170100039) and funded by the Australian Government, and by the Japan Society of Promotion of Science Grant-in-Aid for Scientific Research (KAKENHI Grant No. JP20K14376). A.F. and M.E. acknowledge support from the University of Queensland (SMP Accelerator Grant). We would like to thank G. Marmorini for discussions. ## Appendix A Derivation of the solutions and of the spectrum ### Solutions In this appendix, we will illustrate how to solve the system (9)-(10) that we re-write here for convenience: \[\frac{1}{2mR^{2}}\rho^{\prime\prime}+\left(\omega_{p}-\frac{{\alpha^{\prime}} ^{2}}{2mR^{2}}-\Omega{\alpha^{\prime}}\right)\rho-\frac{\lambda}{2}\rho^{3}= \tag{38}\] \[\frac{1}{2mR^{2}}{\alpha^{\prime\prime}}\rho+\frac{1}{mR^{2}}{ \alpha^{\prime}}\rho^{\prime}+\Omega\rho^{\prime}=0. \tag{39}\] The second equation can be solved by separation of variables, replacing \({\alpha^{\prime}}\left(\varphi\right)=\beta\left(\varphi\right)\), \[\beta^{\prime}\rho+2\beta\rho^{\prime}+2mR^{2}\Omega\rho^{\prime}=0. \tag{40}\] Integrating yields \[\int\frac{d\beta}{2\beta+2mR^{2}\Omega}=-\int\frac{d\rho}{\rho}, \tag{41}\] leading to \[\alpha^{\prime}=\beta=\frac{C}{\rho^{2}}-mR^{2}\Omega, \tag{42}\] where \(C\) is an integration constant. Substituting (42) allows to express (38) as \[\rho^{\prime\prime}+F_{1}\frac{1}{\rho^{3}}+F_{2}\rho+F_{3}\rho^{3}=0. \tag{43}\] where we have defined \[\begin{cases}F_{1}=-C^{2}\\ F_{2}\equiv\epsilon_{p}=2mR^{2}\left(\omega_{p}+\frac{m}{2}R^{2}\Omega^{2}\right) \\ F_{3}=-\lambda mR^{2}\end{cases}\] (A.44) The first integral can be obtained by multiplying both sides of (A.43) by \(\rho^{\prime}\) and integrating \[\rho^{\prime\,2}-F_{1}\frac{1}{\rho^{2}}+F_{2}\rho^{2}+\frac{F_{3}}{2}\rho^{4 }+H=0,\] (A.45) where \(H\) is an integration constant. Finally, multiplying both sides for \(\rho^{2}\) and changing variables, \[\begin{cases}s=\rho^{2}\\ \frac{s^{\prime}}{2}=\rho\rho^{\prime},\end{cases}\] (A.46) gives \[\left(s^{\prime}\right)^{2}=4F_{1}-4Hs-4F_{2}s^{2}-2F_{3}s^{3}.\] (A.47) The above equation corresponds to the differential equation of an undamped quadratic anharmonic oscillator, whose canonic form can be obtained by differentiating with respect to \(\varphi\) and dividing by \(2s^{\prime}\), \[s^{\prime\prime}=-2H-4F_{2}s-3F_{3}s^{2}.\] (A.48) Following Ref. [38] (see also Refs. [39; 40; 41; 42]), we can write Eq. (A.47) \[\left(s^{\prime}\right)^{2} = d\left(s-\alpha_{1}\right)\left(s-\alpha_{2}\right)\left(s- \alpha_{3}\right)\] (A.49) \[= -d\alpha_{1}\alpha_{2}\alpha_{3}+d\left(\alpha_{1}\alpha_{2}+ \alpha_{2}\alpha_{3}+\alpha_{3}\alpha_{1}\right)s-\] \[-d\left(\alpha_{1}+\alpha_{2}+\alpha_{3}\right)s^{2}+ds^{3}\] with \(\alpha_{1},\alpha_{2},\alpha_{3}\) being roots of RHS polynomial, and \[\begin{cases}\alpha_{1}+\alpha_{2}+\alpha_{3}=-2\frac{F_{2}}{F_{3}}\\ \alpha_{1}\alpha_{2}+\alpha_{2}\alpha_{3}+\alpha_{3}\alpha_{1}=2\frac{H}{F_{3}} \\ \alpha_{1}\alpha_{2}\alpha_{3}=2\frac{F_{1}}{F_{3}}.\end{cases}\] (A.50) The advantage of expressing Eq. (A.47) as Eq. 
(A.49) is that the latter is one of the standard nonlinear ordinary differential equation, whose solutions can be expressed in terms of Jacobi elliptic functions as: \[s\left(\varphi\right)=\alpha_{3}-\left(\alpha_{3}-\alpha_{2}\right)\mathtt{ sn}^{2}\left(q\varphi,k\right)\] (A.51) with \[k=\sqrt{\frac{\alpha_{3}-\alpha_{2}}{\alpha_{3}-\alpha_{1}}},\] (A.52) being the elliptic modulus and \[q=\sqrt{\frac{F_{3}}{2}\left(\alpha_{3}-\alpha_{1}\right)}.\] (A.53) Throughout the paper, we adopt for the Jacobian elliptic functions the notation of NIST Digital Library of Mathematical Functions [57], according to which \[\mathbf{dn}^{2}\left(x,m\right)+m^{2}\,\mathtt{sn}^{2}\left(x,m \right)=1\] (A.54a) \[E\left(m\right)=\int_{0}^{\frac{\pi}{2}}\sqrt{1-m^{2}\sin^{2} \left(t\right)}dt\] (A.54b) \[K\left(m\right)=\int_{0}^{\frac{\pi}{2}}\frac{1}{\sqrt{1-m^{2} \sin^{2}\left(t\right)}}dt;\] (A.54c) In the above expressions, \(\mathtt{dn}\) and \(\mathtt{sn}\) define, respectively, the Jacobi delta amplitude and elliptic sine, and \(K\left(m\right)\) and \(E\left(m\right)\) define, respectively, the complete elliptic integrals of the first and second kind. Using (A.46) and (A.51), the general solution can be expressed as follows \[\rho\left(\varphi\right)=\sqrt{\alpha_{3}-\left(\alpha_{3}-\alpha_{2}\right) \mathtt{sn}^{2}\left(q\varphi,k\right)}.\] (A.55) The solution depends on the physical parameters \(m,R,\Omega,\lambda,\omega\) and the integration constants \(C,H\) through the roots \(\alpha_{1},\alpha_{2},\alpha_{3}\) and through \(k\), and \(q\). #### Boundary conditions, normalization and spectrum Having to deal with the solution (A.55) in all its algebraic complexity (the explicit dependence of the solutions on the physical parameters is intricate) can be bypassed by directly imposing the boundary conditions. In this paper, we shall focus on the case of Dirichlet boundary conditions, \[\Phi\left(t,0\right)=\Phi\left(t,2\pi R\right)=0,\] (A.56) which implies (using Eqs. (6) and (8) in the main text) \[\rho\left(0\right)=0.\] (A.57) Exploiting the fact that \[\mathtt{sn}\left(0,k\right)=0,\ \ \ \ \forall k\in\mathbb{R}\] (A.58) we can write \[\rho\left(0\right)=\sqrt{\alpha_{3}-\left(\alpha_{3}-\alpha_{2}\right) \mathtt{sn}^{2}\left(0,k\right)}=0,\] (A.59) which implies \[\alpha_{3}=0.\] (A.60) This simplifies the solution to \[\rho\left(\varphi\right)=\sqrt{\alpha_{2}}\,\mathtt{sn}\left(\sqrt{-\frac{F_{3}}{2 }\alpha_{1}}\varphi,\sqrt{\frac{\alpha_{2}}{\alpha_{1}}}\right).\] (A.61) Notice that the constraint \(\alpha_{3}=0\) also fixes the value of the integration constant \(C\): specifying \(\alpha_{3}=0\) in equation (A.49) gives \[\left(s^{\prime}\right)^{2}=d\left(\alpha_{1}\alpha_{2}\right)s-d\left( \alpha_{1}+\alpha_{2}\right)s^{2}+ds^{3}\] (A.62) where no constant terms appear. Comparing the above equation with Eq. (A.47) implies that the constant term proportional to \(F_{1}\) in Eq. 
(A.47) has to vanish: \[F_{1}\equiv-C^{2}=0.\] (A.63) This simplifies considerably the system of equations (A.50), which become \[\begin{cases}\alpha_{1}+\alpha_{2}=-2\frac{F_{2}}{F_{3}}\\ \alpha_{1}\alpha_{2}=2\frac{H}{F_{3}}.\end{cases}\] (A.64) Imposing the remaining boundary condition at \(\varphi=2\pi\), i.e., \[\rho\left(2\pi\right)=0\] (A.65) implies that \[\mathtt{sn}\left(\sqrt{-\frac{F_{3}}{2}\alpha_{1}}\varphi,\sqrt{\frac{ \alpha_{2}}{\alpha_{1}}}\right)=0,\] (A.66) from which, using the the property of the Jacobi \(\mathtt{sn}\) function \[\mathtt{sn}\left(2nK\left(m\right)+2ilK\left(1-m\right),m\right)=0\qquad \forall n,l\in\mathbb{Z},\] we arrive at \[-\frac{F_{3}}{2}\alpha_{1}=\frac{n}{\pi}K\left(\sqrt{\frac{\alpha_{2}}{\alpha _{1}}}\right).\] (A.67) \(K\left(x\right)\) represents the elliptic integral of the first kind as defined in (A.54c). Summarizing, we have \[\alpha_{1}+\alpha_{2}=-2\frac{F_{2}}{F_{3}}\] (A.68a) \[\alpha_{1}\alpha_{2}=2\frac{H}{F_{3}}\] (A.68b) \[-\frac{F_{3}}{2}\alpha_{1}=\frac{n}{\pi}K\left(\sqrt{\frac{ \alpha_{2}}{\alpha_{1}}}\right)\] (A.68c) that, together with the normalization condition, give us 4 independent equations for 4 variables \(\left(\alpha_{1},\alpha_{2},H,\omega\right)\). In practice, the above set of equations defines the quantization condition of the eigenvalues \(\omega_{p}\). It is possible to show that the system admits a unique solution but it is easier to proceed in an alternative and faster way. Using the condition \(F_{1}=0\) directly in equation (A.43) leads to \[\rho^{\prime\prime}+F_{2}\rho+F_{3}\rho^{3}=0.\] (A.69) This step allows us to write the solution in a simpler form \[\rho\left(\varphi\right)=A\,\mathtt{sn}\left(q\varphi,k\right).\] (A.70) Using (A.70) in Eq. (A.69) gives \[2k^{2}q^{2}\mathtt{sn}^{3}\left(q\varphi,k\right)-\left(1+k^{2} \right)q^{2}\mathtt{sn}\left(q\varphi,k\right)=\] \[-F_{2}\,\mathtt{sn}\left(q\varphi,k\right)-F_{3}A^{2}\,\mathtt{ sn}\left(q\varphi,k\right).\] (A.71) From which we obtain, by matching the coefficients of the like powers of \(\mathtt{sn}\left(q\varphi,k\right)\), the following relations \[\begin{cases}F_{3}=-\frac{2k^{2}q^{2}}{A^{2}}\\ F_{2}=\epsilon_{p}=\left(1+k^{2}\right)q^{2}.\end{cases}\] (A.72a) It is interesting to notice that the quantization condition (A.68c) becomes \[q_{n}=\frac{n}{\pi}K\left(k\right),\quad n\in\mathbb{N}.\] (A.73) (The same quantization conditions would have arisen imposing the boundary conditions directly on the simpler solutions, confirming the validity of the procedure). Solving the remaining conditions (A.72a),(A.72b) for \(q,k\) we obtain the following relations \[q_{n}^{2} =\frac{A_{n}^{2}F_{3}+2F_{2}}{2}\] (A.74a) \[k_{n}^{2} =-\frac{A_{n}^{2}F_{3}}{A_{n}^{2}F_{3}+2F_{2}}\] (A.74b) \[q_{n} =\frac{n}{\pi}K\left(k_{n}\right),\] (A.74c) where we have defined \(q\to q_{n}\), \(k\to k_{n}\) and \(A\to A_{n}\) to make the dependence on \(n\) explicit. The computation of the "normalization" coefficients \(A_{n}\) is carried out using the non-relativistic normalization condition, \[\left\langle\phi|\phi\right\rangle=\int_{V}\,dV\phi^{\star}\left(x\right)\phi \left(x\right)=1,\] (A.75) which gives \[\begin{split} 1&=A_{n}^{2}R\int_{0}^{2\pi}\mathtt{sn}^{2} \left(q_{n}\varphi,k_{n}\right)d\varphi\\ &=2\pi\frac{A_{n}^{2}R}{k_{n}^{2}}-\frac{A_{n}^{2}R}{k_{n}^{2}} \int_{0}^{2\pi}\mathtt{dn}^{2}\left(q_{n}\varphi,k_{n}\right)d\varphi\end{split} \tag{100}\] where we have used the Jacobi identity (101a). 
Using the definition of the Jacobi epsilon function \[\mathcal{E}\left(x,k\right)=\int_{0}^{x}\mathtt{dn}^{2}\left(t,k\right)\,dt \tag{101}\] and the following relation (which comes from a combination of quasi-addition and quasi-periodic formulas [57]), \[\mathcal{E}\left(nK\left(k\right),k\right)=nE\left(k\right) \tag{102}\] it is possible to simplify the normalization condition as \[\begin{split} 1&=\frac{A_{n}^{2}R}{k_{n}^{2}}\left(2\pi- \frac{\pi}{nK\left(k_{n}\right)}\mathcal{E}\left(2nK\left(k_{n}\right),k_{n} \right)\right)\\ &=\frac{2\pi A_{n}^{2}R}{k_{n}^{2}}\left(1-\frac{E\left(k_{n} \right)}{K\left(k_{n}\right)}\right)\end{split} \tag{103}\] or equivalently \[A_{n}^{2}=\frac{k_{n}^{2}}{2\pi R\left(1-\frac{E\left(k_{n}\right)}{K\left(k_ {n}\right)}\right)}. \tag{104}\] Using relations (100a), (100b) and (100c), it takes simple steps to arrive at the following relation \[-\frac{F_{3}}{R}\frac{\pi}{4n^{2}}=K\left(k_{n}\right)\left(K\left(k_{n} \right)-E\left(k_{n}\right)\right) \tag{105}\] that, along with (101b), closes the quantization condition for \(\omega_{n}\), \(k_{n}\) and \(q_{n}\). #### Complete solution In order to find the complete solution, \[f_{p}\left(\varphi\right)=\rho\left(\varphi\right)e^{i\alpha\left(\varphi \right)}, \tag{106}\] we shall need to find the phase \(\alpha\left(\varphi\right)\). Using (100a) and (101) it is easy to arrive at the following expression \[\alpha\left(\varphi\right)=-mR^{2}\Omega\varphi+\Xi \tag{107}\] where \(\Xi\) is an integration constant. Using (107), (100), along with Eqs. (6), and (8) from the main text, we arrive at \[\Phi\left(t,\varphi\right)=A_{n}\,e^{-i\omega_{n}t}e^{-i\left(mR^{2}\Omega \varphi-\Xi\right)}\mathtt{sn}\left(q_{n}\varphi,k_{n}\right). \tag{108}\] The quantity \(\Xi=\pi/4\) is a phase and the factor \(\exp(i\Xi)\) corresponds to a rotation of the phase \(\alpha\), leaving the EOM unaltered. #### Noninteracting limit The noninteracting limit \(\lambda\to 0\) simplifies Eq. (14) from the main text to \[K\left(k_{n}\right)\left(K\left(k_{n}\right)-E\left(k_{n}\right) \right)=0. \tag{109}\] Using the fact that \(K\left(m\right)>0\) for any \(m\in\left[0,1\right)\), the above condition further simplifies to \(K\left(m\right)=E\left(m\right)\), hence \(m=0\), i.e. \(k_{n}\to 0\) for \(\lambda\to 0\). Setting \(k_{n}=0\) in the equation for the eigenvalues and using the properties \(\mathtt{sn}\left(x,0\right)=\sin\left(x\right)\) and \(K\left(0\right)=\pi/2\) yields \[\omega_{n}=\frac{n^{2}}{4}\frac{1}{2mR^{2}}-\frac{m}{2}R^{2}\Omega^{2}. \tag{110}\] Taking similar steps in the eigenfunctions, leads to \[\Phi\left(t,\varphi\right)=A_{n}\,e^{-i\omega_{n}t}e^{-i\left(mR^{2}\Omega \varphi-\Xi\right)}\sin\left(\frac{n}{2}\varphi\right), \tag{111}\] which reduce to ordinary plane waves for \(\Omega\to 0\). For comparison, see Refs. [15; 35; 36]. #### Solutions in the laboratory frame The solutions in the stationary-laboratory frame can be obtained by performing the inverse coordinate transformation: \(t\to t\) and \(\varphi\rightarrow\varphi+\Omega t\), leading to \[\Phi\left(t,\varphi\right)\ =\ A_{n}\,e^{-i\omega_{n}t}e^{-i\left(mR^{2}\Omega \left[\varphi+\Omega t\right]_{2\pi}-\Xi\right)}\mathtt{sn}\left(q_{n}\left[ \varphi+\Omega t\right]_{2\pi},k_{n}\right),\] with \(\left[u\right]_{2\pi}\equiv u\left[\mathrm{mod}(2\pi)\right]\), which encodes the \(2\pi\) periodicity of the solutions.
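As a numerical consistency check of the appendix, the normalization condition can be verified by solving (13)-(14) for a given mode and integrating \(\mathtt{sn}^{2}\) directly; a sketch with illustrative parameters of our choosing (SciPy's `ellipj` uses the parameter \(m_{par}=k^{2}\)):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import ellipk, ellipe, ellipj
from scipy.integrate import quad

m, R, lam, n = 1.0, 2.0, 0.5, 1            # hbar = 1; illustrative values

target = lam * m * R * np.pi / (4.0 * n**2)
k = brentq(lambda k: ellipk(k**2)*(ellipk(k**2) - ellipe(k**2)) - target,
           1e-12, 1 - 1e-12)               # elliptic modulus from (14)
q = n * ellipk(k**2) / np.pi               # momentum from (13)
A2 = k**2 / (2*np.pi*R*(1 - ellipe(k**2)/ellipk(k**2)))   # A_n^2 from (12)

sn2 = lambda phi: ellipj(q * phi, k**2)[0]**2   # sn(q phi, k)^2
val, _ = quad(sn2, 0.0, 2*np.pi)
print(A2 * R * val)                        # -> 1.0: the state is normalized
```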
2309.06992
On the Intelligent Proportional Controller Applied to Linear Systems
We analyze in this paper the effect of the well known intelligent proportional controller on the stability of linear control systems. Inspired by the literature on neutral time delay systems and advanced type systems, we derive sufficient conditions on the order of the control system, under which, the used controller fails to achieve exponential stability. Furthermore, we obtain conditions, relating the system's and the control parameters, such that the closed-loop system is either unstable or not exponentially stable. After that, we provide cases where the intelligent proportional controller achieves exponential stability. The obtained results are illustrated via numerical simulations, and on an experimental benchmark that consists of an electronic throttle valve.
Mohamed Camil Belhadjoudja, Mohamed Maghenem, Emmanuel Witrant
2023-09-13T14:40:52Z
http://arxiv.org/abs/2309.06992v1
# On the Intelligent Proportional Controller Applied to Linear Systems ###### Abstract We analyze in this paper the effect of the well-known _intelligent proportional controller_ on the stability of linear control systems. Inspired by the literature on neutral time-delay systems and advanced-type systems, we derive sufficient conditions on the order of the control system, under which, the used controller fails to achieve exponential stability. Furthermore, we obtain conditions, relating the system's and the control parameters, such that the closed-loop system is either unstable or not exponentially stable. After that, we provide cases where the intelligent proportional controller achieves exponential stability. The obtained results are illustrated via numerical simulations, and on an experimental benchmark that consists of an electronic throttle valve. ## I Introduction Model-free control (MFC) aims to regulate control systems with unknown dynamical equations. The MFC that we consider here has been introduced in [1, 2]; see also [3, 4] for more recent formulations. Generally speaking, this approach consists of relating the input and the output by an equation, known as _the ultra-local form_, involving the output (and its time derivatives), the input, and an unknown function lumping whatever is unknown in the system [1]. As a result, the control input is composed of two parts: a first part designed to compensate for the unknown function, and a second part that consists of a classical linear controller, usually, a PID controller. The resulting controller is known as the _intelligent PID_ controller. This class of controllers has been tested both numerically and experimentally on different classes of systems, such as automotive engines [5], automated vehicles [6] and fault accommodation in greenhouses [7]. This being said other types of MFC techniques are available in the literature; see [8]. Due to its easy implementation, as opposed to more advanced control strategies, MFC using intelligent PIDs is increasingly applied. However, despite this growing popularity, the rigorous analysis of these controllers is still at its early stage, to the best of our knowledge. Indeed, the stability guarantees for the resulting closed-loop system remain, mostly, unexplored. Some results along this direction have been obtained, for example, in [9], where links between the sampled intelligent PID controller and the sampled classical PID controller in velocity form are established. In [10], the robustness of intelligent PIDs is studied via sensitivity analysis. In [11], the discretized closed loop using MFC is shown to coincide with the Euler forward approximation of a certain class of systems. The stability of the latter class of systems is then analyzed. However, these conclusions do not necessarily extend to the original closed-loop system under MFC. On the other hand, in some works, MFC and its intelligent linear controllers have been reinforced via different control techniques. For example, in [12], MFC is combined with model predictive control. In [13], a controller combining MFC and sliding mode control is proposed. Despite their proven efficiency, these techniques are more complex to implement, as opposed to the intelligent PID controller. In this paper, we prove the efficiency and show the limitations of the intelligent proportional controller (iP) when applied to linear control systems. 
That is, we prove that applying an intelligent proportional controller to a linear control system reduces to applying a PD controller to a neutral delay system. Hence, the intelligent proportional controller inherits some of the limitations of the classical PD controller. More precisely, we derive sufficient conditions on the order of the system, under which, the origin fails to be exponentially stable. Furthermore, we derive sufficient conditions, on the system's parameters and the control gains, under which, the origin is either unstable or fails to be exponentially stable. Then, based on existing results on neutral delay systems, we derive sufficient conditions for exponential stability, which illustrates situations where the iP controller guarantees better results than just asymptotic stability. We illustrate our theoretical results via numerical examples and via an experimental benchmark. In the latter, we solve the angle-tracking problem for an electronic throttle valve. The paper is organized as follows. The problem statement is in Section II. Then, some preliminary results are presented in Section III. Our main results are in Section IV. Numerical examples are in Section V. Finally, the experimental results are in Section VI. **Notation.** We denote by \(\mathbb{R}^{n}\) the set of \(n\)-tuples of real numbers, by \(\mathbb{R}_{>0}\) the set of positive real numbers and by \(\mathbb{R}_{\geq 0}\) the set of nonnegative real numbers. We let \(\mathbb{N}:=\{0,1,2,...\}\), \(\mathcal{Z}:=\{0,\pm 1,\pm 2,...\}\), and \(\mathbb{C}\) be the set of complex numbers. Given \(\tau\in\mathbb{R}_{>0}\) and a time-varying function \(y\), we write \(y_{\tau}(t):=y(t-\tau)\). Given \(a\in\mathbb{N}\), we denote the \(a^{th}\) derivative of \(y\) by \(y^{(a)}\), the first derivative by \(\dot{y}\), and the second derivative by \(\ddot{y}\). Given a matrix \(A\in\mathbb{R}^{n\times n}\), we denote by \(||A||\) its \(2\)-norm, by \(s(A)\) its spectral abscissa, by \(\rho(A)\) its spectral radius and by \(\mu(A)\) its logarithmic norm with respect to \(||.||\). We denote by \(I_{n}\) the identity matrix of dimension \(n\), and by \(0_{nm}\) the zero matrix of dimension \(n\times m\).
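Before the formal analysis, a minimal simulation sketch may help fix ideas. It assumes the standard first-order form of the iP controller from the MFC literature: the ultra-local model \(\dot{y}=F+\alpha u\), the lumped term estimated from the last sample as \(\hat{F}(t)\approx\dot{y}(t)-\alpha u(t-\Delta t)\), and the control law \(u=(-\hat{F}+\dot{y}^{*}-K_{P}e)/\alpha\) with tracking error \(e=y-y^{*}\). The plant, gains, and step size below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Minimal discrete-time sketch of an intelligent proportional (iP) loop for the
# ultra-local form y_dot = F + alpha*u. The plant is a hypothetical stable
# first-order LTI system, treated as unknown by the controller.
dt, T = 1e-3, 5.0
alpha, Kp = 1.0, 10.0            # ultra-local gain and proportional gain (illustrative)
a, b = -2.0, 1.0                 # placeholder plant: y_dot = a*y + b*u
y, u, y_prev = 0.0, 0.0, 0.0
y_ref = 1.0                      # constant set point, so y_ref_dot = 0

for _ in range(int(T / dt)):
    y_dot_est = (y - y_prev) / dt        # backward-difference derivative estimate
    F_hat = y_dot_est - alpha * u        # estimate of the lumped unknown term
    e = y - y_ref
    u = (-F_hat - Kp * e) / alpha        # iP law: cancel F_hat, add P action on e
    y_prev = y
    y += dt * (a * y + b * u)            # integrate the true (hidden) plant

print(f"output after {T} s: {y:.4f} (set point {y_ref})")
```

With these placeholder values the output settles at the set point; the paper's point is precisely that such a loop hides a neutral delay system, so stability cannot be taken for granted for every plant order and gain choice.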
2310.03751
A Simple Illustration of Interleaved Learning using Kalman Filter for Linear Least Squares
Interleaved learning in machine learning algorithms is a biologically inspired training method with promising results. In this short note, we illustrate the interleaving mechanism via a simple statistical and optimization framework based on Kalman Filter for Linear Least Squares.
Majnu John, Yihren Wu
2023-09-22T00:01:02Z
http://arxiv.org/abs/2310.03751v1
# A Simple Illustration of Interleaved Learning ###### Abstract Interleaved learning in machine learning algorithms is a biologically inspired training method with promising results. In this short note, we illustrate the interleaving mechanism via a simple statistical and optimization framework based on Kalman Filter for Linear Least Squares. keywords: Interleaved learning, Kalman Filter, Linear Least Squares ## 1 Introduction Interleaved learning (IL) is a type of biological learning phenomenon observed in brain regions such as the neocortex, and has inspired machine learning algorithms. IL is one of the mechanisms expounded by Complementary Learning Systems Theory (McClelland, McNaughton and O'Reilly, 1995; Marr, 1971) on how successful learners such as human beings mitigate the effects of 'catastrophic interference' while learning. Recent illustrations of IL using neural networks include Saxena, Shobe and McNaughton, 2022, who showed that if the new information is similar to a subset of old items, then deep neural networks can learn the new information rapidly and with the same level of accuracy by interleaving the old items in the subset. A similar insight was presented in McClelland, McNaughton and Lampinen, 2020, where it was shown that for artificial neural networks, information consistent with prior knowledge can sometimes be integrated very quickly. Another recent paper (Ban and Xie, 2021) formulated interleaved machine learning as a multi-level optimization problem, and developed an efficient differentiable algorithm to solve the interleaving learning problem with application to neural architecture search. A closely related biological concept is interleaved replay, which has also been empirically validated in the literature (Gepperth and Karaoguz, 2016; Kemker and Kanan, 2018). Over the past couple of decades, ideas inspired by biological IL have been utilized in a wide array of online learning methods as well, especially to prevent catastrophic forgetting. See, for example, Wang _et al._, 2015, a comprehensive and recently updated survey on continual, lifelong learning. All the applications and illustrations of IL to machine learning so far have used complex models such as neural networks. In this short paper, we aim to present a simple illustration of IL by adapting a framework from the traditional optimization and statistics literature, namely, the Kalman-Filter (KF) approach for linear least squares (LLS). Understanding IL in this relatively straightforward framework may help in the future with proving theoretical convergence properties, and then, hopefully, with extending to similar results for more complex models and gradient descent type algorithms. To the best of our knowledge, an illustration of IL based on Kalman filter for linear least squares has not yet appeared in the literature (e.g., no mention of such an approach in the comprehensive survey by Wang _et al._, 2015). ## 2 Kalman Filter for Linear Least Squares (KF4LLS) To better understand the concepts and notation later on, we briefly review KF4LLS by closely following the exposition provided in section 3.2 in Bertsekas and Tsitsiklis, 1996. Note that although KF is widely considered an estimation method associated with dynamical systems, the one employed for linear least squares is a specialized case where the states of the underlying dynamical system stay constant (Bertsekas, 1996). Consider fitting a linear model to the set of input-output pairs \((\mathbf{y}_{1},\mathbf{X}_{1}),\ldots,(\mathbf{y}_{m},\mathbf{X}_{m})\). 
Here \(\mathbf{y}_{i}\in\mathbb{R}^{n_{i}}\), \(\mathbf{X}_{i}\) is an \(n_{i}\times q\) matrix and each \((\mathbf{y}_{i},\mathbf{X}_{i})\) is a given data block. Model fitting corresponds to minimizing the following quadratic cost function \[\mathcal{C}(\mathbf{r})=\sum_{i=1}^{m}||\mathbf{y}_{i}-\mathbf{X}_{i}\mathbf{r}||^{2},\ \ \text{for}\ \ \mathbf{r}\in\mathbb{R}^{q}.\] KF4LLS is an incremental version of the Gauss-Newton method, which cycles through the data blocks. Specifically, the solution given by KF4LLS to the above minimization program is \(\boldsymbol{\psi}_{m}\), which can be obtained recursively by the algorithm \[\boldsymbol{\psi}_{i}=\boldsymbol{\psi}_{i-1}+\mathbf{H}_{i}^{-1}\mathbf{X}_{i}^{t}(\mathbf{y}_{i}-\mathbf{X}_{i}\boldsymbol{\psi}_{i-1}),\ \ \mathbf{H}_{i}=\mathbf{H}_{i-1}+\mathbf{X}_{i}^{t}\mathbf{X}_{i},\ \ i=1,\ldots,m,\] where \(\boldsymbol{\psi}_{0}\) is an arbitrary vector and \(\mathbf{H}_{0}=\mathbf{0}\). We assume that \(\mathbf{X}_{1}^{t}\mathbf{X}_{1}\) is positive definite and that makes all \(\mathbf{H}_{i}\)'s (except \(\mathbf{H}_{0}\)) positive definite as well. KF4LLS has been well-studied in the literature (see, for example, papers citing Bertsekas, 1996). A derivation of the algorithm is given in section 3.2 in Bertsekas and Tsitsiklis, 1996, and convergence analysis is presented in Bertsekas, 1996. ## 3 Interleaving KF4LLS Consider data blocks from two different 'populations' \[(\mathbf{b}_{1},\mathbf{U}_{1}),\ldots,(\mathbf{b}_{m},\mathbf{U}_{m})\ \ \text{and}\ \ (\mathbf{f}_{1},\mathbf{V}_{1}),\ldots,(\mathbf{f}_{m},\mathbf{V}_{m}).\] To fix ideas, it may help to think in terms of an example provided in McClelland, McNaughton and O'Reilly, 1995, where the two populations are birds and fish. In our notation, we may think of the columns of \(\mathbf{U}_{i}\)'s and \(\mathbf{V}_{i}\)'s as features related to birds and fish, respectively, and similarly \(\mathbf{b}_{i}\)'s and \(\mathbf{f}_{i}\)'s as the corresponding target variables. In previous papers that mentioned this example, the target variables were class variables, but in this paper, for convenience and simplicity, we focus on continuously distributed target variables. We also consider a third population which is a 'mixture' of the first two populations in terms of data characteristics. In our running example, the third population will be penguins. In terms of features, we think of penguins as an admixture of birds and fish - they have wings like a bird and they can swim like a fish! We denote the data blocks from the penguin population as \[({\bf p}_{1},{\bf Z}_{1}),\ldots,({\bf p}_{m},{\bf Z}_{m}).\] For all populations, we assume the relationship between the corresponding target variables and feature data to be linear models: \[{\bf b}_{i}={\bf U}_{i}{\bf r}_{b}+\varepsilon_{b},\ \ {\bf f}_{i}={\bf V}_{i}{\bf r}_{f}+\varepsilon_{f},\ \ {\bf p}_{i}={\bf Z}_{i}{\bf r}_{p}+\varepsilon_{p},\ i=1,\ldots,m,\] where \(\varepsilon_{b},\varepsilon_{f}\) and \(\varepsilon_{p}\) are _i.i.d._ mean-zero error variables, and for convenience, we assume \(m=2k\) (_i.e._, an even number). We are interested in knowing whether we can train a model by interleaving data blocks from birds and fish, and then use the trained model to predict target variables related to penguins (_i.e._ the \({\bf p}_{i}\)'s) using feature data from penguins (_i.e._ the \({\bf Z}_{i}\)'s). 
That is, the goal is to train a model via interleaving, using only data from birds and fish, but then test the model using only features and target variables from penguins. In this short note, we assume \[{\bf Z}_{i}=\alpha{\bf U}_{i}+(1-\alpha){\bf V}_{i} \tag{1}\] and \[{\bf r}_{p}=\alpha{\bf r}_{b}+(1-\alpha){\bf r}_{f},\ \mbox{for some}\ \alpha\in[0,1]. \tag{2}\] In words, each feature matrix \({\bf Z}_{i}\) of penguins is a weighted average of \({\bf U}_{i}\) and \({\bf V}_{i}\), the corresponding feature matrices of birds and fish. Similarly, the weight-parameters in the model for penguins, \({\bf r}_{p}\), connecting the features to the target variable are a weighted average of the weight-parameters in the models for birds and fish. If, instead of (1), we assumed the distribution of the \({\bf Z}_{i}\)'s to be a mixture of the distributions of the \({\bf U}_{i}\)'s and \({\bf V}_{i}\)'s, we observed results similar to the ones presented later in this short note, but to focus the presentation we will just work with the assumption made in (1). In this case, our interleaved algorithm is as follows. **Interleaved KF4LLS algorithm:** _Step 0(a)_**:** Center all data blocks, including the target variables, individually by subtracting the corresponding column means. Thus, in the following step, all \({\bf b}_{i}\)'s, \({\bf f}_{i}\)'s, and all columns of \({\bf U}_{i}\)'s and \({\bf V}_{i}\)'s are mean-zero vectors. _Step 0(b)_**:** Set \({\bf H}_{0}^{(\alpha)}={\bf 0}\) and \[{\bf U}_{i}^{(\alpha)}=\sqrt{\alpha}\,{\bf U}_{i},\ {\bf b}_{i}^{(\alpha)}=\sqrt{\alpha}\,{\bf b}_{i};\ \ \ {\bf V}_{i}^{(\alpha)}=\sqrt{1-\alpha}\,{\bf V}_{i},\ {\bf f}_{i}^{(\alpha)}=\sqrt{1-\alpha}\,{\bf f}_{i},\ i=1,\ldots,m.\] _Step 1_**:**\({\bf H}_{1}^{(\alpha)}={\bf H}_{0}^{(\alpha)}+({\bf U}_{1}^{(\alpha)})^{t}({\bf U}_{1}^{(\alpha)});\ {\bf\psi}_{1}={\bf\psi}_{0}+({\bf H}_{1}^{(\alpha)})^{-1}({\bf U}_{1}^{(\alpha)})^{t}({\bf b}_{1}^{(\alpha)}-{\bf U}_{1}^{(\alpha)}{\bf\psi}_{0})\). _Step 2_**:**\({\bf H}_{2}^{(\alpha)}={\bf H}_{1}^{(\alpha)}+({\bf V}_{1}^{(\alpha)})^{t}({\bf V}_{1}^{(\alpha)});\ {\bf\psi}_{2}={\bf\psi}_{1}+({\bf H}_{2}^{(\alpha)})^{-1}({\bf V}_{1}^{(\alpha)})^{t}({\bf f}_{1}^{(\alpha)}-{\bf V}_{1}^{(\alpha)}{\bf\psi}_{1})\). _Step 3_**:**\({\bf H}_{3}^{(\alpha)}={\bf H}_{2}^{(\alpha)}+({\bf U}_{2}^{(\alpha)})^{t}({\bf U}_{2}^{(\alpha)});\ {\bf\psi}_{3}={\bf\psi}_{2}+({\bf H}_{3}^{(\alpha)})^{-1}({\bf U}_{2}^{(\alpha)})^{t}({\bf b}_{2}^{(\alpha)}-{\bf U}_{2}^{(\alpha)}{\bf\psi}_{2})\). _Step 4_**:**\({\bf H}_{4}^{(\alpha)}={\bf H}_{3}^{(\alpha)}+({\bf V}_{2}^{(\alpha)})^{t}({\bf V}_{2}^{(\alpha)});\ {\bf\psi}_{4}={\bf\psi}_{3}+({\bf H}_{4}^{(\alpha)})^{-1}({\bf V}_{2}^{(\alpha)})^{t}({\bf f}_{2}^{(\alpha)}-{\bf V}_{2}^{(\alpha)}{\bf\psi}_{3})\). ... etc.... 
_Step (m-1)_**:**\({\bf H}_{m-1}^{(\alpha)}={\bf H}_{m-2}^{(\alpha)}+({\bf U}_{k}^{(\alpha)})^{t}({\bf U}_{k}^{(\alpha)});\ {\bf\psi}_{m-1}={\bf\psi}_{m-2}+({\bf H}_{m-1}^{(\alpha)})^{-1}({\bf U}_{k}^{(\alpha)})^{t}({\bf b}_{k}^{(\alpha)}-{\bf U}_{k}^{(\alpha)}{\bf\psi}_{m-2})\), where \(k=m/2\). _Step m:_\(\mathbf{H}_{m}^{(\alpha)}=\mathbf{H}_{m-1}^{(\alpha)}+(\mathbf{V}_{k}^{(\alpha)})^{t}(\mathbf{V}_{k}^{(\alpha)});\ \boldsymbol{\psi}_{m}=\boldsymbol{\psi}_{m-1}+(\mathbf{H}_{m}^{(\alpha)})^{-1}(\mathbf{V}_{k}^{(\alpha)})^{t}(\mathbf{f}_{k}^{(\alpha)}-\mathbf{V}_{k}^{(\alpha)}\boldsymbol{\psi}_{m-1})\). The algorithm alternates between using \((\mathbf{b}_{i},\mathbf{U}_{i})\)'s (i.e. birds data blocks) in odd-numbered steps and \((\mathbf{f}_{i},\mathbf{V}_{i})\)'s (i.e. fish data blocks) in even-numbered steps, making it a proper interleaved training approach. Note that the algorithm is an oracle algorithm because it can be implemented only if the mixing coefficient \(\alpha\) is known. Typically this is possible only for simulated synthetic data. Thus, the above algorithm in its current form serves only for illustrating IL and not for any practical applications. For real data, if assumption (1) truly holds then \(\alpha\) can be estimated. It is more likely that for real data there will be a separate mixing coefficient for each column of \(\mathbf{Z}_{i}\); such separate coefficients can also be estimated, for example, using a grid search on the unit interval. **Illustration with synthetic data** We illustrate the algorithm on synthetic data generated as follows. We set \(\alpha=0.25\), \(n_{i}=n=100\), \(q=6\) and \(m=6\) (i.e. \(k=3\)). Performance of the algorithm was assessed by calculating the bias and the mean-squared error (MSE) based on the estimates \(\boldsymbol{\psi}_{i}\)'s after each step of the algorithm. Average bias and MSE over 5000 simulation-iterations were plotted (see Figure 1). Elements in \(\mathbf{r}_{b}\) and \(\mathbf{r}_{f}\) were generated separately from a \(Uniform(-5,5)\) distribution, and then fixed for all simulation-iterations. For each simulation-iteration, each row of \(\mathbf{U}_{i}\) was generated independently from a multivariate normal, \(N(\boldsymbol{\mu}_{1},I_{6\times 6})\), and similarly each row of \(\mathbf{V}_{i}\) was generated from \(N(\boldsymbol{\mu}_{2},I_{6\times 6})\), where \(\boldsymbol{\mu}_{1}=[\mu_{1},\ldots,\mu_{1}]^{t}\) and \(\boldsymbol{\mu}_{2}=[\mu_{2},\ldots,\mu_{2}]^{t}\). Here \(\mu_{1}\), \(\mu_{2}\) were generated separately from a \(Uniform(-10,10)\) distribution, and then fixed for all the simulation-iterations. Bias plotted below was averaged across all simulation-iterations, but within each simulation-iteration it was also averaged across elements of the parameter vector. 
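For readers who want to run the recursion without pulling the full repository (linked in the next paragraph), here is a minimal NumPy sketch of one KF4LLS update and of the oracle interleaved loop. It follows the update equations above, with function and variable names of our own choosing, and assumes the blocks have already been centered as in Step 0(a).

```python
import numpy as np

def kf4lls_step(psi, H, X, y):
    """One KF4LLS update: H <- H + X^T X;  psi <- psi + H^{-1} X^T (y - X psi)."""
    H = H + X.T @ X
    psi = psi + np.linalg.solve(H, X.T @ (y - X @ psi))
    return psi, H

def interleaved_kf4lls(U_blocks, b_blocks, V_blocks, f_blocks, alpha):
    """Oracle Interleaved KF4LLS: alternate sqrt(alpha)-scaled 'bird' blocks
    with sqrt(1 - alpha)-scaled 'fish' blocks (Steps 1 through m)."""
    q = U_blocks[0].shape[1]
    psi, H = np.zeros(q), np.zeros((q, q))  # psi_0 arbitrary, H_0 = 0
    for U, b, V, f in zip(U_blocks, b_blocks, V_blocks, f_blocks):
        psi, H = kf4lls_step(psi, H, np.sqrt(alpha) * U, np.sqrt(alpha) * b)
        psi, H = kf4lls_step(psi, H, np.sqrt(1 - alpha) * V, np.sqrt(1 - alpha) * f)
    return psi
```

Evaluating the returned \(\boldsymbol{\psi}_{m}\) on penguin blocks built as \(\mathbf{Z}_{i}=\alpha\mathbf{U}_{i}+(1-\alpha)\mathbf{V}_{i}\) should reproduce the qualitative behavior of the blue curves in Figure 1.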
Codes used for this example with detailed comments are posted on the following GitHub page ([https://github.com/mjohn5/InterleavedKF4LLS/](https://github.com/mjohn5/InterleavedKF4LLS/)). The red line in Figure 1 corresponds to the scenario where only the \((\mathbf{b}_{i},\mathbf{U}_{i})\) blocks were used for training, and the orange line corresponds to the scenario where only the \((\mathbf{f}_{i},\mathbf{V}_{i})\) blocks were used for training. Since all the testing was done on \((\mathbf{p}_{i},\mathbf{Z}_{i})\) blocks, it is not surprising to see that the scenarios corresponding to the red and orange lines show substantial bias and MSE for all steps. The green line corresponds to the scenario at the other extreme, where training and testing were both done on the penguin data (i.e. \((\mathbf{p}_{i},\mathbf{Z}_{i})\)). Again, it is not surprising to see that the bias and MSE for this scenario are very close to zero. Blue lines correspond to the scenario with the Interleaved KF4LLS algorithm used for training and, as in all other cases, testing done on 'penguin' blocks. There are two blue lines in each panel, one starting with \((\mathbf{b}_{1},\mathbf{U}_{1})\) and the other starting with \((\mathbf{f}_{1},\mathbf{V}_{1})\); in both cases the algorithm alternates between the 'birds' and 'fish' data blocks. It is easy to see from the figure that, similar to the biological interleaved learning phenomenon, interleaving the training in this simple least squares setting leads to almost nil bias and MSE. The reduction in bias and MSE achieved by the Interleaved KF4LLS algorithm in a few steps is almost the same as that achieved by the algorithm that is trained exclusively with 'penguin' data. With this synthetic data example, it is also observed that the Interleaved KF4LLS algorithm achieves almost zero bias in just two steps, a phenomenon that has some theoretical justification (see below). Figure 1: Bias and MSE, averaged across 5000 simulation-iterations, of the interleaved KF4LLS algorithm applied on synthetic data. In all scenarios, MSE was calculated as the prediction error when the trained models were applied on 'penguin' test data. Red, orange and green lines correspond to training based on birds, fish and penguin data blocks, respectively, without interleaving. The blue lines correspond to training based on the interleaving algorithm, either starting with a bird data block or with a fish data block. **Some theoretical justification** Let \(\mathcal{F}_{2j}\) denote the 'history of the algorithm' up to and including the \(2j^{th}\) step, \(j=1,\ldots,k\). That is, \(\mathcal{F}_{2j}\) is the sigma-field generated by \(\mathbf{b}_{1},\mathbf{U}_{1},\ldots,\mathbf{b}_{j},\mathbf{U}_{j};\ \mathbf{f}_{1},\mathbf{V}_{1},\ldots,\mathbf{f}_{j},\mathbf{V}_{j}\). Then the following lemmas show that even with two steps the estimator obtained by the algorithm (i.e. \(\mathbf{\psi}_{2}\)) is a good approximation to the unknown parameter-vector that we are trying to estimate, namely, \(\mathbf{r}_{p}\). Thus, the following theory closely mirrors the result that we saw with synthetic data above. **Lemma-1**.: \[\mathbf{\psi}_{2}=(\mathbf{H}_{2}^{(\alpha)})^{-1}[(\mathbf{U}_{1}^{(\alpha)})^{t}\mathbf{b}_{1}^{(\alpha)}+(\mathbf{V}_{1}^{(\alpha)})^{t}\mathbf{f}_{1}^{(\alpha)}].\] (3) Hence, \[\mathbb{E}(\mathbf{\psi}_{2}/\mathcal{F}_{2})=\left[\alpha(\mathbf{U}_{1}^{t}\mathbf{U}_{1})+(1-\alpha)(\mathbf{V}_{1}^{t}\mathbf{V}_{1})\right]^{-1}\left[\alpha(\mathbf{U}_{1}^{t}\mathbf{U}_{1})\mathbf{r}_{b}+(1-\alpha)(\mathbf{V}_{1}^{t}\mathbf{V}_{1})\mathbf{r}_{f}\right]. \tag{4}\] **Proof of Lemma-1:** Adding \[\mathbf{H}_{1}^{(\alpha)}\mathbf{\psi}_{1}=\mathbf{H}_{1}^{(\alpha)}\mathbf{\psi}_{0}+(\mathbf{U}_{1}^{(\alpha)})^{t}\mathbf{b}_{1}^{(\alpha)}-(\mathbf{U}_{1}^{(\alpha)})^{t}\mathbf{U}_{1}^{(\alpha)}\mathbf{\psi}_{0}\] and \[\mathbf{H}_{2}^{(\alpha)}\mathbf{\psi}_{2}=\mathbf{H}_{2}^{(\alpha)}\mathbf{\psi}_{1}+(\mathbf{V}_{1}^{(\alpha)})^{t}\mathbf{f}_{1}^{(\alpha)}-(\mathbf{V}_{1}^{(\alpha)})^{t}\mathbf{V}_{1}^{(\alpha)}\mathbf{\psi}_{1}\] and cancelling terms, we get eq. (3). Eq. (4) follows from eq. (3) since \(\mathbb{E}(\mathbf{b}_{1}/\mathcal{F}_{2})=\mathbf{U}_{1}\mathbf{r}_{b}\) and \(\mathbb{E}(\mathbf{f}_{1}/\mathcal{F}_{2})=\mathbf{V}_{1}\mathbf{r}_{f}\). Also, as a side remark, the symmetry in the result above explains why it is irrelevant whether we start with \((\mathbf{b}_{1},\mathbf{U}_{1})\) or with \((\mathbf{f}_{1},\mathbf{V}_{1})\), as seen in the synthetic data example. The following lemma states that, up to a first-order approximation based on a Taylor series expansion, \(\mathbf{\psi}_{2}\) calculated in step-2 of the Interleaving KF4LLS algorithm is an unbiased estimator of \(\mathbf{r}_{p}\), if the columns of \(\mathbf{U}_{1}\) (and similarly the columns of \(\mathbf{V}_{1}\)) are (respectively) pairwise uncorrelated and with constant standard deviation. **Lemma-2**.: If \[n^{-1}\mathbb{E}(\mathbf{U}_{1}^{t}\mathbf{U}_{1})=n^{-1}\mathbb{E}(\mathbf{V}_{1}^{t}\mathbf{V}_{1})=\sigma I_{q\times q}, \tag{5}\] then up to a first-order approximation \[\mathbb{E}(\mathbf{\psi}_{2})\approx\alpha\mathbf{r}_{b}+(1-\alpha)\mathbf{r}_{f}=\mathbf{r}_{p}. \tag{6}\] **Proof of Lemma-2:** It is well-known that, using a first-order Taylor series approximation, the expectation of a ratio is approximately the ratio of the expectations. Taking expectations in eq. (4), applying the above-mentioned fact and using eq. (5), we get eq. (6). ## 4 Conclusions, Brief Discussion and Future Directions Interleaved learning is a learning technique observed in human brain areas such as the neocortex, which helps with long-term retention and, in general, better learning. Inspired by this biological phenomenon, machine learning algorithms have tried to incorporate interleaving while training models, especially complex neural network models. In this short note, we presented a simple statistical framework based on linear least squares to better understand computational interleaved learning. Our assumption in eq. (1), we think, makes intuitive sense. However, at first glance, it may seem that our assumption in eq. (2) on the weight parameters is a bit artificial or unrealistic from a real data perspective, especially since we use the same mixing coefficient \(\alpha\) as in eq. (1). We would like to point out that interleaving-based machine learning algorithms seen in the literature (see for example Ban and Xie, 2021) make the assumption that weight parameters are the same or similar across the 'learners'. 
For example, in Ban and Xie, 2021, the authors go even to the extent of incorporating a penalty term in the optimization program to ensure that the weight parameters do not vary across the learners, which is based on the implicit assumption that the true weight parameters in the underlying unknown model are the same across learners. Translating into our setting, we have two 'learners' corresponding to the data blocks from birds and fish. If we assume the weight parameters for these two learners to be the same (that is, \(\mathbf{r}_{b}=\mathbf{r}_{f}\)), then our assumption in eq. (2) forces \[\mathbf{r}_{p}=\mathbf{r}_{b}=\mathbf{r}_{f}. \tag{7}\] The theory that we presented showcasing the approximate unbiasedness of \(\boldsymbol{\psi}_{2}\) follows with eq. (7) as well, which is a special case of eq. (2). An important comment related to eq. (1) is that implicitly we assumed the dimensions of \(\mathbf{U}_{i}\)'s and \(\mathbf{V}_{i}\)'s to be the same, making our setting a bit restrictive. Also, although not explicitly stated, we think that in our set-up the corresponding columns of \(\mathbf{U}_{i}\)'s and \(\mathbf{V}_{i}\)'s have to be both numerical from a real data analysis perspective. If one or both of them are categorical, interpreting \(\mathbf{Z}_{i}\) based on eq. (1) may often be problematic in a practical setting. In our synthetic data example, we had the corresponding columns in \(\mathbf{U}_{i}\)'s and \(\mathbf{V}_{i}\)'s not only numerical but also from the same distribution with only the mean values different, which may make our example a bit limited. However, we note that some similarity between the corresponding columns in \(\mathbf{U}_{i}\)'s and \(\mathbf{V}_{i}\)'s is necessary for interleaving to be effective. It has been mentioned in previous literature (Saxena, Shobe and McNaughton, 2022; McClelland, McNaughton and Lampinen, 2020) that the biological brain interleaves only old items with substantial representational similarity to new items. Our simple framework based on linear least squares can probably be extended to logistic regression models or any generalized linear models and support vector machines as well, which we intend to pursue as future work. A framework like the one presented in this short note will also help with better understanding the convergence properties of interleaving algorithms. Future work will include stating and proving such theoretical properties as well. ## Acknowledgements We thank Professor Bruce McNaughton for inspiring us on this line of work. We also thank Rajat Saxena and Bruce McNaughton for sharing their draft review paper related to forward transfer in continual learning.
2309.09479
LogShrink: Effective Log Compression by Leveraging Commonality and Variability of Log Data
Log data is a crucial resource for recording system events and states during system execution. However, as systems grow in scale, log data generation has become increasingly explosive, leading to an expensive overhead on log storage, such as several petabytes per day in production. To address this issue, log compression has become a crucial task in reducing disk storage while allowing for further log analysis. Unfortunately, existing general-purpose and log-specific compression methods have been limited in their ability to utilize log data characteristics. To overcome these limitations, we conduct an empirical study and obtain three major observations on the characteristics of log data that can facilitate the log compression task. Based on these observations, we propose LogShrink, a novel and effective log compression method by leveraging commonality and variability of log data. An analyzer based on longest common subsequence and entropy techniques is proposed to identify the latent commonality and variability in log messages. The key idea behind this is that the commonality and variability can be exploited to shrink log data with a shorter representation. Besides, a clustering-based sequence sampler is introduced to accelerate the commonality and variability analyzer. The extensive experimental results demonstrate that LogShrink can exceed baselines in compression ratio by 16% to 356% on average while preserving a reasonable compression speed.
Xiaoyun Li, Hongyu Zhang, Van-Hoang Le, Pengfei Chen
2023-09-18T04:27:05Z
http://arxiv.org/abs/2309.09479v1
# LogShrink: Effective Log Compression by Leveraging Commonality and Variability of Log Data ###### Abstract. Log data is a crucial resource for recording system events and states during system execution. However, as systems grow in scale, log data generation has become increasingly explosive, leading to an expensive overhead on log storage, such as several petabytes per day in production. To address this issue, log compression has become a crucial task in reducing disk storage while allowing for further log analysis. Unfortunately, existing general-purpose and log-specific compression methods have been limited in their ability to utilize log data characteristics. To overcome these limitations, we conduct an empirical study and obtain three major observations on the characteristics of log data that can facilitate the log compression task. Based on these observations, we propose LogShrink, a novel and effective log compression method by leveraging commonality and variability of log data. An analyzer based on longest common subsequence and entropy techniques is proposed to identify the latent commonality and variability in log messages. The key idea behind this is that the commonality and variability can be exploited to shrink log data with a shorter representation. Besides, a clustering-based sequence sampler is introduced to accelerate the commonality and variability analyzer. The extensive experimental results demonstrate that LogShrink can exceed baselines in compression ratio by 16% to 356% on average while preserving a reasonable compression speed. Log Compression, Data Compression, Log Analysis, Clustering + Footnote †: Hongyu Zhang is the corresponding author.
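The abstract describes an analyzer that combines longest common subsequence and entropy techniques to separate the commonality of log messages from their variability. As a generic illustration of the first ingredient only (not LogShrink's actual implementation, whose source is not reproduced here), the sketch below recovers the shared template tokens of two hypothetical log messages with a token-level LCS.

```python
def lcs_tokens(a, b):
    """Token-level longest common subsequence via dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] \
                else max(dp[i][j + 1], dp[i + 1][j])
    # Backtrack to recover one LCS (the common part of the two messages).
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

# Two made-up HDFS-style messages: the LCS yields the common template tokens,
# while the leftover tokens (block ids, IPs) are the variable fields.
m1 = "Receiving block blk_1 src /10.0.0.1 dest /10.0.0.2".split()
m2 = "Receiving block blk_2 src /10.0.0.3 dest /10.0.0.4".split()
print(lcs_tokens(m1, m2))  # ['Receiving', 'block', 'src', 'dest']
```

Tokens outside the LCS are the variable fields whose entropy the analyzer would examine; storing the template once and the variable fields compactly is the "shorter representation" the abstract alludes to.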
2309.07332
Reliability-based cleaning of noisy training labels with inductive conformal prediction in multi-modal biomedical data mining
Accurately labeling biomedical data presents a challenge. Traditional semi-supervised learning methods often under-utilize available unlabeled data. To address this, we propose a novel reliability-based training data cleaning method employing inductive conformal prediction (ICP). This method capitalizes on a small set of accurately labeled training data and leverages ICP-calculated reliability metrics to rectify mislabeled data and outliers within vast quantities of noisy training data. The efficacy of the method is validated across three classification tasks within distinct modalities: filtering drug-induced-liver-injury (DILI) literature with title and abstract, predicting ICU admission of COVID-19 patients through CT radiomics and electronic health records, and subtyping breast cancer using RNA-sequencing data. Varying levels of noise to the training labels were introduced through label permutation. Results show significant enhancements in classification performance: accuracy enhancement in 86 out of 96 DILI experiments (up to 11.4%), AUROC and AUPRC enhancements in all 48 COVID-19 experiments (up to 23.8% and 69.8%), and accuracy and macro-average F1 score improvements in 47 out of 48 RNA-sequencing experiments (up to 74.6% and 89.0%). Our method offers the potential to substantially boost classification performance in multi-modal biomedical machine learning tasks. Importantly, it accomplishes this without necessitating an excessive volume of meticulously curated training data.
Xianghao Zhan, Qinmei Xu, Yuanning Zheng, Guangming Lu, Olivier Gevaert
2023-09-13T22:04:50Z
http://arxiv.org/abs/2309.07332v1
Reliability-based cleaning of noisy training labels with inductive conformal prediction in multi-modal biomedical data mining ###### Abstract Accurately labeling biomedical data presents a challenge. Traditional semi-supervised learning methods often under-utilize available unlabeled data. To address this, we propose a novel reliability-based training data cleaning method employing inductive conformal prediction (ICP). This method capitalizes on a small set of accurately labeled training data and leverages ICP-calculated reliability metrics to rectify mislabeled data and outliers within vast quantities of noisy training data. The efficacy of the method is validated across three classification tasks within distinct modalities: filtering drug-induced-liver-injury (DILI) literature with title and abstract, predicting ICU admission of COVID-19 patients through CT radiomics and electronic health records, and subtyping breast cancer using RNA-sequencing data. Varying levels of noise were introduced into the training labels through label permutation. Results show significant enhancements in classification performance: accuracy enhancement in 86 out of 96 DILI experiments (up to 11.4%), AUROC and AUPRC enhancements in all 48 COVID-19 experiments (up to 23.8% and 69.8%), and accuracy and macro-average F1 score improvements in 47 out of 48 RNA-sequencing experiments (up to 74.6% and 89.0%). Our method offers the potential to substantially boost classification performance in multi-modal biomedical machine learning tasks. Importantly, it accomplishes this without necessitating an excessive volume of meticulously curated training data. data labeling, radiomics, inductive conformal prediction, multimodal biomedical data, semi-supervised learning ## I Introduction Machine learning, particularly supervised learning, has contributed to many successful applications in the biomedical field, such as clinical decision support systems to predict sepsis based on electronic health records [1], to triage COVID-19 patients based on predicted needs of intensive care units and mechanical ventilation based on computed tomography (CT) radiomics and electronic health records [2], to predict patient prognosis from multi-omics data [3, 4], and information retrieval systems based on natural language processing (NLP) to rapidly extract structured clinical diagnosis codes and symptoms from free-text clinical notes [5]. However, the difficulty of obtaining accurately labeled, high-fidelity data in multi-modal biomedical machine learning has posed a significant challenge in developing effective models and algorithms. For instance, to create accurately labeled diagnoses and lesion segmentations, experienced radiologists have to manually process a large number of radiological images (e.g. X-ray, CT) [2, 6, 7]. This process typically involves a group of independent radiologists labeling randomized and blinded images, which can pose a huge burden on clinicians if they are required to provide labels with high confidence and fidelity. Similarly, to train an effective model for biomedical natural language processing, experts need to read through thousands of free-text notes to generate text classifications [8]. This text labeling process can take a long time, but without high-quality labels it can be difficult to train models. As a case in point, labeling noise in diagnostic codes has become a recognized issue in cardiovascular patients' electronic health records [5, 9, 10, 11]. 
To address the scarcity of well-curated labels in biomedical data, weakly supervised learning and semi-supervised learning have been proposed. Weakly supervised learning leverages noisy labels generated by rule-based weak labeling and benefits from the information in multiple noisy labels, which has been shown to be effective in multiple tasks [12, 13, 14, 15]. Semi-supervised learning generally uses both labeled data and unlabeled data [16, 17, 18, 19, 20]: it leverages the distributional information in the labeled data and uses the unlabeled data to boost the model performance [16, 21]. Both mechanisms have been shown to be successful in multiple applications, but there are still limitations. The weakly supervised learning mechanism may not fully benefit from the ground-truth labels made by clinicians or radiologists with high confidence. It also requires an additional step of weak labeling and generation of noisy labels, which can depend on expert knowledge that may not be generalizable across different research fields [22]. It might also be challenging to determine what the incomplete or weakly labeled data represents, and this ambiguity can lead to inaccurate learning [23]. When the labeling process is weak, it may also be difficult to measure the performance of a model accurately; the model evaluation metrics might be unreliable [24]. As an example, for diagnostic code extraction from clinical notes, the weak labeling process in weakly supervised learning still takes time and needs some expert knowledge to design rule-based information extraction systems based on regular expressions. Traditional semi-supervised learning, on the other hand, may lose some of the information in the data that are deemed unlabeled in the unsupervised learning part. For example, radiologists may have knowledge of how to label a portion of unlabeled X-ray images, but the confidence might be low. In this case, if the data are deemed labeled data for the supervised learning part, they contain noise in the labels. On the contrary, if they are deemed unlabeled data, the radiologists' understanding and prior knowledge are not fully made use of. However, even though there could be labeling noise in the labels given by the radiologists (i.e. wrongly labeled data), the original labels may still contain information. Therefore, regarding them either as definitely correct reference labels or as absolutely unlabeled data may not be optimal. Additionally, semi-supervised learning often makes the assumption that the unlabeled data follow a similar distribution to the labeled data (e.g., the labeled data and unlabeled data cover similar space on a manifold or Euclidean feature space). If this assumption is incorrect, the model's performance may be degraded [24]. The complexity of weakly supervised learning, and the under-utilization of noisy labels and the strict distribution assumptions in traditional semi-supervised learning, lead us to ask whether we can further benefit from a small portion of well-curated, high-quality data while also using a large portion of labeled data with noise. If we can accurately label certain portions of training data, can we use this information to clean the other part of potentially noisy data? If we can correct noisy training data, can we further improve the performance in supervised learning tasks? One method to curate the noisily labeled data is to use the reliability quantified by conformal prediction [25, 26]. 
Based on a weak assumption of independently and identically distributed (i.i.d.) data, conformal prediction quantifies the reliability of a particular combination of feature and label based on the calibration of the nonconformity measure [25, 26, 27]. In previous studies, conformal prediction has been shown effective in reliability-based unlabeled data augmentation tasks [28, 29, 30]. In these studies, researchers start with a labeled training data set and train conformal predictors to filter the unlabeled data. Researchers attach pseudo-labels to the unlabeled data based on certain models, and remove the unreliable pseudo-labeled data. The augmented training data set can lead to statistically significantly better and more robust classification performance. The reliability-based unlabeled data augmentation method also showed effectiveness in improving model generalizability under domain drifts between the training and validation/test datasets [28, 31]. The "training data augmentation" idea in these works inspired us to think in the opposite direction with the idea of "trimming training data": can we use the inductive conformal prediction (ICP) framework to clean the noisy training data, to improve the training data quality, and thus to improve the classification performance in the downstream tasks? The setup of the ICP framework corresponds naturally to the scenarios we face: it requires a small portion of calibration data and a large portion of proper training data [25, 26, 28]. The latter is used to train a conformal predictor to compute the nonconformity measure, while the former is used to calibrate the reliability of a particular combination of a feature and a label based on the reference distribution of the nonconformity measure on the calibration set. To leverage the ICP, we can deem the well-curated labeled training data as the calibration set while using the large portion of remaining data with unknown label quality as the proper training set. Instead of the typical usage of ICP to quantify the test samples' reliability, we use the ICP to quantify the proper training samples' reliability in a retrospective manner and detect whether the original labels are of high reliability based on the P-values, or, more specifically, how well the original label conforms to the reference distribution of the nonconformity measure on the calibration set and whether there is another label showing better conformity. In this study, we propose a reliability-based training data cleaning method based on ICP to detect potentially wrongly labeled data (another label is more likely to be correct) and outliers (out-of-distribution data for which no label is likely to be correct) and make corrections based on rules (Methods Section 2, 3). We validate the effectiveness of the method on three biomedical machine learning classification tasks with different modalities: a clinical NLP task, a combined radiomics and electronic health record task using quantitative image features extracted from radiographic images, and an RNA-sequencing task (see Methods Section 1). We manually pollute the training data to simulate different levels of labeling noise by permuting different percentages of labels. The classification performance metrics are evaluated with and without training data cleaning (see Methods Section 4). 
Additionally, the mechanisms of the cleaning method were investigated by visualizing the cleaning processes: how many corrections of wrongly labeled data and how many removals of outliers were made by the method under different hyper-parameters. ## II Methods ### _Datasets and task description_ To investigate a broad range of medical data modalities and different scales of feature dimensionality, in this study we tested the reliability-based training data cleaning method based on inductive conformal prediction on three classification tasks: 1) a natural language processing task: filter drug-induced liver injury (DILI) literature based on word2vec (W2V) and sent2vec (S2V) embeddings [8]; 2) an imaging and electronic health record task: predict whether a COVID-19 patient in the general ward will be admitted to the intensive care unit (ICU) [2]; 3) an RNA-sequencing (RNA-seq) task: classify breast cancer subtypes based on The Cancer Genome Atlas Program (TCGA) RNA-seq dataset [32]. The details of these datasets are introduced in Table I and the following paragraphs. The DILI dataset was released by the Annual International Conference on Critical Assessment of Massive Data Analysis (CAMDA 2021), with the DILI-positive samples (7,177) and DILI-negative samples (7,026) curated by FDA experts. The data involve both the title and abstract of publications, and the task is to predict whether a publication contains DILI information. The detailed pre-processing of the text can be found in the previous publication [8]: the text was lowercased, with additional removal of punctuation, numeric and special characters, multiple white spaces, and stop words, and the text was finally tokenized with the Gensim library on Python 3.7 [33]. Then, we generated the text embeddings based on the biomedical sent2vec (S2V) model [34] and the biomedical word2vec (W2V) model [35], because they have shown good classification performances on this challenge in previous studies with logistic regression classifiers, and these two text vectorizations generate text embeddings with low feature dimensionality (700 for S2V and 200 for W2V) [8]. COVID-19 data (n=2,113) in this study were collected from 40 hospitals in China from December 27, 2019 to March 31, 2020 [2]. Patient selection followed the inclusion criteria: (a) RT-PCR confirmed positive severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) nucleic acid test; (b) baseline chest CT examinations and laboratory tests on admission; (c) short-term prognosis information (discharge or admission to ICU). Data for each patient included: 1) Clinical data based on electronic health records (EHR): (a) demographics: age and gender; (b) comorbidities: coronary heart disease, diabetes, hypertension, chronic obstructive lung disease (COPD), chronic liver disease, chronic kidney disease, and carcinoma; (c) clinical symptoms: fever, cough, myalgia, fatigue, headache, nausea or vomiting, diarrhea, abdominal pain, and dyspnea on admission. 
2) Lab data based on laboratory tests: (a) blood routine: white blood cell (WBC) count (\(\times 10^{9}/L\)), neutrophil count (\(\times 10^{9}/L\)), lymphocyte count (\(\times 10^{9}/L\)), platelet count (\(\times 10^{9}/L\)), and hemoglobin (\(g/L\)); (b) coagulation function: prothrombin time (PT) (\(s\)), activated partial thromboplastin time (aPTT) (\(s\)), and D-dimer (\(mg/L\)); (c) blood biochemistry: albumin (\(g/L\)), alanine aminotransferase (ALT) (\(U/L\)), aspartate aminotransferase (AST) (\(U/L\)), total bilirubin (\(mmol/L\)), serum potassium (\(mmol/L\)), sodium (\(mmol/L\)), creatinine (\(\mu mol/L\)), creatine kinase (CK) (\(U/L\)), lactate dehydrogenase (LDH) (\(U/L\)), and \(\alpha\)-hydroxybutyrate dehydrogenase (HBDH) (\(U/L\)); (d) infection-related biomarkers: C-reactive protein (CRP) (\(mg/L\)). 3) Radiomics data based on CT imaging: a commercial deep-learning AI system (Beijing Deepwise & League of PhD Technology Co. Ltd) was first used to detect and segment the pneumonia lesions, and two radiologists checked the results of the automatic segmentation. Then, pyradiomics (v3.0) running on the Linux platform was adopted to extract radiomic features (1,652 features per lesion). Next, for a given patient and for each radiomic feature, we summarized the distribution of the feature values across all the lesions for the patient by several summary statistics (mean, median, standard deviation, skewness, the first quartile, the third quartile) and the number of lesions. Finally, a total of 9,913 quantitative radiomic features were extracted from CT images for each patient. Detailed data collection and preprocessing are shown in the supplementary materials. RNA-seq data of the TCGA breast cancer cohort [32] were obtained from the UCSC Xena browser [36]. Gene expression values were normalized using the Fragments Per Kilobase of transcript per Million mapped reads (FPKM) method. The dataset included 1,096 patients, and the molecular subtypes were labeled using the methods described previously [37]. The number of patients in each subtype was: LumA (n = 569), LumB (n = 219), Basal (n = 187), HER2 (n = 82), and normal-like (n = 39). The feature dimensionality is 31,098.

### _Inductive conformal prediction_

#### II-B1 General Pipeline of Inductive Conformal Prediction

Inductive Conformal Prediction (ICP) is a computational framework that operates under the assumption of identically and independently distributed data. For comprehensive discussions on conformal predictors, readers are directed to past works [25, 26, 30]. To succinctly describe the ICP framework, it begins with splitting the training data into a proper training set and a calibration set. Nonconformity measures are then derived from the proper training set using specific heuristic rules or algorithms, such as the conditional probability from a classification model as in Conformal Prediction with Shrunken Centroids (CPSC) [29], or the ratio of cumulative distances between dissimilar and similar samples as in Conformal Prediction with k-nearest neighbors (CPKNN) [27, 30]. Following this, nonconformity measures for the calibration and test data are computed and utilized as calibration statistics. For the calibration set, these nonconformity measures are computed using both features and ground-truth labels. In contrast, for the test data, the nonconformity measures are computed for every possible label within the label space in order to quantify and compare the conformity of each possible label.
ICP then uses the empirical distribution of the nonconformity measures on the calibration set as a reference, assessing at which percentile the test sample-label combination's nonconformity measure falls. Based on this percentile, ICP calculates the P-value, which indicates the degree of conformity of a specific feature-label combination to the underlying data distribution, i.e., how well the test feature-label combination conforms to the distribution of the nonconformity measure on the calibration set. This enables the quantification of prediction reliability by considering the conformity of the most likely label to the training data distribution. The ICP framework used in this study is visualized in the top-left section of Fig. 1, where the entire training set with \(b\) samples is divided into a proper training set \(\{X_{1},X_{2},X_{3},\ldots,X_{a}\}\) and a calibration set \(\{X_{a+1},X_{a+2},X_{a+3},\ldots,X_{b}\}\left(a<b\right)\). Based on a nonconformity measure algorithm trained on the proper training set, the nonconformity measures of the calibration set and the proper training set (\(A\_Cal\) and \(A\_Pro\)) are calculated. The P-values, which are the calibrated results reflecting the conformity of a feature-label combination, are then calculated for all possible labels of each sample in the proper training set; the P-values indicate the reliability of a proper training sample when attached to every possible label, as judged by the nonconformity measure distribution on the calibration set. In this study, to clean the training data with noisy labels, the proper training set represents the majority of the labeled data but may be contaminated with outliers and wrongly labeled data unknown to the users, while the calibration set is the smaller portion of labeled data with well-curated labels. It should be mentioned that, different from the typical use case of ICP, where the calibration set is used to compute the P-values for the test set, in this study we compute the P-values for the samples in the proper training set and then leverage the P-values for training data cleaning. The nonconformity measure for the \(i\)-th sample with a supposed label \(y\), expressed as \(\alpha_{i}^{y}\), is determined via the CPSC algorithm, which we cover in the following subsection. The computation of the P-value for a given sample \(x_{i}\) proceeds as outlined below. In this context, \(p_{i}^{y}\) signifies the P-value of \(x_{i}\) for a possible label \(y\) within the label space. Similarly, \(\alpha_{j}^{y}\) represents the nonconformity measure for the \(j\)-th sample, associated with the label \(y\), in the calibration set. Lastly, \(\alpha_{i}^{y}\) stands for the nonconformity measure for a potential label \(y\) tied to the \(i\)-th sample in the proper training set. Here, Laplace smoothing is applied.
\[p_{i}^{y}=\frac{\left|\left\{j\in\{a+1,\ldots,b\}:\alpha_{j}^{y}\geq\alpha_{i}^{y}\right\}\right|+1}{b-a+1} \tag{1}\]

#### II-B2 Nonconformity Measure Algorithm with Shrunken Centroids

In this study, we applied our previously developed CPSC algorithm, with the shrunken centroids (SC) as the basis for the nonconformity measure, because it has shown higher computational efficiency and effectiveness in reliability quantification when compared with the conventional conformal prediction with k-nearest neighbors algorithm (CPKNN) and several other conformal predictors [29], and its effectiveness in quantifying reliability has been validated on both the DILI dataset [8] and the COVID-19 patient ICU admission prediction task [38]. The computation follows steps similar to those of the shrunken centroids algorithm [39] (the dimensionality of the feature space is denoted as \(D\)): In the feature space, we first calculate the class centroids \(\bar{x}_{m}\in\mathrm{I\!R}^{\mathrm{D}}\) for classes \(m=1,2,\ldots,M\) and the overall centroid \(\mu\). \(C_{m}\) refers to the set of samples in class \(m\), \(n_{m}\) denotes the total number of samples within class \(m\), and \(x_{jm}\) denotes the \(j\)-th sample of class \(m\).

\[\bar{x}_{m}=\sum_{x_{jm}\in C_{m}}\frac{x_{jm}}{n_{m}} \tag{2}\]

\[\mu=\sum_{i=1}^{n}\frac{x_{i}}{n} \tag{3}\]

Then, the pooled standard deviation is computed, and the contrasts between the class centroids and the overall centroid are normalized using the pooled standard deviation:

\[\sigma^{2}=\frac{1}{n-M}\sum_{m=1}^{M}\sum_{x_{j}\in C_{m}}\left(x_{j}-\bar{x}_{m}\right)^{2} \tag{4}\]

\[d_{m}=\left(\bar{x}_{m}-\mu\right)/\sigma \tag{5}\]

Next, these contrasts are shrunken towards the overall centroid with a soft threshold symbolized by \(\Delta\), which is regarded as a hyperparameter in this study (6):

\[d_{m}^{\prime}=\mathrm{sign}\left(d_{m}\right)\left(\left|d_{m}\right|-\Delta\right)_{+},\qquad h_{+}=\left\{\begin{array}{cc}h&h>0\\ 0&h\leq 0\end{array}\right. \tag{6}\]

The impact of regularization is governed by the threshold parameter \(\Delta\). If the absolute value of \(d_{km}\), denoting the contrast of the \(k\)-th feature for class \(m\), is less than the threshold \(\Delta\), it is concluded that the related feature lacks sufficient discriminatory power for classification. Consequently, the shrunken contrast for this feature is reduced to zero, thus discarding the feature considered as non-contributory and diminishing the dimensionality of the data. Following this, the class centroids can be recalibrated taking into account the shrunken contrasts (7).

\[\bar{x}_{m}^{\prime}=\mu+\sigma\cdot d_{m}^{\prime} \tag{7}\]

Then, in the feature space, the discriminatory score of a new sample \(x^{*}\) can be computed against the shrunken centroids for each class:

\[\delta_{m}\left(x^{*}\right)=\log\pi_{m}-\frac{1}{2}\sum_{k=1}^{\mathrm{D}}\frac{\left(x_{k}^{*}-\bar{x}_{km}^{\prime}\right)^{2}}{\sigma_{k}^{2}} \tag{8}\]

where \(x_{k}^{*}\) denotes the \(k\)-th feature of the new sample \(x^{*}\), \(\bar{x}_{km}^{\prime}\) denotes the \(k\)-th value of the shrunken centroid of class \(m\), and \(\sigma_{k}\) is the \(k\)-th component of the pooled standard deviation. The discriminatory score \(\delta_{m}\left(x^{*}\right)\) quantifies the proximity of a new sample \(x^{*}\) to the \(m\)-th shrunken centroid; in other words, it represents the unnormalized log probability of \(x^{*}\) being part of class \(m\).
This resulting score is constituted by two components: the first term, \(\log\pi_{m}\), indicates the prior probability of class \(m\), determined by the frequency of samples in class \(m\) amidst all observations. The second term constitutes the standardized squared distance between the \(m\)-th centroid and the new sample. Consequently, the unnormalized log probability of a specified class \(m\), \(P(Y=m|X)\), is influenced by both the initial class distribution and the sample's closeness to the various centroids. The probability of the new sample \(x^{*}\) belonging to class \(m\) can then be modeled with the discriminatory scores for all classes:

\[\hat{p}\left(m\mid x^{*}\right)=\frac{e^{\delta_{m}\left(x^{*}\right)/T}}{\sum_{l=1}^{M}e^{\delta_{l}\left(x^{*}\right)/T}} \tag{9}\]

In order to obtain a normalized probability distribution (one that spans from 0 to 1 and whose elements sum to 1), the softmax function is employed. This transforms unnormalized log probabilities from arbitrary real numbers into normalized probabilities. It operates on \(\delta_{m}\left(x^{*}\right)\) in a manner comparable to its handling of logits in neural networks [40]. Moreover, a scaling factor \(T\) (the 'temperature') is implemented to diminish the magnitude of \(\delta_{m}(x^{*})\) with the aim of rendering the predicted probability distribution more uniform or "softer", while conserving the relative probability ranks across the classes. The temperature hyperparameter \(T\) is adopted from the softmax function used in knowledge distillation. This typically results in a more evenly spread distribution across the labels, retaining the information of less probable labels while also mitigating overfitting to some degree [41]. Simultaneously, as per (8), \(\delta_{m}\left(x^{*}\right)\) is invariably negative. This could lead to the term \(e^{\delta_{m}\left(x^{*}\right)}\) in the softmax function becoming exceedingly small and potentially causing numerical instability. This risk is particularly pronounced when handling high-dimensional data, as crucial probability information might be lost in these spiky conditional probabilities. By scaling the original \(\delta_{m}\left(x^{*}\right)\) with \(T\), the value of the exponential term in the softmax function increases, leading to greater information retention. To mitigate the risks of overfitting and numerical instability, \(T\) is tuned as a hyperparameter in this study. Finally, we convert the predicted probability to a nonconformity measure in the ICP framework based on (10). Here, we applied a design of the nonconformity measure \(\alpha_{j}^{y_{i}}\) that has been validated in multiple machine learning applications [31, 38, 8]:

\[\alpha_{j}^{y_{i}}=0.5-\frac{\hat{p}\left(y_{i}\mid x_{j}\right)-\max_{y\neq y_{i}}\hat{p}\left(y\mid x_{j}\right)}{2} \tag{10}\]
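To make the pipeline concrete, the following is a minimal NumPy sketch of the CPSC nonconformity measure (Eqs. (2)-(10)) and the ICP P-value computation (Eq. (1)). It is an illustrative reconstruction rather than the released implementation: the function names, the guard against zero pooled variance, and the choice to rank each score against the pooled calibration scores are our assumptions.

```python
import numpy as np

def fit_shrunken_centroids(X, y, delta):
    """Fit shrunken class centroids on the proper training set (Eqs. 2-7).
    X: (n, D) features; y: (n,) integer labels 0..M-1."""
    classes = np.unique(y)
    n, D = X.shape
    mu = X.mean(axis=0)                                              # overall centroid (Eq. 3)
    centroids = np.stack([X[y == m].mean(axis=0) for m in classes])  # class centroids (Eq. 2)
    ss = sum(((X[y == m] - X[y == m].mean(axis=0)) ** 2).sum(axis=0) for m in classes)
    sigma = np.sqrt(ss / (n - len(classes)))                         # pooled std (Eq. 4)
    sigma = np.maximum(sigma, 1e-12)                                 # guard against zero variance
    d = (centroids - mu) / sigma                                     # standardized contrasts (Eq. 5)
    d_shrunk = np.sign(d) * np.maximum(np.abs(d) - delta, 0.0)       # soft threshold (Eq. 6)
    shrunk_centroids = mu + sigma * d_shrunk                         # shrunken centroids (Eq. 7)
    priors = np.array([(y == m).mean() for m in classes])
    return shrunk_centroids, sigma, priors

def class_probabilities(X, shrunk_centroids, sigma, priors, T=1.0):
    """Discriminatory scores (Eq. 8) through a temperature-scaled softmax (Eq. 9)."""
    sq = (((X[:, None, :] - shrunk_centroids[None, :, :]) / sigma) ** 2).sum(axis=-1)
    scores = np.log(priors)[None, :] - 0.5 * sq                      # delta_m(x*)
    z = scores / T
    z -= z.max(axis=1, keepdims=True)                                # softmax stabilization
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nonconformity(probs, labels):
    """Eq. 10: alpha = 0.5 - (p(y|x) - max_{y' != y} p(y'|x)) / 2."""
    idx = np.arange(len(labels))
    p_true = probs[idx, labels]
    masked = probs.copy()
    masked[idx, labels] = -np.inf                                    # exclude the candidate label
    return 0.5 - (p_true - masked.max(axis=1)) / 2.0

def p_values(alpha_cal, alpha_query):
    """Eq. 1 with Laplace smoothing: rank each query score against the calibration scores."""
    return np.array([(np.sum(alpha_cal >= a) + 1.0) / (len(alpha_cal) + 1.0)
                     for a in alpha_query])
```

In use, `p_values` would be evaluated once per candidate label for every proper-training sample, yielding the P-value matrix that the cleaning rules in the next subsection operate on.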
### _Training data cleaning methods based on inductive conformal prediction_

In this study, the cleaning method is visualized in Fig. 1. Upon partitioning the entire dataset into training (60%), validation (20%) and test (20%) sets, we partition our training data into a proper training set with unknown label quality (80% of the training samples) and a well-curated calibration set (20% of the training samples). Here, to simulate real-world scenarios in biomedical machine learning applications, we let the proper training set represent the large portion of training samples with unknown label quality, while the calibration set represents the small portion of training data with clean labels. As introduced in the previous subsection, ICP can be used to quantify the conformity of a particular feature-label combination based on the well-curated calibration set. In this study, instead of quantifying the conformity for samples in the test set, we quantify the conformity of the noisy data in the proper training set against the well-curated calibration set. Then, the wrongly labeled data and outliers can be detected and corrected based on the following rules (shown in the bottom-left section of Fig. 1; a code sketch of these rules appears below):

* Wrongly labeled data detection: if the original label of a training data point has a P-value smaller than that of another possible label by a large margin (abbreviated as "detection threshold" in the following sections: 0.8/0.5/0.2), the training data point is deemed wrongly labeled, because there is a label that shows better conformity as evaluated by the well-curated calibration set;
* Wrongly labeled data correction: the label with the highest P-value replaces the original label of the training data point;
* Outlier detection: if all of the possible labels have P-values smaller than 0.1, the training data point is deemed an outlier, since all labels show low conformity to the well-curated calibration set;
* Outlier correction: the training data point is removed.

Upon detecting the wrongly labeled data and outliers, corrections are made accordingly. The cleaned proper training set is then combined with the calibration set to form the cleaned training set, which is used for the downstream prediction tasks (right half of Fig. 1). For the downstream tasks, we used linear discriminant analysis (LDA) and logistic regression (LR) as two representative classifiers. The classifier hyperparameters are fixed at the default values in the scikit-learn packages, because this study focuses on investigating the influence of the training data cleaning process and whether cleaner training data can lead to better classification performance. To investigate how the thresholds to detect wrongly labeled data affect the performance, we ran experiments with three detection thresholds, 0.2, 0.5, and 0.8, and report all of the results. For the DILI prediction task, the cleaning method is applied exactly as in Fig. 1. For the COVID-19 prediction task, before any classification modeling (both in the training of CPSC and the downstream LDA/LR classifiers), the training data were augmented with the synthetic minority oversampling technique (SMOTE) to up-sample the positive cases and balance the two classes [2]. Additionally, for the COVID-19 prediction task and the TCGA breast cancer subtype prediction task, Lasso feature selection was performed before any classification modeling to deal with the high feature dimensionality. The hyperparameters of CPSC (\(\Delta\) and \(T\)), which is the core algorithm of the cleaning process, as well as the strength of the L1 penalty in Lasso feature selection (\(C\)), were tuned based on the performance on the validation dataset.
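The sketch below is our reading of the four rules listed above, assuming `P` is the matrix of P-values over candidate labels produced as in the previous sketch; applying the outlier rule before the label-correction rule is an ordering the text does not pin down.

```python
import numpy as np

def clean_proper_training_set(P, y, detection_threshold=0.5, outlier_threshold=0.1):
    """P: (n, M) P-values for every candidate label of each proper-training sample.
    y: (n,) original, possibly noisy, integer labels.
    Returns corrected labels and a boolean mask of samples to keep."""
    n = len(y)
    idx = np.arange(n)
    y_clean = y.copy()
    # outlier rule: every candidate label has a P-value below the outlier threshold
    keep = P.max(axis=1) >= outlier_threshold
    # wrong-label rule: another label beats the original label's P-value by a large margin
    best = P.argmax(axis=1)
    wrong = (P[idx, best] - P[idx, y]) > detection_threshold
    fix = wrong & keep
    y_clean[fix] = best[fix]  # correction: adopt the label with the highest P-value
    return y_clean, keep
```

The cleaned proper training set (`y_clean` restricted to `keep`) is then concatenated with the calibration set before fitting the downstream LDA/LR classifiers.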
For the three tasks, the metrics used to guide the hyperparameter tuning were: classification accuracy for the DILI dataset, the sum of AUROC and AUPRC for the COVID-19 dataset, and the sum of classification accuracy and macro-averaged F1 score for the TCGA dataset. The range of the hyperparameters was: 1) in CPSC: \(\Delta\): [0, 0.1, 0.2, 0.3], \(T\): [1, 10, 100]; 2) in Lasso feature selection: \(C\): [0.05, 0.1, 1, 10]. To simulate different levels of noise in the training data in real-world applications, we artificially permuted 0 to 80% of the training labels in the proper training set (increment: 10%). As a result, there were a total of 54 scenarios for each classification task: 3 different detection thresholds, 9 different levels of noise, and 2 different classifiers. 48 scenarios had known labeling noise caused by manual label permutation, and 6 scenarios used the original training data.

### _Model performance evaluation_

To evaluate the performance of the training data cleaning method, we investigate both the effectiveness and the cleaning processes of the method. The effectiveness of the training data cleaning method is quantified by the improvement of classification metrics on the three datasets. On the DILI literature classification task, the classification accuracy was used as the metric, considering the balanced classes in the dataset; on the COVID-19 ICU admission prediction task, the AUROC and AUPRC were used as the metrics due to the imbalanced dataset with much fewer positive cases; on the TCGA breast cancer subtype prediction task, the multi-class classification accuracy and macro-averaged F1 score were used as the metrics. To analyze how the training data cleaning method works and investigate the dynamics of the method, the cleaning processes were visualized. We counted and plotted the total number of corrections for wrongly labeled data detected by the method, the number of wrongly labeled data for each class, as well as the number of outliers detected by the method. Additionally, using the well-curated DILI dataset as an example, we investigated the correctness of the cleaning processes by counting and plotting the total number of wrongly labeled data detected (same as the total number of corrections), the total number of truly wrongly labeled data, and the number of wrongly labeled data remaining after correction. How the cleaning processes vary with the detection thresholds for wrongly labeled data was also investigated. Finally, we investigated how the cleaning processes vary with the hyperparameters of CPSC (\(\Delta\) and \(T\)), which is the core algorithm of the training data cleaning method; the results are shown in the supplementary materials.

### _Statistical tests_

To test the robustness of the experiments, we performed 30 random dataset partitions and repeated each of the 54 scenarios 30 times. The mean value and 95% confidence interval (CI) of the classification performance metrics are reported in the Results section. Paired t-tests were performed to test the statistical significance of the differences in the metrics with/without training data cleaning.
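As an illustration of this protocol, the sketch below simulates the labeling noise and runs the paired significance test. "Permuting" labels is read here as shuffling the labels of a randomly chosen fraction of proper-training samples among themselves; the per-repetition metric arrays are placeholders to be filled by the actual experiment.

```python
import numpy as np
from scipy.stats import ttest_rel

def permute_labels(y, fraction, rng):
    """Shuffle the labels of a random `fraction` of samples among themselves."""
    y_noisy = np.asarray(y).copy()
    k = int(round(fraction * len(y_noisy)))
    idx = rng.choice(len(y_noisy), size=k, replace=False)
    y_noisy[idx] = y_noisy[rng.permutation(idx)]
    return y_noisy

rng = np.random.default_rng(seed=0)

# one metric value per repeated random partition (30 repetitions),
# to be filled in by running the pipeline with and without cleaning
metric_without_cleaning = np.zeros(30)
metric_with_cleaning = np.zeros(30)

t_stat, p_value = ttest_rel(metric_with_cleaning, metric_without_cleaning)
```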
## III Results

### _Effectiveness of training data cleaning on the drug-induced liver injury literature filtering task_

The effectiveness of inductive conformal prediction in training data cleaning was first evaluated on the DILI classification task: to predict whether a publication has drug-induced liver injury content or not based on the free-text title and abstract.

Fig. 1: The pipeline of the design of the reliability-based training data cleaning method based on inductive conformal prediction and the validation process. The training data cleaning method based on conformal prediction is shown on the left half of the figure, while the modeling of the downstream classification tasks and the evaluation on the validation and test sets are shown on the right half. Based on the standard ICP method, the training dataset is partitioned into a proper training set and a calibration set. The proper training set is used to represent the noisy training data, and the calibration set represents the well-curated dataset. Potential wrongly labeled data and outliers in the proper training set are detected and corrected based on the P-values calibrated on the nonconformity measure distribution on the calibration set. The cleaned training set is then used to train classifiers for downstream classification tasks and compared against the baseline.

For this task, we considered two types of word embeddings, and the results are respectively shown in Fig. 2 (S2V embeddings) and Fig. 3 (W2V embeddings). Each type of text embedding was tested in 54 scenarios, encompassing two classifiers (LR, LDA), three detection thresholds for wrongly labeled data (0.8, 0.5, 0.2) and nine levels of training label permutation from 0 to 80% (increment of 10%). As the percentage of permuted training labels increased, the classification accuracy decreased. In scenarios with no permuted training labels (six scenarios in total), the cleaning process did not yield significant improvements. This outcome can be attributed to the DILI dataset being well curated by FDA experts, suggesting that the original training labels contained minimal noise requiring correction through the cleaning method. With the S2V embeddings, when the percentage of labels permuted is larger than zero, with the conformal-prediction-based training data cleaning method, the classification accuracy for both LR and LDA models is statistically significantly better in most scenarios on the test set (\(p<0.05\) in 47 out of 48 scenarios with permuted training labels). A higher detection threshold for the wrongly labeled data (a stricter strategy to correct the wrongly labeled data from the perspective of the conformal predictor) also leads to a more evident accuracy improvement when the noise in the training data is low (percentage of labels permuted lower than 0.3), while a lower detection threshold can lead to a larger accuracy improvement when the training labels become noisier (percentage of labels permuted larger than 0.4). The largest improvement in test accuracy (in terms of the absolute accuracy increment) was in the scenario with 80% labels permuted and a detection threshold of 0.2 with the LDA classifier: from 0.8120 to 0.9048 (11.4%). With the W2V embeddings, similar results were observed: the accuracy was significantly better in most scenarios on the test set (\(p<0.05\) in 39 out of 48 scenarios with permuted training labels). The effect of the detection threshold for the wrongly labeled data is similar to that shown with the S2V embeddings. What is more evident is that with a detection threshold of 0.2, the cleaning method cannot guarantee model accuracy improvement when the training data is less noisy (i.e., when the percentage of labels permuted is lower than 0.4).
The largest improvement in test accuracy (in terms of the absolute accuracy increment) was in the scenario with 80% labels permuted and a detection threshold of 0.2 with the LR classifier: from 0.8747 to 0.9019 (3.1%).

### _Effectiveness of training data cleaning on the COVID-19 ICU admission prediction task_

In the COVID-19 ICU admission prediction task (predicting whether a COVID-19 patient admitted to the general ward will be admitted to the ICU based on the fusion of radiomics data, clinical data and laboratory data), the model performance in AUROC and AUPRC is shown in Fig. 4 and 5 (as examples with detection thresholds of 0.8 and 0.5 for wrongly labeled data; the results for the detection threshold of 0.2 are shown in the supplementary materials). Different from the well-curated DILI dataset, even without artificial label permutation, the original proper training dataset in the COVID-19 task may contain noise after the SMOTE up-sampling of the minority class. The results with a detection threshold of 0.8 (Fig. 4) show that even without any artificial label permutation (the percentage of labels permuted is 0), the AUROC and AUPRC can be significantly improved (\(p<0.001\) for LDA on both validation and test sets, \(p<0.01\) for LR on the validation set). With an increasing percentage of labels permuted, the conformal-prediction-based training data cleaning method shows its effectiveness in improving the AUROC and AUPRC in all scenarios with manual label pollution (n = 48) on the test set (\(p<0.05\) for both AUROC and AUPRC in all scenarios with label permutations). Similar results were shown with a detection threshold of 0.5 (Fig. 5): significantly improved AUROC and AUPRC with LDA models without any label permutation (\(p<0.001\)), and significantly improved AUROC and AUPRC with label permutation (\(p<0.05\) in all scenarios except for the AUPRC with LR with 10% label permutation) on the test set. The largest improvement in test AUROC (in terms of the absolute increment) was in the scenario with 80% labels permuted and a detection threshold of 0.2 with the LDA classifier: from 0.5967 to 0.7389 (23.8%), while the largest improvement in test AUPRC was in the scenario with 50% labels permuted and a detection threshold of 0.2 with the LDA classifier: from 0.1829 to 0.3106 (69.8%).

### _Effectiveness of training data cleaning on the breast cancer subtype prediction task_

We next evaluated the effectiveness of the reliability-based training data cleaning method in classifying the molecular subtypes of breast cancer using RNA-seq data of the TCGA cohort (n = 1,096 patients) [32]. We considered five subtypes in our analysis, namely LumA (n = 569), LumB (n = 219), Basal (n = 187), HER2 (n = 82), and normal-like (n = 39). The accuracy and macro-averaged F1 score are shown in Fig. 6 (as an example with a detection threshold of 0.5 for wrongly labeled data; the results for the detection thresholds of 0.8 and 0.2 are shown in the supplementary materials). In the absence of noise in the training labels, the cleaning method did not yield a significant improvement in performance, in terms of both classification accuracy and macro-averaged F1 score. However, when dealing with scenarios containing noisy training labels (n = 48), the data cleaning method leads to a significant improvement in classification accuracy (\(p<0.05\) for 47 out of 48 scenarios) and macro-averaged F1 score (\(p<0.05\) for 47 out of 48 scenarios).
Similar to the findings from the previous tasks, we observed that using lower detection thresholds (less strict rules to detect wrongly labeled data) resulted in better improvements in classification accuracy and macro-averaged F1 score, especially in highly noisy scenarios where over 40% of training labels were permuted. The largest improvement in test accuracy (as quantified by the absolute increment) was observed when 70% of labels were permuted and the detection threshold was set to 0.2, using the LDA classifier. In this scenario, accuracy increased by 74.6% (from 0.3508 to 0.6128), while the macro-averaged F1 score improved by 89.0% (from 0.2672 to 0.5049). Overall, these results demonstrate the effectiveness of our data-cleaning method in improving the classification performance on the TCGA RNA-seq dataset.

Fig. 2: The model accuracy improvement with training data cleaning in the DILI literature classification task based on the S2V embeddings under different percentages of training data label permutation. The classification accuracy on the validation set (A-C) and on the test set (D-F) with a wrongly labeled data detection threshold of 0.8 (A,D), 0.5 (B,E) and 0.2 (C,F). The mean and 95% confidence intervals are shown. Statistically significant improvement in accuracy has been marked as follows: .: \(p<0.1\), *: \(p<0.05\), **: \(p<0.01\), ***: \(p<0.001\); first row: LR models, second row: LDA models.

Fig. 3: The model accuracy improvement with training data cleaning in the DILI literature classification task based on the W2V embeddings under different percentages of training data label permutation. The classification accuracy on the validation set (A-C) and on the test set (D-F) with a wrongly labeled data detection threshold of 0.8 (A,D), 0.5 (B,E) and 0.2 (C,F). The mean and 95% confidence intervals are shown. Statistically significant improvement in accuracy has been marked as follows: .: \(p<0.1\), *: \(p<0.05\), **: \(p<0.01\), ***: \(p<0.001\); first row: LR models, second row: LDA models.

### _Analyses of the cleaning processes based on inductive conformal prediction_

To show the detailed processes of the training data cleaning method, the numbers of wrongly labeled data and outliers are calculated and visualized in Fig. 7, 9 and 10. Firstly, we used the DILI W2V dataset as an example to showcase how the cleaning process behaves under different detection thresholds for wrongly labeled data when the hyperparameters of the conformal predictor (CPSC) are fixed (Fig. 7, \(\Delta\) = 0.3 and \(T\) = 100). As the percentage of labels permuted increases and the proper training data becomes noisier, the number of wrongly labeled data detected by the method increases, while it becomes harder to detect outliers, indicating that the models are less confident in ascertaining outliers. Additionally, as the detection thresholds are set lower, more wrongly labeled data are detected and more label corrections are made by the training data cleaning method. Meanwhile, as the training data grow noisier, the detection can be biased towards the wrongly labeled positive cases (DILI-related publications). Because the DILI data set is well-curated with high-fidelity labels, we were also able to evaluate the correctness of the corrections made by the training data cleaning method by directly comparing the cleaned training data set with the original noisy training data set.
Here, we visualized the number of ground-truth wrong labels before and after the training data cleaning processes, as well as the total number of corrections made by the data cleaning method, in Fig. 8. When the models are stricter in detecting wrongly labeled data (when the detection threshold is set to 0.8), the models are effective in reducing the number of wrongly labeled data after making the corrections. When the detection thresholds are set lower (0.5 and 0.2), and when the percentage of labels permuted is lower than 0.5, the corrections are effective in reducing the number of wrongly labeled data. However, as the percentage goes above 0.5 and the training data contain more noise than signal, the data cleaning method can lead to over-correction: after the cleaning process, the number of wrongly labeled data can be even higher. Additionally, we show the cleaning process in the COVID-19 patient ICU admission prediction task in Fig. 9. Different from Fig. 7, we visualized the cleaning process after the CPSC has been optimized via hyperparameter tuning, based on the LR and LDA classifiers respectively. Similar to the results on the DILI dataset, a lower detection threshold leads to more wrongly labeled data detected and more corrections, and as the training data become noisier, generally more corrections for the wrongly labeled data are observed. However, in this task, the wrongly labeled positive cases (patients admitted to the ICU) and wrongly labeled negative cases (patients staying in the general ward) are more balanced. The hyperparameter tuning process leads to certain spikes in the number of wrongly labeled data detected at certain noise levels.

Fig. 4: The model performance in AUROC and AUPRC with training data cleaning in the COVID-19 patient ICU admission prediction task under different percentages of training data label permutation. The AUROC (A) and AUPRC (B) on the validation set, and the AUROC (C) and AUPRC (D) on the test set, with a wrongly labeled data detection threshold of 0.8. The mean and 95% confidence intervals are shown. Statistically significant improvement has been marked as follows: .: \(p<0.1\), *: \(p<0.05\), **: \(p<0.01\), ***: \(p<0.001\); first row: LR models, second row: LDA models.

Fig. 5: The model performance in AUROC and AUPRC with training data cleaning in the COVID-19 patient ICU admission prediction task under different percentages of training data label permutation. The AUROC (A) and AUPRC (B) on the validation set, and the AUROC (C) and AUPRC (D) on the test set, with a wrongly labeled data detection threshold of 0.5. The mean and 95% confidence intervals are shown. Statistically significant improvement has been marked as follows: .: \(p<0.1\), *: \(p<0.05\), **: \(p<0.01\), ***: \(p<0.001\); first row: LR models, second row: LDA models.

Fig. 6: The model performance in accuracy and F1 score with training data cleaning in the breast cancer subtype prediction task under different percentages of training data label permutation. The classification accuracy (A) and macro-averaged F1 score (B) on the validation set, and the classification accuracy (C) and macro-averaged F1 score (D) on the test set, with a wrongly labeled data detection threshold of 0.5. The mean and 95% confidence intervals are shown. Statistically significant improvement has been marked as follows: .: \(p<0.1\), *: \(p<0.05\), **: \(p<0.01\), ***: \(p<0.001\); first row: LR models, second row: LDA models.
In addition, the models are not confident in identifying outliers in this task. For the breast cancer subtype prediction task, similar observations are shown in Fig. 10: lower detection thresholds lead to more wrongly labeled data being detected and corrected, and the number of wrongly labeled data detected for each category corresponds to the prevalence of each subtype.

## IV Discussion

To address the challenge of collecting well-curated labels and the difficulty of accurately labeling biomedical data with high fidelity for supervised learning, this study proposes a reliability-based training data cleaning method based on inductive conformal prediction. With a small portion of well-curated training data (the calibration set of the inductive conformal prediction framework), our proposed method leverages the reference distribution of the nonconformity measure on the calibration set to calibrate the conformity of the noisy labels in the large portion of training data with unknown label quality (the proper training set of the inductive conformal prediction framework). By simulating scenarios with different levels of noise in the proper training data through manual permutation of the training labels, we validated the reliability-based training data cleaning method on three biomedical machine learning tasks representing different modalities: the drug-induced-liver-injury literature filtering challenge (a natural language processing task), the COVID-19 patient ICU admission prediction task (a radiomics and electronic health records task) and the breast cancer subtype prediction task (an RNA-seq data mining task). The training data cleaning method showed its effectiveness on the majority of the simulated scenarios on all three tasks, where the classification performance with two basic LDA and LR classifiers was significantly improved after the data cleaning process. The visualization of the cleaning processes also showed that the model was effective in detecting wrongly labeled data based on the reliability quantified by the inductive conformal prediction framework. Our results demonstrate that this method can be used in a broad range of multimodal biomedical classification applications to help improve the classification performance without requiring large quantities of well-curated labeled training data.

Fig. 7: The number of wrong labels and outliers detected under different percentages of training data label permutation in the DILI literature prediction task with W2V embeddings. The number of wrongly labeled data (A-C) and outliers (D-F) under different detection thresholds for wrongly labeled data: 0.8 (A,D), 0.5 (B,E), 0.2 (C,F). The cleaning process visualization is based on W2V embeddings and fixed hyperparameters for the conformal predictor.

The novelty of this study is the proposal of a reliability-based training data cleaning method based on inductive conformal prediction, which enables users to leverage large quantities of noisy labels with the requirement of only a small portion of well-curated training data. Different from traditional semi-supervised learning, which typically requires strong assumptions about the distributions of the unlabeled and labeled data [24], inductive conformal prediction is based on the weaker assumption of independent and identical distribution. Moreover, this method leverages both well-curated labels and noisy labels with uncertainty.
The idea of this work was inspired by the previously proposed semi-supervised, reliability-based training data augmentation work based on conformal prediction [29, 28, 31]. In previous work, researchers first attach pseudo-labels to the unlabeled data and then leverage the prediction reliability of the pseudo-labels, quantified by conformal predictors, to filter these pseudo-labeled data. On the classification tasks, even with domain drifts between the training and test datasets, the reliability-based unlabeled data augmentation framework showed significantly better performance when compared with multiple baseline models: a fully supervised learning benchmark, as well as other semi-supervised learning (i.e., label propagation [42] and label spreading [43]) and data augmentation frameworks [28]. These studies follow the idea of reliability-based unlabeled data augmentation, which inspired us to think in the opposite direction, towards reliability-based training data reduction and trimming: as we can leverage the reliability quantified by the conformal predictors to filter and add unlabeled data to benefit the classification modeling process, we should also be able to leverage the reliability to remove the wrongly labeled data in a noisy training set that may confuse the classification modeling process. The rich information given by the nonconformity measure of the conformal predictors enables us to correct the noisy labels in the training data and detect the wrongly labeled data. After implementing this idea, we found that the reliability-based training data cleaning method works well on diverse multi-modal biomedical data sets and classification tasks.

Fig. 8: The number of ground-truth wrong labels before and after training data cleaning under different percentages of training data label permutation in the DILI literature prediction task with W2V embeddings. The number of wrongly labeled data before/after training data cleaning and the number of corrections made under different detection thresholds for wrongly labeled data: 0.8 (A), 0.5 (B), 0.2 (C). The cleaning process visualization is based on W2V embeddings and fixed hyperparameters for the conformal predictor.

Fig. 9: The number of wrong labels detected under different percentages of training data label permutation in the COVID-19 patient ICU admission prediction task. The number of wrongly labeled data based on LR models (A-C) and LDA models (D-F) under different detection thresholds for wrongly labeled data: 0.8 (A,D), 0.5 (B,E), 0.2 (C,F). The cleaning process visualization is based on optimized hyperparameters for the conformal predictor, tuned on the validation dataset for each classifier and each percentage of labels permuted.

The mechanism of the reliability-based training data cleaning method is worthy of further discussion. The method quantifies how well the combination of a training sample's features with every possible label conforms to the reference distribution of the calibration set. By leveraging and calibrating the nonconformity measure distribution on a small portion of well-curated training data (the calibration set), the method can detect whether a training sample may be wrongly labeled after trying out all possible labels attached to the training sample's features.
As our method strictly follows the framework of inductive conformal prediction, models trained on the cleaned training data tend not to overfit, as the noisy training data and the clean calibration data are used at two separate stages: the noisy proper training data were used to fit the conformal predictor, while the clean calibration set was used for calibration. The bulk size of the proper training data helps the conformal predictor learn a general yet fuzzy mapping from the noisy data to the classes, while the calibration process better cleans the training data by filtering out the labels that are extremely unlikely. Instead of using the same dataset to train the conformal predictor and calibrate the reliability of predictions, as was done in some of the previous studies (the non-inductive conformal prediction framework) [27, 28, 8], the relatively better independence of these two sets in the inductive conformal prediction framework keeps the cleaned training data from biasing towards the small calibration set, by dissociating the two processes. As a result, the cleaned training data (the combination of the cleaned proper training set and the calibration set) leads to better classification performance. Another adaptive feature of the training data cleaning method is the freedom for users to choose the detection threshold for wrongly labeled data, more specifically, how much larger another label's P-value needs to be than the current label's P-value to flag a wrongly labeled data point. We have observed that for less noisy training sets (i.e., the percentage of labels permuted is lower than 50%), a higher detection threshold (e.g., 0.8) is more likely to lead to a significant improvement in classification performance. On the contrary, for highly noisy proper training sets (percentage of labels permuted over 50%), a lower detection threshold (e.g., 0.2) can lead to a larger improvement in classification performance. We hypothesize that once the training set gets noisier, less classification information is conveyed in the proper training dataset. Therefore, the CPSC model can be less confident in identifying wrongly labeled data. Lowering the detection threshold, in this case, enables more wrongly labeled data to be detected, counteracting the low confidence of the CPSC model. Therefore, if future users of our method have knowledge of the noise level in the training dataset (e.g., a rough idea of how many labels may be wrongly labeled in a particular dataset), they can choose a high or low detection threshold to help optimize the performance of the training data cleaning method.

Fig. 10: The number of wrong labels detected under different percentages of training data label permutation in the TCGA breast cancer subtype prediction task. The number of wrongly labeled data based on LR models (A-C) and LDA models (D-F) under different detection thresholds for wrongly labeled data: 0.8 (A,D), 0.5 (B,E), 0.2 (C,F). The cleaning process visualization is based on optimized hyperparameters for the conformal predictor, tuned on the validation dataset for each classifier and each percentage of labels permuted.

Some observations in the results are also worth further discussion. Firstly, on the DILI task, we observed that as the training data grow noisier, the detection can be biased towards the wrongly labeled positive cases (DILI-related publications) (Fig. 7).
We assume this is because, in this task, the negative class is defined by default as publications absent of DILI information. This means that the negative samples can be highly heterogeneous: a diverse range of publications associated with vaccine development, optogenetics, epigenetics, transcriptomics, neurological disorders, etc. can be labeled as DILI-negative papers. The DILI-negative samples may not necessarily form a clearly defined class based on content but are grouped together because of the absence of DILI information. On the contrary, the positive cases (i.e., the DILI-related publications) are relatively more homogeneous. Therefore, the model tends to be more confident in telling whether a DILI-related publication is wrongly labeled as a DILI-irrelevant publication. In contrast, for the COVID-19 dataset and the TCGA RNA-seq dataset, the classes are more clearly defined. Secondly, we observed over-correction on the DILI task in the visualization of the correctness of the cleaning process for the W2V classification task (Fig. 8): when the percentage of labels permuted is over 50%, the cleaning can lead to more wrongly labeled data. However, it should be mentioned that although over-correction was observed, the cleaning process still showed effectiveness in improving classification accuracy. We hypothesize that this may be because this visualization of the cleaning processes is based on fixed hyper-parameters with sub-optimal CPSC models, and because these over-corrections may be less influential in the determination of the decision boundary. Additionally, although the method was able to detect outliers in the DILI classification task (Fig. 7), no outliers were detected in the COVID-19 dataset. We hypothesize that because the COVID-19 dataset is noisier than the well-curated DILI dataset, owing to the SMOTE data augmentation and potential wrong labels in the multi-institute data collection process, the models are less confident in judging outliers in this application. Although this study has proposed an effective training data cleaning method based on inductive conformal prediction, there are limitations. First, conformal prediction is more generally used in classification tasks to quantify prediction reliability. How much the idea of reliability-based training data cleaning can benefit regression tasks remains unknown. The performance of such a training data cleaning framework needs to be developed and validated in regression tasks to further expand its applicability. Secondly, in this study, we only tested one inductive conformal prediction mechanism, namely, conformal prediction based on shrunken centroids (CPSC). We chose this framework because previous work showed better efficacy and efficiency in quantifying reliability when compared with conformal prediction based on k-nearest neighbors (CPKNN), support vector machines (CPSVM), light gradient-boosting machine (CPLGB) and artificial neural networks (CPANN) [29]. Although CPSC tends to be more effective and efficient in the reliability quantification process and was more effective in the reliability-based unlabeled data augmentation process, whether our training data cleaning method works better with other base algorithms remains to be tested. Thirdly, the tasks we tested involve large numbers of samples.
With over 500 samples, the partition into a calibration set and a proper training set leaves neither set too small, which may have enabled us to better leverage the inductive conformal prediction framework. How well the method works on smaller datasets, and whether we would need to use one single clean training set to both train the conformal predictor and perform calibration (essentially abandoning the inductive conformal prediction framework) at the cost of overfitting risks, need to be further investigated.

## V Conclusion

Collecting well-curated training data has posed a challenge for biomedical machine-learning applications. To address this challenge, a reliability-based training data cleaning method based on inductive conformal prediction has been proposed in this study. With a small portion of well-curated training data, our method leverages the reliability quantified by inductive conformal prediction to detect the wrongly labeled data and outliers in the large portion of noisy-labeled training data. The effectiveness of this method is validated on three multi-modal biomedical machine learning classification tasks: detecting drug-induced liver injury literature based on free-text titles and abstracts, predicting ICU admission of COVID-19 patients based on radiomics and electronic health records, and subtyping breast cancer based on RNA-seq data. The method generally leads to significantly improved classification performance under different levels of labeling noise simulated by manual label permutation. The method can be applied to multi-modal biomedical machine learning classification tasks to better use noisy training data without requiring large quantities of well-curated training data.

## VI Code Availability

[https://github.com/xzhan96-stf/icp_train_clean](https://github.com/xzhan96-stf/icp_train_clean)
2309.13347
My Science Tutor (MyST) -- A Large Corpus of Children's Conversational Speech
This article describes the MyST corpus developed as part of the My Science Tutor project -- one of the largest collections of children's conversational speech comprising approximately 400 hours, spanning some 230K utterances across about 10.5K virtual tutor sessions by around 1.3K third, fourth and fifth grade students. 100K of all utterances have been transcribed thus far. The corpus is freely available (https://myst.cemantix.org) for non-commercial use using a creative commons license. It is also available for commercial use (https://boulderlearning.com/resources/myst-corpus/). To date, ten organizations have licensed the corpus for commercial use, and approximately 40 university and other not-for-profit research groups have downloaded the corpus. It is our hope that the corpus can be used to improve automatic speech recognition algorithms, build and evaluate conversational AI agents for education, and together help accelerate development of multimodal applications to improve children's excitement and learning about science, and help them learn remotely.
Sameer S. Pradhan, Ronald A. Cole, Wayne H. Ward
2023-09-23T11:52:36Z
http://arxiv.org/abs/2309.13347v1
# My Science Tutor (MyST) - A Large Corpus of Children's Conversational Speech

###### Abstract

This article describes the MyST corpus developed as part of the My Science Tutor project--one of the largest collections of children's conversational speech, comprising approximately 400 hours, spanning some 230K utterances across about 10.5K virtual tutor sessions by around 1.3K third, fourth and fifth grade students. 100K of all utterances have been transcribed thus far. The corpus is freely available1 for non-commercial use under a creative commons license. It is also available for commercial use2. To date, ten organizations have licensed the corpus for commercial use, and approximately 40 university and other not-for-profit research groups have downloaded the corpus. It is our hope that the corpus can be used to improve automatic speech recognition algorithms, build and evaluate conversational AI agents for education, and together help accelerate development of multimodal applications to improve children's excitement and learning about science, and help them learn remotely.

Footnote 1: [https://myst.cemantix.org](https://myst.cemantix.org)

Footnote 2: [https://boulderlearning.com/resources/myst-corpus/](https://boulderlearning.com/resources/myst-corpus/)

_Sameer S. Pradhan\({}^{1,2}\), Ronald A. Cole\({}^{3}\), Wayne H. Ward\({}^{4}\)_

\({}^{1}\)cemantix.org, Cambridge MA, USA \({}^{2}\)Linguistic Data Consortium, University of Pennsylvania, Philadelphia PA, USA \({}^{3}\)Boulder Learning Inc., Boulder CO, USA \({}^{4}\)University of Colorado at Boulder, CO, USA

[email protected]

automatic speech recognition, educational applications, speech corpus, conversational speech, dialog, virtual tutor

## 1 Introduction

According to the 2009 National Assessment of Educational Progress (NAEP, 2009), only 34 percent of fourth-graders, 30 percent of eighth-graders, and 21 percent of twelfth-graders tested as proficient in science. A more recent assessment, in 2019, reported a statistically significant decrease in the average score for fourth graders in science3 compared with the most recent previous assessment, in 2015. Thus, approximately two thirds of U.S. students are not proficient in science4.

Footnote 3: [https://www.nationsreportcard.gov/](https://www.nationsreportcard.gov/)

Footnote 4: This does not consider the significant impact that the educational system experienced owing to the Covid-19 pandemic.

This article describes a resource that was the result of a 13-year project conducted between 2007 and 2019. The project investigated improvements in students' learning proficiency in elementary school science using a conversational multimedia virtual tutor, Marni. The operating principles for the tutor are grounded in research from education and cognitive science, where it has been shown that eliciting self-explanations plays an important role [1, 2, 3, 4, 5]. Speech, language and character animation technologies play a central role because the focus of the system is on engagement and spoken explanations by students during spoken dialog with the virtual tutor. A series of studies conducted during this project demonstrated that students who interacted with the virtual tutor achieved substantial learning gains, equivalent to students who interacted with experienced human tutors, with moderate effect sizes [6, 7]. Surveys of participating teachers indicate that it is feasible to incorporate the intervention into their curriculum.
Surveys given to students indicated that over 70% of students tutored by Marni were more excited about studying science in the future.

## 2 The MyST Corpus

The MyST children's conversational speech corpus consists of spoken dialog between 3\({}^{rd}\), 4\({}^{th}\) and 5\({}^{th}\) grade students and a virtual tutor in 8 areas of science. It consists of 393 hours of speech collected across 1,371 students. The collection comprises a total of 228,874 utterances across 10,496 sessions.

### Data Collection

As part of the study, students engaged in spoken dialog with a virtual science tutor--a lifelike computer character that produced accurate lip and tongue movements synchronized with speech produced by a voice talent. Analyses of the spoken dialog sessions indicated that, during a dialog of about 15 minutes, tutors and students produced about the same amount of speech, around 5 minutes each. This approach was used to develop over 100 tutorial dialog sessions of about 15 minutes each. The MyST corpus was collected in two stages--Phase I and Phase II. In both phases, the scientific content covered is aligned to the classroom science content of Full Option Science System (FOSS) modules, which typically last 8 weeks during the school year. FOSS is used by over 1 million children in over 100,000 classrooms in all 50 states in the U.S. FOSS modules are centered on science investigations. There are typically 4 investigations in a module (e.g., in the Magnetism and Electricity module, the 4 investigations are Magnetism, Serial Circuits, Parallel Circuits, and Electromagnetism). Each investigation has 3 to 4 classroom "investigation parts" where groups of students work together to, for example, build a serial circuit to make a motor run, and record their observations in science notebooks. Shortly after conducting an "investigation part", students interact one-on-one with a virtual tutor for 15-20 minutes. The tutor asks the student questions about science presented in illustrations, animations or interactive simulations, with follow-up questions designed to stimulate reasoning and help students construct accurate explanations. The system uses _strict turn-taking_; the tutor presents information, asks a question and waits for the student to respond. Students wear headsets with close-talking noise-canceling microphones. To respond, the student presses the spacebar on the laptop, holds it down while speaking, and releases it when done. Each student turn is recorded as a separate audio file. When transcribed, an utterance-level transcript file is created for each audio file. No identifying information is stored with the data, only anonymized codes for schools and students. All students and their parents signed consent forms allowing us to distribute their anonymous speech data.

### Transcription

Roughly 45% of all utterances have been transcribed at the word level. Phase I of the project used rich (slow, expensive) transcription guidelines5--the ones typically used by speech recognition researchers. However, for the purposes of this project, that level of detail was not required in the transcriptions, and during Phase II, a reduced (quick, cheaper) version of those guidelines6 was used, allowing transcription of more data.
Footnote 5: [https://cemantix.org/myst/phase-i-guidelines/](https://cemantix.org/myst/phase-i-guidelines/) Footnote 6: [https://cemantix.org/myst/phase-ii-guidelines/](https://cemantix.org/myst/phase-ii-guidelines/) ### Data Composition Some characteristics of the data collected in the two phases are described below. Phase I comprised sessions from students in grades 3-5 across four science modules. All the sessions from this phase have been transcribed using rich transcription guidelines. Phase II comprised sessions from students in grades 4-5. It included five modules, with an average of 10 parts each. Table 1 lists the modules included in each phase. Table 2 lists the size of the corpus based on a few different parameters. \begin{table} \begin{tabular}{c c c} \hline \hline **Phase** & **Module** & **Description** \\ \hline I & **MS** & Mixtures and Solutions \\ & **ME** & Magnetism and Electricity \\ & **VB** & Variables \\ & **WA** & Water \\ \hline II & **EE** & Energy and Electromagnetism \\ & **LS** & Living Systems \\ & **MX** & Mixtures \\ & **SRL** & Soil, Rocks and Landforms \\ & **SMP** & Sun, Moon and Planets \\ \hline \hline \end{tabular} \end{table} Table 1: Science modules covered in Phase I and II \begin{table} \begin{tabular}{l c c} \hline \hline **Description** & \multicolumn{2}{c}{**Phase**} \\ \cline{2-3} & **I** & **II** \\ \cline{2-3} & **Count (Hours)** & **Count (Hours)** \\ \hline Number of Students & 421 & 950 \\ Number of Sessions & 1,509 (102) & 8,987 (291) \\ Transcribed Sessions & 1,509 (102) & 1,426 ( 95) \\ Untranscribed Sessions & 0 ( 0) & 3,711 (196) \\ \hline \hline \end{tabular} \end{table} Table 2: Size of corpus based on a few different criteria. ### Corpus Structure The directory structure for the corpus is as shown in Figure 1 below. Figure 1: The MyST Corpus Structure. Variables are enclosed in angle-brackets (<variables>) and can take values as described immediately after. <partition> is one of "train", "development" or "test". <student_id> is a 6-digit ID with the first 3 digits representing the school code and the next 3 digits the student number. <session_id> is the ID for a particular session and is further represented as <corpus>_<student_id>_<date>_<time>_<module>_<investigation>.<part>. <date> is represented as <YYYY><MM><DD>. <time> is represented as <hh><mm><ss>, where <hh>, <mm> and <ss> represent the two-digit hour, minutes and seconds respectively7. <module> is a two- or three-character string enumerated in Table 1 earlier. <investigation> is a decimal number representing the respective investigation for a module. <part> is the utterance ID within a session; numbers 001 onward represent the index of each utterance in a session8. <file-extension> is one of .flac or .trn: .flac is the compressed audio file and .trn is the transcription of the corresponding audio file. Footnote 8: 000 is reserved to represent the entire session.
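For illustration, a file name following the scheme above can be decomposed with a short script. This is a sketch of ours, not a tool shipped with the corpus; the separator layout is our reconstruction of the pattern described above, and the example name is hypothetical.

```python
import re

# Sketch: decompose a MyST utterance file name of the (reconstructed) form
# <corpus>_<student_id>_<date>_<time>_<module>_<investigation>.<part>.<ext>
PATTERN = re.compile(
    r"(?P<corpus>[A-Za-z]+)"
    r"_(?P<school>\d{3})(?P<student>\d{3})"   # 6-digit student ID
    r"_(?P<date>\d{8})"                       # <YYYY><MM><DD>
    r"_(?P<time>\d{6})"                       # <hh><mm><ss>
    r"_(?P<module>[A-Z]{2,3})"                # e.g. MS, EE, SRL (Table 1)
    r"_(?P<investigation>[\d.]+)"
    r"\.(?P<part>\d{3})\.(?P<ext>flac|trn)$")

name = "myst_002030_20100128_093654_MS_2.1.003.flac"   # hypothetical example
m = PATTERN.match(name)
if m:
    print(m.group("module"), m.group("part"))          # -> MS 003
```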
## 3 Data cleanup and pre-processing We did a pass over the corpus to clean up various types of errors that could be identified using statistics on the underlying audio and potentially erroneous data collection. ### Data Provenance #### 3.1.1 Consent The University of Colorado's Institutional Review Board reviewed and approved all components of the My Science Tutor project to assure student privacy. Consent forms covering all utterances in the corpus were signed by a student's parent or guardian, and assent forms by the student. The review board approved the Parental Consent forms and the Student Assent forms. The final Parental Consent and Student Assent forms approved by the IRB explicitly provide permission for anonymous student speech data and transcriptions to be distributed for both research and commercial use. We manually verified that we had parental consent and student assent for every student in the corpus. ### Session Quality Bad (empty or corrupted) sessions were removed using simple heuristics and based on missing data. #### 3.2.1 Session Length Sessions shorter than a certain minimal threshold (\(<\) 10 minutes), or longer than a certain maximum threshold (\(>\) 1 hour), were inspected and corrected or removed. #### 3.2.2 Missing audio files Sessions that were missing audio files for a significant number of utterances were deleted. ### Audio Quality All utterances were processed to identify unacceptable recordings, which were removed from the database. We performed the following checks for audio quality. #### 3.3.1 Clipping Rate If a significant number of frames (exceeding a certain threshold) were clipped, we removed or marked the audio file. If clipping impacted more than a certain fraction of utterances in a session, we also removed the session from the release. If only a small number of files had a large fraction of clipping, we tagged them in a report file, so that users can determine whether to include or exclude that data from their study. #### 3.3.2 Silence Sometimes there are significant amounts of leading and trailing silence in the audio files. We trimmed all such silence except for a small fraction at the beginning and end of the utterance. We did not, however, remove or compress silence that occurred within an utterance. #### 3.3.3 Background Noise Utterances with a significant amount of noise or cross talk were removed. This was only possible for the cases that were transcribed or fell in the fraction of sample utterances that we manually verified. ### Transcription Quality We fixed obvious spelling errors in the transcriptions. We tried to retain explicitly mispronounced words as much as possible. ### Updated Pronunciation Dictionary We also make available an updated pronunciation dictionary. We used CMU's pronunciation dictionary as a starting point and added words that were novel to this corpus. The updated pronunciation dictionary is part of the corpus release.
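To make the clipping and silence checks concrete, here is a minimal sketch of ours operating on 16-bit PCM samples. The project's actual cut-offs are not published, so all thresholds below are illustrative assumptions.

```python
import numpy as np

def clipping_fraction(samples: np.ndarray, full_scale: int = 32767) -> float:
    """Fraction of samples at (or beyond) full scale in 16-bit audio.
    Files whose fraction exceeds some threshold would be flagged or removed
    (Sec. 3.3.1); the threshold itself is an assumption here."""
    return float(np.mean(np.abs(samples) >= full_scale))

def trim_silence(samples: np.ndarray, rate: int,
                 amp_thresh: int = 500, pad_s: float = 0.25) -> np.ndarray:
    """Trim leading/trailing low-amplitude audio, keeping a small pad at each
    end and leaving internal silence untouched (Sec. 3.3.2). The amplitude
    threshold and pad length are illustrative values."""
    voiced = np.flatnonzero(np.abs(samples) > amp_thresh)
    if voiced.size == 0:
        return samples
    pad = int(pad_s * rate)
    start = max(int(voiced[0]) - pad, 0)
    end = min(int(voiced[-1]) + pad + 1, len(samples))
    return samples[start:end]
```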
## 4 Evaluation For the convenience of the ASR community, we partitioned and structured the corpus upfront into training, development and test sets as three separate directories in the corpus release. ### Evaluation Partitions These partitions were generated using a stratified sampling strategy, ensuring that they reasonably represent each science module in MyST, proportionately represent each phase, and that each student is present in only one of the three partitions. We also included untranscribed data in all partitions in order to allow limited semi-supervised training data augmentation using the untranscribed portions of the data, with the additional advantage of pseudo-unseen data--in the form of transcriptions that are as yet absent. ### Experimental Setup We used the SpeechBrain [9] speech toolkit for our experiments. More specifically, we used an end-to-end transformer model. We fine-tuned a model trained on LibriSpeech using the MyST training set. Owing to memory limitations, we were only able to use utterances shorter than 30 seconds during training. ### Word Error Rate We use the traditional evaluation metric of word error rate (WER) to report ASR performance. In spite of several checks during the preparation of the corpus release, we were informed of a few new transcription errors. We corrected the errors in the test set. For this work, we removed most of the suspicious cases from the training set and retrained the model using the filtered set of utterances. Given the number of utterances in the training data and the very small fraction of filtered instances, we did not see any noticeable difference in the performance on the test set. We plan to make another quality control pass through the corpus to correct residual errors in the development and training sets and release an updated version of the corpus in the near future. We will also address other issues that arise as the corpus is used by the larger research community. Table 4 shows the word error rate on the uncorrected and corrected versions of the test set. ### Replicability It is important that the ASR community report consistent and comparable WER on the MyST corpus to enable fair comparison across improved architectures. To facilitate that, we make details of the evaluation setup and the exact configuration available at the git repository9 of the corpus. This mechanism should enable such consistent, replicable evaluations. Footnote 9: [https://myst.cemantix.org](https://myst.cemantix.org) ## 5 Related Work Over the years researchers have created several speech corpora for the analysis of children's speech. Below are a few that are typically used for ASR evaluation. A more thorough empirical evaluation of various end-to-end ASR systems specifically focused on children's speech can be found in [10]. * CID children's speech corpus (American English, read speech, 436 children aged between 5 and 17 years) [11] * CMU Kid's speech corpus (American English, read speech, 76 children, aged between 6 and 11 years) [12] * CU Kid's Prompted and Read Speech corpus (American English, read speech, 663 children, aged between 4 and 11 years) [13], * CU Kid's Read and Summarized Story corpus (American English, spontaneous speech, 326 children, aged between 6 and 11 years) [14], * OGI Kid's speech corpus (English, read speech, 1100 children, aged between 5 and 15 years) [15]. * BIRMINGHAM corpus (British English, 159 children, aged between 4 and 14 years, part of the PF-STAR corpus) [16] \begin{table} \begin{tabular}{l c c} \hline \hline & \multicolumn{2}{c}{**MyST Test Set**} \\ & Un-Corrected (\%) & Corrected (\%) \\ \hline **WER** & 11.6 & 10.0 \\ \hline Insertions & 3.4 & 2.9 \\ Substitutions & 5.5 & 5.1 \\ Deletions & 2.2 & 3.2 \\ \hline \hline \end{tabular} \end{table} Table 4: Word Error Rate on the MyST test set for the ASR model trained using only the training and development sets.
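For reference, the WER reported in Table 4 follows the standard definition: (substitutions + deletions + insertions) divided by the number of reference words. The minimal edit-distance sketch below is ours; it is not the exact scoring script (the repository above holds the actual configuration).

```python
def wer(ref: str, hyp: str) -> float:
    """Word error rate = (S + D + I) / len(ref), via Levenshtein distance
    over words. A minimal sketch of the standard metric."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(h) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)

print(wer("the cat sat", "the cat sat down"))   # -> 0.333...
```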
\begin{table} \begin{tabular}{l l r r r r} \hline \hline \multirow{2}{*}{**Phase**} & \multirow{2}{*}{**Science**} & \multicolumn{3}{c}{**Experiment Partition**} & \multirow{2}{*}{**Overall**} \\ \cline{3-5} & & **Train** & **Dev.** & **Test** & \\ & & (Hrs.) & (Hrs.) & (Hrs.) & (Hrs.) \\ \hline I & MS & 31 & 5 & 5 & 41 \\ & ME & 30 & 4 & 4 & 38 \\ & VB & 14 & 2 & 2 & 18 \\ & WA & 4 & 1 & 1 & 6 \\ \hline II & EE & 114 & 16 & 14 & 144 \\ & LS & 75 & 4 & 4 & 83 \\ & MX & 29 & 5 & 7 & 41 \\ & SRL & 16 & 2 & 1 & 19 \\ & SMP & 2 & 1 & 1 & 4 \\ \hline & Overall & 315 & 40 & 39 & 393 \\ \hline \hline \end{tabular} \end{table} Table 3: Experimental partitions for the MyST corpus ## 6 Conclusion Improvements in automatic transcription of children's speech-especially spontaneous conversations-can open doors to transformational applications in various areas. Improved applications for use in education and in clinical diagnosis have the potential of making a significant global impact. In spite of the exponentially large collection of data at our fingertips, it is difficult to get access to a reasonably large collection of specific kinds of data, such as children's speech, that are required by data-hungry end-to-end machine learning algorithms. One of the larger models reporting performance on children's speech [17] used roughly 20K training utterances; however, the data underlying that study is not generally available, and general availability is a necessary requirement for open, replicable research. Our hope is that the large MyST corpus of children's conversational speech will allow researchers to improve upon consistent evaluation benchmarks.
2309.15736
A Bernstein theorem of ancient solutions to mean curvature flow
We prove a Bernstein theorem for ancient solutions to the mean curvature flow.
Xiangzhi Cao
2023-09-27T15:48:34Z
http://arxiv.org/abs/2309.15736v1
# A Bernstein theorem of ancient solutions to mean curvature flow ###### Abstract We prove a Bernstein theorem for ancient solutions to the mean curvature flow. _Keywords and phrases_: Bernstein theorem, Ancient solution, Gauss map _MSC 2010_: 53C24, 53E10 ## 1 Introduction Let \(F_{0}:M^{n}\rightarrow\mathbb{R}^{n+m}\) be an isometric immersion from an \(n\)-dimensional oriented Riemannian manifold \(M^{n}\) to the Euclidean space \(\mathbb{R}^{n+m},n\geq 2,m\geq 1\). The mean curvature flow (MCF) is a one-parameter family of smooth immersions \(F:\,M^{n}\times[-T,0]\rightarrow\mathbb{R}^{n+m},T>0\), which satisfies the following evolution equation: \[\left\{\begin{array}{l}\frac{\partial}{\partial t}F(x,t)=H(x,t),\quad x\in M^{n},t\in[-T,0],\\ F(\cdot,0)=F_{0},\end{array}\right.\] where \(H(x,t)\) is the mean curvature vector of \(F\left(M^{n},t\right)\subset\mathbb{R}^{n+m}\). It is well known that self-shrinkers model type I singularities and translating solitons model type II singularities of the mean curvature flow. Bao and Shi [1] used the function \(\omega_{1}\) to obtain a Bernstein result for codimension one complete translating solitons. Their result was generalized by Kunikawa, who proved the following theorem. **Theorem 1** (Kunikawa [3, Theorem 1.2], 2015).: _Let \(F:M^{n}\times(-\infty,0]\to\mathbb{R}^{n+1}\) be an ancient solution to the mean curvature flow. If there exist a positive constant \(c\) and a nonnegative constant \(C_{H}\) such that \(\omega(p,t)\geq c\) and \(|\vec{H}(p,t)|\leq C_{H}\) for any point in \(\mathcal{M}_{\infty}\), then \(M_{t}\) must be a hyperplane for any \(t\in(-\infty,0]\)._ In order to deal with higher codimension submanifolds, we need to define the \(\omega\)-function. Let \(\{e_{i}\}\) be a positively oriented orthonormal frame field along \(M^{n}\). For a fixed unit \(n\)-plane \(a_{1}\wedge\cdots\wedge a_{n}\) in \(\mathbb{R}^{n+m}\), we define the \(\omega\)-function on \(M^{n}\) by \[\omega=\left\langle e_{1}\wedge\cdots\wedge e_{n},a_{1}\wedge\cdots\wedge a_{n}\right\rangle=\det\left(\left\langle e_{i},a_{j}\right\rangle\right).\] If \(\omega\) is positive, we define the \(v\)-function by \(v=\frac{1}{\omega}\). One can also refer to [6, Section 2] or [9] and references therein for details on the Grassmannian manifold and on the functions \(\omega\) and \(v\). For higher codimension translating solitons, Xin proved the following. **Theorem 2** (cf. [9], 2015).: _Let \(M^{n}\subset\mathbb{R}^{n+m}\) be an \(n\)-dimensional complete translating soliton with \(m\geq 2\). If \(\omega\geq\omega_{0}\) for some constant \(\omega_{0}>\frac{1+3^{\frac{2}{3}}}{2\cdot 3^{\frac{2}{3}}}\), then \(M\) must be an affine subspace._ Guan, Xu and Zhao proved the following. **Theorem 3** (cf. [2, Theorem 1.7]).: _Let \(F:M^{n}\times(-\infty,0]\to\mathbb{R}^{n+m}\) be an ancient solution to the mean curvature flow. If \(\omega\geq\omega_{0}\) for a constant \(\omega_{0}>\frac{1}{\sqrt{2}}\) and \(|H|\leq C\) for a nonnegative constant \(C\) at any point in \(\mathcal{M}_{\infty}\), then \(M_{t}\) must be an affine subspace._ Recently, using the Gauss map, Qiu [6] generalized [9] and [2, Corollary 1.8] and proved the following. **Theorem 4** (cf. [6, Theorem 1]).: _Let \(M^{n}\) be a complete \(n\)-dimensional translating soliton in \(\mathbb{R}^{m+n}\) with codimension \(m\geq 2\) and a positive \(\omega\)-function. Put \(v_{0}:=\frac{2\cdot 4^{\frac{2}{3}}}{1+4^{\frac{2}{3}}}\). If for some constant \(v_{1}<v_{0}\) the \(v\)-function satisfies \(v\leq v_{1}<v_{0}\), then \(M^{n}\) is affine linear._
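For concreteness, the constants above can be evaluated numerically; the following is a quick consistency check of ours (not taken from [6] or [2]), which also explains the bound \(h_{1}<4\) appearing in the proof of Theorem 5: \[4^{\frac{2}{3}}\approx 2.5198,\qquad v_{0}=\frac{2\cdot 4^{\frac{2}{3}}}{1+4^{\frac{2}{3}}}\approx 1.4318,\qquad \omega_{0}=\frac{1}{v_{0}}\approx 0.6984,\qquad h_{0}=\Big(\frac{v_{0}}{2-v_{0}}\Big)^{\frac{3}{2}}=\big(4^{\frac{2}{3}}\big)^{\frac{3}{2}}=4.\]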
**Remark 1**.: Obviously, the bigger the constant \(v_{0}\) is, the better the result. In other words, the smaller the constant \(\omega_{0}\) is, the weaker the condition. The following paragraph on notation is adapted from [2, Theorem 1.6]. Without loss of generality, we assume that the origin \(o\in\mathbb{R}^{n+m}\) lies in \(M^{n}\). Let \(\bar{B}_{R}^{n+m}\) be a Euclidean closed ball of radius \(R\) centered at \(o\), and let \(B_{R,T}(o)=\bar{B}_{R}^{n+m}\times[-T,0]\subset\mathbb{R}^{n+m}\times(-\infty,+\infty)\) be a cylindrical domain in space-time. Consider \(\mathcal{M}_{T}\) as the space-time domain \[\{(F(p,t),t)\mid p\in M,t\in[-T,0]\}\subset\mathbb{R}^{n+m}\times(-\infty,+\infty).\] Finally, we define the space-time domain \(D_{R,T}(o)=\mathcal{M}_{T}\cap B_{R,T}(o)\). \(D_{R,T}(o)\) is compact since \(M_{t}\) can be written as a complete graph for each \(t\). Inspired by the above results (especially [6] and [2]), we generalize [2, Theorem 1.6]. **Theorem 5**.: _Let \(F:M^{n}\times[-T,0]\rightarrow\mathbb{R}^{m+n}\) be a solution to the mean curvature flow. Assume that there exist a positive constant \(C_{B}\) and a nonnegative constant \(C_{H}\) such that the norm of the second fundamental form satisfies \(|B(p,t)|\leq C_{B}|H(p,t)|\) and \(|H(p,t)|\leq C_{H}\) for any point in \(\mathcal{M}_{T}\). Put \(v_{0}=\frac{2\cdot 4^{\frac{2}{3}}}{1+4^{\frac{2}{3}}}\), and assume that for some constant \(v_{1}<v_{0}\) the \(v\)-function satisfies \(v\leq v_{1}<v_{0}\). Define the function \(h=(\frac{v}{2-v})^{\frac{3}{2}}\). Then there exists a constant \(C\), independent of \(R\) and \(T\), such that_ \[\sup_{D_{R/2,T/2}(o)}\frac{|H|}{h_{2}-h\circ\gamma}\leq C\left(\frac{1}{R}+\frac{1}{\sqrt{R}}+\frac{1}{\sqrt{T}}\right),\] _where \(h_{2}\) is a constant with \(h_{1}:=(\frac{v_{1}}{2-v_{1}})^{\frac{3}{2}}<h_{2}<4\) (chosen in the proof), and \(\gamma:M^{n}\to G_{n,m}\) is the Gauss map of \(M\)._ **Remark 2**.: We replaced the condition \(\omega(p,t)\geq c\) of [3] by a condition on the \(v\)-function. Our result is better than [2, Theorem 1.6], since our \(\omega_{0}=\frac{1}{v_{0}}\approx 0.698\), while in [2, Theorem 1.6] their \(\omega_{0}\approx 0.7071\); in [9], \(\omega_{0}\approx 0.74\). **Remark 3**.: It is hard to obtain the result if we remove the condition \(|B(p,t)|\leq C_{B}|H(p,t)|\). Otherwise, one would have to improve the corresponding inequality for \(|B|^{2}\), namely \(\partial_{t}|B|^{2}-\Delta|B|^{2}\geq 2|\nabla B|^{2}-3|B|^{2}\), and use the test function \[\frac{|B|^{2}}{(h_{2}-h\circ\gamma)^{2}}.\] We have tried this, but failed. As a consequence of Theorem 5, we obtain our main result. **Theorem 6**.: _Let \(F:M^{n}\times(-\infty,0]\rightarrow\mathbb{R}^{m+n}\) be an ancient solution to the mean curvature flow. Suppose there exist a positive constant \(C_{B}\) and a nonnegative constant \(C_{H}\) such that the norm of the second fundamental form satisfies \(|B(p,t)|\leq C_{B}|H(p,t)|\) and \(|H(p,t)|\leq C_{H}\) for any point \((p,t)\) in \(\mathcal{M}_{\infty}\). Put \(v_{0}=\frac{2\cdot 4^{\frac{2}{3}}}{1+4^{\frac{2}{3}}}\). If for some constant \(v_{1}<v_{0}\) the \(v\)-function satisfies \(v\leq v_{1}<v_{0}\), then \(M_{t}\) must be a hyperplane for any \(t\in(-\infty,0]\)._ **Remark 4**.: We can see that [6, Theorem 1] follows as a corollary of our result. ## 2 Preliminaries We give some lemmas used in the proof.
**Lemma 1** (cf. [3] or [4]).: _There exists a smooth function \(\eta(r,t):\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{R}\) supported on \([-R,R]\times[-T,0]\) which has the following properties:_ _(1) \(\eta(r,t)\equiv 1\) on \([-R/2,R/2]\times[-T/2,0]\) and \(0\leq\eta\leq 1\)._ _(2) \(\eta(r,t)\) is decreasing if \(r\geq 0\), i.e., \(\partial_{r}\eta\leq 0\)._ _(3) \(\left|\partial_{r}\eta\right|/\eta^{a}\leq C_{a}/R,\left|\partial_{r}^{2}\eta\right|/\eta^{a}\leq C_{a}/R^{2}\) when \(0<a<1\)._ _(4) \(\left|\partial_{t}\eta\right|/\eta^{a}\leq C_{a}/T\) when \(0<a<1\)._ **Theorem 7** (cf. Wang [7]).: _The Gauss maps of a mean curvature flow \(\gamma:(\Sigma,g_{t})\to G_{n,m}\) form a harmonic map heat flow, i.e._ \[\frac{d}{dt}\gamma=\operatorname{tr}\nabla d\gamma,\] _where \(d\gamma\) is considered as a section of \(T^{*}\Sigma_{t}\otimes\gamma^{-1}TG(n,m)\) and the trace is taken with respect to the induced metric \(g_{t}\)._ **Lemma 2** (cf. [8] or [5]).: \[\Delta|H|^{2}-\partial_{t}|H|^{2}\geq 2|\nabla H|^{2}-2|H|^{2}|B|^{2}\] ## 3 The proof of Theorem 5 _Proof of Theorem 5._ We use the auxiliary function from [6]; however, we follow the lines of [3]. As in [6], let \(h_{0}=(\frac{v_{0}}{2-v_{0}})^{\frac{3}{2}}\) and \(h_{1}=(\frac{v_{1}}{2-v_{1}})^{\frac{3}{2}}\) be two constants. We choose a constant \(h_{2}\) such that \(h_{1}<h_{2}<4\); thus \(1\leq h\leq h_{1}<h_{2}<4\). We need to compute \[\left(\frac{\partial}{\partial t}-\Delta\right)\frac{|H|^{2}}{(h_{2}-h\circ\gamma)^{2}}.\] Let \(\phi=\frac{|H|^{2}}{(h_{2}-h\circ\gamma)^{2}}\). A direct calculation shows that \[\nabla\phi=\frac{\nabla|H|^{2}}{(h_{2}-h\circ\gamma)^{2}}+\frac{2|H|^{2}\nabla h\circ\gamma}{(h_{2}-h\circ\gamma)^{3}}.\] Similarly we can compute \[\Delta\phi=\frac{\Delta|H|^{2}}{(h_{2}-h\circ\gamma)^{2}}+\frac{4\left\langle\nabla h\circ\gamma,\nabla|H|^{2}\right\rangle}{(h_{2}-h\circ\gamma)^{3}}+\frac{2|H|^{2}\Delta h\circ\gamma}{(h_{2}-h\circ\gamma)^{3}}+\frac{6|\nabla h\circ\gamma|^{2}|H|^{2}}{(h_{2}-h\circ\gamma)^{4}}.\] Moreover, \[\Delta h\circ\gamma=Hess(h)(d\gamma(e_{i}),d\gamma(e_{i}))+dh(\tau(\gamma))\geq 3h|B|^{2}+dh(\tau(\gamma)),\] and for a higher codimension mean curvature flow we have (cf. [8])
\[\Delta|H|^{2}-\partial_{t}|H|^{2}\geq 2|\nabla H|^{2}-2|H|^{2}|B|^{2}.\] So, we get \[\Delta\phi\geq \frac{2|\nabla H|^{2}+\partial_{t}|H|^{2}-2|H|^{2}|B|^{2}}{(h_{2}-h\circ\gamma)^{2}}+\frac{4\left\langle\nabla h\circ\gamma,\nabla|H|^{2}\right\rangle}{(h_{2}-h\circ\gamma)^{3}}\] \[+\frac{2|H|^{2}(3h|B|^{2}+dh(\tau(\gamma)))}{(h_{2}-h\circ\gamma)^{3}}+\frac{6|\nabla h\circ\gamma|^{2}|H|^{2}}{(h_{2}-h\circ\gamma)^{4}}.\] On the other hand, the time derivative of \(\phi\) is given by \[\partial_{t}\phi=\frac{\partial_{t}|H|^{2}}{(h_{2}-h\circ\gamma)^{2}}+\frac{2|H|^{2}dh(\partial_{t}\gamma)}{(h_{2}-h\circ\gamma)^{3}}.\] Since \(dh(\partial_{t}\gamma)=dh(\tau(\gamma))\) by Theorem 7, we continue the calculation as \[\Delta\phi\geq \frac{2|\nabla H|^{2}-2|H|^{2}|B|^{2}}{(h_{2}-h\circ\gamma)^{2}}+\frac{4\left\langle\nabla h\circ\gamma,\nabla|H|^{2}\right\rangle}{(h_{2}-h\circ\gamma)^{3}}\] \[+\frac{6h|H|^{2}|B|^{2}}{(h_{2}-h\circ\gamma)^{3}}+\frac{6|\nabla h\circ\gamma|^{2}|H|^{2}}{(h_{2}-h\circ\gamma)^{4}}+\partial_{t}\phi.\] Note that the following relations hold: \[\frac{2|\nabla H|^{2}}{(h_{2}-h\circ\gamma)^{2}}+\frac{2|\nabla h\circ\gamma|^{2}|H|^{2}}{(h_{2}-h\circ\gamma)^{4}}\geq\frac{4|\nabla H||\nabla h\circ\gamma||H|}{(h_{2}-h\circ\gamma)^{3}},\] \[\frac{2\left\langle\nabla|H|^{2},\nabla h\circ\gamma\right\rangle}{(h_{2}-h\circ\gamma)^{3}}+\frac{4|H|^{2}|\nabla h\circ\gamma|^{2}}{(h_{2}-h\circ\gamma)^{4}}=\frac{2\langle\nabla h\circ\gamma,\nabla\phi\rangle}{(h_{2}-h\circ\gamma)}.\] Hence, we get \[\Delta\phi-\partial_{t}\phi\geq(8-2h_{2})\frac{|B|^{2}|H|^{2}}{(h_{2}-h\circ\gamma)^{3}}+\frac{2\langle\nabla h\circ\gamma,\nabla\phi\rangle}{(h_{2}-h\circ\gamma)}.\] Let \(\psi(F(p,t)):=\eta(r(F),t)\) and let \(L:=-2\nabla h\circ\gamma/(h_{2}-h\circ\gamma)\). We can calculate \[\Delta(\psi\phi)+\left\langle L,\nabla(\psi\phi)\right\rangle-2\left\langle\frac{\nabla\psi}{\psi},\nabla(\psi\phi)\right\rangle-\partial_{t}(\psi\phi)\] \[= \psi\left(\Delta\phi-\partial_{t}\phi\right)+\phi\left(\Delta\psi-\partial_{t}\psi\right)+\left\langle\psi L,\nabla\phi\right\rangle+\left\langle\phi L,\nabla\psi\right\rangle-2\frac{|\nabla\psi|^{2}}{\psi}\phi\] \[\geq 2\psi\frac{(8-2h_{2})|B|^{2}|H|^{2}}{(h_{2}-h\circ\gamma)^{3}}+\phi\left(\Delta\psi-\partial_{t}\psi\right)+2\frac{\langle\nabla h\circ\gamma,\nabla\psi\rangle}{h_{2}-h\circ\gamma}\phi-2\frac{|\nabla\psi|^{2}}{\psi}\phi.\] Note that \(D_{R,T}(o)\) is compact, since any time slice \(M_{t}\) can be written as an entire graph. Hence \(\psi\phi\) attains its maximum at some point \(F\left(p_{1},t_{1}\right)\) in \(D_{R,T}(o)\). At this point, we have \[\nabla(\psi\phi)=0,\quad\Delta(\psi\phi)\leq 0,\quad\partial_{t}(\psi\phi)\geq 0.\] Hence, we obtain \[2\psi(8-2h_{2})\frac{|B|^{2}|H|^{2}}{(h_{2}-h\circ\gamma)^{3}}\leq 2\phi\frac{\langle\nabla h\circ\gamma,\nabla\psi\rangle}{h_{2}-h\circ\gamma}+2\phi\frac{|\nabla\psi|^{2}}{\psi}+\phi\left(\partial_{t}\psi-\Delta\psi\right)=I+II+III.\] Note that the following holds: \[|\nabla\psi|^{2}=|\partial_{r}\eta|^{2}\left|\nabla r\right|^{2}\leq n\left|\partial_{r}\eta\right|^{2}.\] By [3], we know that \[|\nabla h\circ\gamma|\leq C_{1}|B|,\] where \(C_{1}=(\frac{v_{1}}{2-v_{1}})^{\frac{5}{2}}\).
By using Young's inequality and the properties of \(\eta\), we can estimate \(I\) as follows: \[I \leq 2\phi\frac{|\nabla h\circ\gamma|}{h_{2}-h\circ\gamma}|\nabla\psi|\] \[\leq 2\phi\frac{C_{1}|B|}{h_{2}-h\circ\gamma}|\nabla\psi|\] \[\leq\frac{\varepsilon}{4}\psi\frac{|H|^{\frac{8}{3}}|B|^{\frac{4}{3}}}{(h_{2}-h\circ\gamma)^{4}}+\frac{C(\varepsilon)|\nabla\psi|^{4}}{\psi^{3}}\] \[\leq\frac{\varepsilon}{4}\psi\frac{|H|^{\frac{8}{3}}|B|^{\frac{4}{3}}}{(h_{2}-h\circ\gamma)^{4}}+\frac{n^{2}C(\varepsilon)\left|\partial_{r}\eta\right|^{4}}{\psi^{3}}\] \[\leq\frac{\varepsilon}{4}\psi\frac{|H|^{\frac{8}{3}}|B|^{\frac{4}{3}}}{(h_{2}-h\circ\gamma)^{4}}+\frac{C(\varepsilon,n)}{R^{4}},\] where \(\varepsilon>0\) is an arbitrary constant, and \(C(\varepsilon)\) and \(C(\varepsilon,n)\) are constants depending only on \(\varepsilon\) and \(n\). Similarly, as in [3], we can calculate by using Young's inequality and the properties of \(\eta\): \[II=2\phi\frac{|\nabla\psi|^{2}}{\psi}\leq\frac{\varepsilon}{4}\psi\phi^{2}+\frac{C(\varepsilon,n)}{R^{4}}.\] Now we use the assumption \(|\vec{H}(p,t)|\leq C_{H}\). Since \(\partial_{r}\eta\leq 0\), we have \[\Delta\psi=\left(\Delta r\right)\left(\partial_{r}\eta\right)+|\nabla r|^{2}\left(\partial_{r}^{2}\eta\right)\geq\left(C_{H}+\frac{n}{r}\right)\left(\partial_{r}\eta\right)-n\left|\partial_{r}^{2}\eta\right|.\] Hence, for the second term of \(III\), we obtain in the same way as [3] \[-\phi\Delta\psi\leq\frac{\varepsilon}{4}\psi\phi^{2}+C(\varepsilon,n)\left(\frac{1}{R^{4}}+\frac{1}{R^{2}}\right).\] (Note that we may assume \(R/2\leq r\) for the second inequality, since \(\partial_{r}\eta\equiv 0\) for \(r\leq R/2\).) As for the first term of \(III\), as in [3] we have \[\phi\left(\partial_{t}\psi\right)\leq\frac{\varepsilon}{4}\psi\phi^{2}+C\left(\varepsilon,C_{H}\right)\left(\frac{1}{R^{2}}+\frac{1}{T^{2}}\right).\] Combining the above estimates, we finally obtain \[\frac{1}{n}(8-2h_{2})(h_{2}-h\circ\gamma)\psi\phi^{2}\leq\frac{\varepsilon}{4}\psi\frac{|H|^{\frac{8}{3}}|B|^{\frac{4}{3}}}{(h_{2}-h\circ\gamma)^{4}}+\frac{3\varepsilon}{4}\psi\phi^{2}+C\left(\varepsilon,n,C_{H}\right)\left(\frac{1}{R^{4}}+\frac{1}{R^{2}}+\frac{1}{T^{2}}\right).\] Noticing our assumption \(|B|\leq C_{B}|H|\), and since \(1\leq h\leq h_{1}<h_{2}<4\), we can take a sufficiently small \(\varepsilon\) such that \[2(8-2h_{2})(h_{2}-h\circ\gamma)-\varepsilon>0.\] Then we have \[(\psi\phi)^{2}\leq\psi\phi^{2}\leq C\left(\frac{1}{R^{4}}+\frac{1}{R^{2}}+\frac{1}{T^{2}}\right).\] Since \(\psi\equiv 1\) on \(D_{R/2,T/2}(o)\), \[\sup_{D_{R/2,T/2}(o)}\frac{|H|}{h_{2}-h\circ\gamma}\leq C\left(\frac{1}{R}+\frac{1}{\sqrt{R}}+\frac{1}{\sqrt{T}}\right).\] This completes the proof of Theorem 5.
2309.11578
The Role of Groups in Galaxy Evolution: compelling evidence of pre-processing out to the turnaround radius of clusters
We present clear and direct evidence of the pre-processing effect of group galaxies falling into clusters in the local Universe ($z \lesssim 0.1$). We start with a sample of 238 clusters, from which we select 153 with N$_{200} \ge$ 20. We considered 1641 groups within the turnaround radius ($\sim$ 5$\times$R$_{200}$) of these 153 clusters. There are 6654 {\it individual cluster galaxies} and 4133 {\it group galaxies} within this radius. We considered two control samples of galaxies, in isolated groups and in the field. The first comprises 2601 galaxies within 1606 {\it isolated groups}, and the latter has 4273 field objects. The fraction of star forming galaxies in infalling groups has a distinct clustercentric behavior in comparison to the remaining cluster galaxies. Even at $5 \times $R$_{200}$ the {\it group galaxies} already show a reduced fraction of star forming objects. At this radius, the result for the {\it individual cluster galaxies} is actually compatible with the field. That is strong evidence that the group environment is effective at quenching star formation prior to the cluster arrival. The group star forming fraction remains roughly constant inwards, decreasing significantly only within the cluster R$_{200}$ radius. We have also found that the pre-processing effect depends on the group mass (indicated by the number of members). The effect is larger for more massive groups. However, it is significant even for pairs and triplets. Finally, we find evidence that the time scale required for morphological transformation is larger than the one for quenching.
P. A. A. Lopes, A. L. B. Ribeiro, D. Brambila
2023-09-20T18:30:54Z
http://arxiv.org/abs/2309.11578v1
The Role of Groups in Galaxy Evolution: compelling evidence of pre-processing out to the turnaround radius of clusters ###### Abstract We present clear and direct evidence of the pre-processing effect of group galaxies falling into clusters in the local Universe (\(z\lesssim 0.1\)). We start with a sample of 238 clusters, from which we select 153 with N\({}_{200}\geq 20\). We considered 1641 groups within the turnaround radius (\(\sim 5\times\)R\({}_{200}\)) of these 153 clusters. There are 6654 _individual cluster galaxies_ and 4133 _group galaxies_ within this radius. We considered two control samples of galaxies, in isolated groups and in the field. The first comprises 2601 galaxies within 1606 _isolated groups_, and the latter has 4273 field objects. The fraction of star forming galaxies in infalling groups has a distinct clustercentric behavior in comparison to the remaining cluster galaxies. Even at 5\(\times\)R\({}_{200}\) the _group galaxies_ already show a reduced fraction of star forming objects. At this radius, the result for the _individual cluster galaxies_ is actually compatible with the field. That is strong evidence that the group environment is effective at quenching star formation prior to the cluster arrival. The group star forming fraction remains roughly constant inwards, decreasing significantly only within the cluster R\({}_{200}\) radius. We have also found that the pre-processing effect depends on the group mass (indicated by the number of members). The effect is larger for more massive groups. However, it is significant even for pairs and triplets. Finally, we find evidence that the time scale required for morphological transformation is larger than the one for quenching. keywords: surveys - galaxies: clusters: general - galaxies: groups: general - galaxies: star formation - galaxies: evolution. ## 1 Introduction According to the concordance model of the Universe (\(\Lambda\)CDM), low-mass dark matter halos are formed first (at high redshift), while larger halos come later through mergers and/or accretion of smaller systems. In this hierarchical structure formation scenario, galaxy clusters represent the most massive and latest systems to form in the Universe due to their own gravity. Hence, we expect the presence of substructures, in the form of infalling groups, within clusters. Such substructure has been detected for many years, at different wavelengths (Bahcall, 1977; Jones & Forman, 1984; Dressler & Shectman, 1988; Mohr et al., 1993; Pinkney et al., 1996; Girardi et al., 1997; Lopes et al., 2006, 2018). Galaxy properties are well known to depend on the environment in which they are located (Oemler, 1974; Dressler, 1980; Baldry et al., 2006; Cucciati et al., 2006; Cooper et al., 2006). Late-type, gas rich, blue star forming galaxies prefer sparsely populated regions of the Universe, while early-type, gas poor, and passive objects dominate the densest locations, such as the central parts of groups and clusters. Several mechanisms are expected to influence galaxy evolution in dense environments. Those processes can be related to interactions with other members and/or with the cluster potential. Another possibility is through interactions with the hot gas trapped in massive systems (groups and clusters; De Lucia et al., 2012). However, some mechanisms (such as tidal and ram pressure stripping) are more effective in the central parts of clusters, while others are more common in their outskirts and within groups.
For example, due to the high relative velocity between cluster members, mergers are a rare phenomenon within their virial radius, being more common inside groups. The combination of the hierarchical structure growth with the environmental dependence of galaxy properties naturally creates the expectation that part of the galaxies within clusters (those infalling within groups) are more evolved than the rest. This fast aging is a result of their prior life within the dense environment of a group. The term used to describe this phenomenon is 'pre-processing' (Zabludoff et al., 1996; Fujita, 2004). This process has been extensively investigated in the past years, using simulations (Bahe et al., 2019; Han et al., 2018; Bakels et al., 2021) and/or observations (Cortese et al., 2006; Dressler et al., 2013; Roberts & Parker, 2017; Einasto et al., 2020; Estrada et al., 2023). It can be detected through the investigation of many different properties. For instance, it has been shown that the fraction of star forming (SF) galaxies in clusters increases from the center to the outskirts, but never reaches the field level (even at very large radii). The interpretation is that galaxies arriving in clusters, but previously belonging to groups, had their evolution accelerated in the group environment, leading to a reduced fraction of SF objects among those systems (Lewis et al., 2002; Haines et al., 2015; Bianconi et al., 2016, 2018). The existence of merging features among some cluster galaxies is also taken as an indirect proof of the pre-processing effect, as mergers are much more likely to happen within groups. It is important to bear in mind that galaxies arriving in clusters within groups will also face what is called _post-processing_, a combination of the environmental effects from the parent group and the cluster. However, it is difficult to disentangle this effect, which has not been deeply investigated (an exception is found in Choque-Challapa et al., 2019). Previous studies in the literature have focused on different aspects of the pre-processing effect. For instance, McGee et al. (2009); De Lucia et al. (2012); Bahe et al. (2013); Pallero et al. (2019) investigated the accretion history of group galaxies into clusters. Donnari et al. (2021) aimed to disentangle the effects of AGN feedback, environment, and pre-processing, finding those depend on the galaxy and host mass. Some works investigated the variation of the star-forming (or passive) population as a function of clustercentric distance (Haines et al., 2015). However, just a few studies tried to separate the group from the non-group populations (Hou et al., 2014; Bianconi et al., 2018) or sampled clusters out to at least 5\(\times\)R\({}_{200}\) (Lewis et al., 2002). This letter presents an investigation of the pre-processing effect (focusing on the variation of the star forming and late-type fractions), based on a large sample of cluster galaxies, which are separated into galaxies belonging or not belonging to infalling groups, out to the turnaround radius. We have also used two control samples of galaxies, in isolated groups and in the field, for comparison. This work is structured as follows. In Section 2 we describe our data and the methodology used to build each sample. In §3 we present our results, and a discussion is given in §4. The cosmology assumed in this work considers \(\Omega_{\rm m}=\)0.3, \(\Omega_{\lambda}=\)0.7, and H\({}_{0}=100\) h km \(s^{-1}\) Mpc\({}^{-1}\), with h set to 0.7.
## 2 Data and Methodology ### The galaxy data The photometric and spectroscopic data used in this paper were taken from the seventh release of the Sloan Digital Sky Survey (SDSS). The magnitudes retrieved from the SDSS are de-reddened model magnitudes. We derived absolute magnitudes taking into account the distance modulus and the k- and e-corrections. Rest-frame colours are also derived for all objects. We also use the total stellar mass and star formation rate (SFR) values obtained by the MPA-JHU group (Brinchmann et al., 2004). Note that this study is based only on a complete sample of what we call _bright galaxies_, having M\({}_{r}\leq\) M\({}^{*}+1\) (= -20.58). We also impose a minimum stellar mass cut (Log M\({}_{*}=\) 9.50). We adopt the \(\Sigma_{5}\) galaxy density estimator as a tracer of the local environment. For each galaxy in our sample, we compute the projected distance, d\({}_{5}\), to the 5th nearest galaxy around it. We also impose on the neighbor search a maximum velocity offset of 1000 \(km\) s\({}^{-1}\), and a luminosity limit, which we adopt as M\({}_{r}=\) M\({}^{*}+1\). The local density \(\Sigma_{5}\) is simply given by 5/\(\pi\)d\({}_{5}^{2}\), and is measured in units of galaxies/Mpc\({}^{2}\). Finally, we also take into account the fiber collision issue when deriving galaxy densities. The procedure is well described in La Barbera et al. (2010); Lopes et al. (2014). ### Clusters and groups This work investigates the properties of galaxies belonging to clusters out to the turnaround radius (R\({}_{\rm ta}\), assumed to be \(\sim\) 5\(\times\)R\({}_{200}\), as shown in Rines & Diaferio, 2006). Some of these galaxies are falling into the clusters as part of other systems (groups), while the remaining comprise the rest of the cluster population (galaxies not associated to any infalling group). For the current work, we call the first population _group galaxies_ and the second _individual cluster galaxies_. We separate the two populations as we identify which cluster galaxies belong to infalling groups. We describe below the cluster and group samples and the identification of the different galaxy populations. #### 2.2.1 The cluster sample The cluster sample is a combination of different catalogs that we have been working with for the past years. We have clusters from the supplement version of the Northern Sky Optical Cluster Survey (NoSOCS, Lopes et al., 2004, 2009), the Cluster Infall Regions in the SDSS (CIRS, Rines & Diaferio, 2006), the HIghest X-ray FLUx Galaxy Cluster Sample (HIFLUGCS, Reiprich & Bohringer, 2002; Andrade-Santos et al., 2017), the Planck Early Sunyaev-Zel'Dovich (ESZ, Planck Collaboration et al., 2011), the SPIDERS catalog (Kirkpatrick et al., 2021), and now we also add clusters from Tempel et al. (2012) (see §2.2.2 below). In Brambila et al. (2023) we combined all the catalogs above with the exception of the one from Tempel et al. (2012). For the current work we consider only objects with \(0.03\leq z\leq 0.10\). The upper redshift limit (\(z=0.10\)) is due to the completeness limit of the SDSS main spectroscopic sample, limited at \(r_{\rm petro}=\) 17.77. That corresponds to an absolute magnitude limit of M\({}_{r}\sim\) M\({}^{*}+1=-20.58\) at \(z\sim 0.10\)\({}^{1}\). We consider as clusters all objects with M\({}_{200}\geq 10^{14}\) M\({}_{\odot}\). Given these constraints, we have 133 clusters from the above catalogs (without those from Tempel et al. (2012)). Footnote 1: Note that those are the cluster redshift limits. The galaxies within clusters span a slightly broader range (\(0.025<z<0.105\)).
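As an illustration, the \(\Sigma_{5}\) estimator defined in §2.1 can be sketched as follows. This is our sketch, with assumed array inputs; the magnitude limit follows from M\({}^{*}+1=-20.58\), i.e. M\({}^{*}=-21.58\).

```python
import numpy as np

def sigma5(d_proj_mpc, dv_kms, m_abs, m_star=-21.58):
    """Local density Sigma_5 = 5 / (pi * d5^2) for one galaxy (gals/Mpc^2).

    d_proj_mpc : projected distances (Mpc) to candidate neighbours
    dv_kms     : line-of-sight velocity offsets (km/s) to the same neighbours
    m_abs      : neighbour absolute r-band magnitudes
    Cuts follow Sec. 2.1: |dv| <= 1000 km/s and M_r <= M* + 1.
    """
    sel = (np.abs(dv_kms) <= 1000.0) & (m_abs <= m_star + 1.0)
    d = np.sort(np.asarray(d_proj_mpc)[sel])
    if d.size < 5:
        return np.nan              # not enough neighbours for the estimator
    d5 = d[4]                      # distance to the 5th nearest neighbour
    return 5.0 / (np.pi * d5**2)
```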
We applied the shifting gapper technique (Fadda et al., 1996; Lopes et al., 2009) to all galaxies with available redshifts around each cluster to select members and exclude interlopers. One difference with respect to our previous approach is that we consider all galaxies within 10.0 h\({}^{-1}\) Mpc (instead of 2.5 h\({}^{-1}\) Mpc), as we now want to have a member list sampling the infall pattern of the clusters. However, we only use the members within 2.5 h\({}^{-1}\) Mpc to derive an initial estimate of the velocity dispersion (\(\sigma_{\rm cl}\)). An estimate of M\({}_{200}\) is obtained adopting equation 1 of Ferragamo et al. (2020) (also see Munari et al., 2013). The corrections applied by Ferragamo et al. (2020) to \(\sigma_{\rm cl}\) and M\({}_{200}\) are also employed. Next, an estimate of R\({}_{200}\) is derived from the above mass estimate. Then we derive final \(\sigma_{\rm cl}\) and mass estimates, now considering only members within R\({}_{200}\) (instead of 2.5 h\({}^{-1}\) Mpc). We refer the reader to Lopes et al. (2009, 2014, 2018) and Ferragamo et al. (2020) for more details on the estimates above. This same procedure is applied to all systems in the catalog from Tempel et al. (2012) with at least three FoF members and 0.03 \(\leq z\leq 0.10\) (17801 objects). We only use the coordinates, redshift, and velocity limits (derived from the FoF members) as input to our code. We are able to obtain estimates of \(\sigma_{\rm cl}\), R\({}_{200}\) and M\({}_{200}\) for 2854 groups and clusters of Tempel et al. (2012). We then kept only the 189 clusters of this data set (M\({}_{200}\geq 10^{14}\) M\({}_{\odot}\)). Our original sample described above has 120 clusters (out of 133) within the main contiguous area of SDSS, which is used by Tempel et al. (2012) (see their Fig. 1). Hence, we combined these 120 clusters from our original list with the 189 from Tempel et al. (2012), resulting in 238 objects. We still impose a final cut requiring a minimum number of 20 galaxies within R\({}_{200}\) to call an object a cluster (N\({}_{200}\geq 20\)), leading to a final cluster sample of 153 systems. There are 17839 _individual galaxies_ associated to the 238 clusters described above. Considering only clusters with N\({}_{200}\geq 20\) and galaxies within 5 \(\times\) R\({}_{200}\), we are left with 12628 galaxies within the 153 clusters. We actually only work with _bright galaxies_ (M\({}_{r}\leq\) M\({}^{*}\)+ 1), and impose that Log M\({}_{*}\geq\) 9.50, so that the final sample comprises 6654 cluster galaxies. A very important remark is that these numbers do not represent the actual number of members obtained by the shifting gapper method. The reason is that we excluded from this cluster member list the galaxies that are also members of infalling groups (4133 objects), as described in §2.2.2.1. Hence, the total number of cluster members is larger (10787), as we split those into two populations, of _group galaxies_ and _individual cluster galaxies_. The number above (6654) reflects only the second population.
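For concreteness, the chain from velocity dispersion to M\({}_{200}\) and R\({}_{200}\) used above can be sketched as follows. The \(\sigma\)-M coefficients below are indicative values in the spirit of Munari et al. (2013); they are not the exact calibration (with corrections) of Ferragamo et al. (2020) adopted in this work.

```python
import numpy as np

G = 4.301e-9          # Mpc (km/s)^2 / Msun
H0 = 70.0             # km/s/Mpc (h = 0.7, as in Sec. 1)

def e_z(z, om=0.3, ol=0.7):
    """E(z) = H(z)/H0 for a flat LCDM cosmology."""
    return np.sqrt(om * (1.0 + z)**3 + ol)

def m200_from_sigma(sigma_kms, z, A=1177.0, alpha=0.364):
    """Invert a Munari et al. (2013)-style relation,
    sigma = A * (E(z) * M200 / 1e15 Msun)^alpha  ->  M200 in Msun.
    A and alpha are indicative, not the paper's exact calibration."""
    return 1e15 / e_z(z) * (sigma_kms / A)**(1.0 / alpha)

def r200_from_m200(m200_msun, z):
    """R200 (Mpc) from M200 = (4/3) pi R200^3 * 200 rho_c(z),
    which reduces to R200 = (G M200 / (100 H(z)^2))^(1/3)."""
    Hz = H0 * e_z(z)
    return (G * m200_msun / (100.0 * Hz**2))**(1.0 / 3.0)

# e.g. a sigma_cl of ~1000 km/s at z = 0.05 gives M200 ~ 6e14 Msun, R200 ~ 1.7 Mpc
```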
#### 2.2.2 The group sample We adopted the group sample from Tempel et al. (2012), which was built using the Sloan Digital Sky Survey (SDSS) Data Release 8 (DR8). In reality, their sample comprises groups and clusters. Their clusters were actually used in combination with the other cluster samples above (see §2.2.1). However, the only group sample we consider for this work is theirs. The authors applied a modified friends-of-friends (FoF) method with a variable linking length in both directions, eliminating selection effects and achieving high completeness. Their catalog has 77858 groups (and clusters) with two or more members. The number of observed members in each group is called its richness. The authors also provide other group parameters that scale with mass, such as an estimate of the virial radius (given by the projected harmonic mean), the velocity dispersion, and the total luminosity of the group in the \(r-\)band. Further details on the generation of their catalog with the FoF algorithm, and the available galaxy and group properties, can be found in Tempel et al. (2012). There are 40330 objects in the catalog from Tempel et al. (2012) with \(0.03\leq z\leq 0.10\). In order to eliminate clusters in common with our list of 238 clusters described above (in §2.2.1), we compared the two catalogs, keeping 40100 systems with at least two members from Tempel et al. (2012). Those objects have no counterpart among the 238 clusters. For these groups, we kept the FoF membership assignment, which is more appropriate for their very low number of members. #### 2.2.2.1 The infalling group sample We considered a membership matching approach to select groups in the infall regions of clusters. We compared all the galaxies that are group members (according to the FoF) to those that are cluster members (from the shifting gapper technique). All the FoF groups with galaxies matched to the cluster member list are considered infalling groups. It is obviously possible that not all galaxies from an infalling group have a counterpart among the cluster members. That happens for groups that are close to the escape velocity of a cluster at a given radius. That could be the case for recent arrivals into the clusters or due to imperfect membership assignment (from the FoF or shifting gapper). For those cases, we did consider all group galaxies in our analysis, even if some of them are not matched to cluster galaxies. These objects correspond to \(\sim\) 12% of the group galaxies. However, we verified that our results (§3) are not affected if we exclude them. It is important to keep in mind that what we consider as groups infalling into clusters are systems located within the caustic profile associated to the cluster members. This profile can be seen as the result of the membership selection produced with the shifting gapper technique. Although we have infalling groups with small clustercentric distances (\(\lesssim\) R\({}_{200}\)), their velocity offset distribution (relative to their parent clusters) is non-Gaussian, as expected for an infalling population. That is in full agreement with Haines et al. (2018), who selected infalling X-ray groups around massive clusters. Due to the limited field of view, their selection resulted only in groups with small radial offsets (0.3 \(\lesssim\) R/R\({}_{200}\lesssim\) 1.3), but with a non-Gaussian velocity distribution. We find 3792 groups in the infall regions of clusters. However, as we consider the infall region limited to the turnaround radius (\(\sim\)5 \(\times\) R\({}_{200}\)), we actually have 1941 groups within this limit, 1641 of which are associated to clusters with N\({}_{200}\geq\) 20. There are 18340 galaxies in the 3792 infalling groups. Considering only groups that are in the infall regions of clusters with N\({}_{200}\geq\) 20, we have 15240 galaxies, of which 7477 are within 5 \(\times\) R\({}_{200}\) of their parent cluster. Out of those, we have 4133 bright galaxies (also having Log M\({}_{*}\geq\) 9.50). Those are the members of our infalling group sample of 1641 systems.
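The membership matching used to flag infalling groups reduces to a set intersection over galaxy identifiers; a minimal sketch with assumed data structures:

```python
def flag_infalling_groups(fof_groups, cluster_members):
    """Flag FoF groups whose members overlap a cluster's shifting-gapper
    member list (Sec. 2.2.2.1).

    fof_groups      : dict {group_id: set of galaxy IDs}  (assumed structure)
    cluster_members : set of galaxy IDs belonging to one cluster
    Returns the IDs of groups with at least one matched member."""
    return [gid for gid, members in fof_groups.items()
            if members & cluster_members]

# Usage sketch:
groups = {1: {"g01", "g02"}, 2: {"g10", "g11", "g12"}}
print(flag_infalling_groups(groups, {"g02", "g99"}))   # -> [1]
```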
#### 2.2.2.2 The isolated group sample We have also created a sample of isolated groups. We did so to have a comparison group sample free of the effects related to the cluster environment. We have 36308 groups that are not in the infall patterns of clusters (after excluding the 3792 infalling groups from the sample of 40100 groups). To consider a group as isolated, we do the following. We compare each group to high density galaxies (\(\Sigma_{5}\geq\) 5 gals/Mpc\({}^{2}\); typical for galaxies within R\({}_{200}\) of clusters). A group is considered isolated if it is not found within 15 Mpc and \(\pm\) 3000 km s\({}^{-1}\) of a high density galaxy. From the 36308 groups we selected 1606 isolated groups. There are 4097 galaxies within those systems, of which 2601 are bright galaxies with Log M\({}_{*}\geq\) 9.50. ### The field sample One way to characterize the pre-processing effect is through the comparison of the fraction of star forming galaxies (or other populations or galaxy properties) in clusters and in the field. We built a field sample through the comparison of each galaxy (from SDSS DR7) to a group and cluster catalog, as in Brambila et al. (2023). However, here we discard galaxies with a distance smaller than 4 Mpc and with \(|\Delta z|\leq\) 0.10 of any object from a combined cluster catalogue (based on the sample from Gal et al. 2009 and all catalogues described above). Besides that, we also avoided galaxies within a radius of 2.0 Mpc and a velocity difference of \(\pm\) 2000 km s\({}^{-1}\) from a bright neighbor (M\({}_{r}\leq\) M\({}^{*}\)+ 1) in the spectroscopic union sample of the SDSS DR7. We found 4598 bright field galaxies in the same redshift range as the cluster galaxies (0.025 \(<z<\) 0.105). The distribution of Log \(\Sigma_{5}\) shows a sharp cut at -0.5. However, there are still some galaxies (\(<\) 0.5%) with larger density values, which we exclude. We also remove one galaxy with Log \(\Sigma_{5}\) = -4.0. Hence, the field list was then reduced to 4551 bright galaxies (also having Log M\({}_{*}\geq\) 9.50). We performed a last cleaning, discarding 278 objects that were part of a group from Tempel et al. (2012), so that the final field sample comprises 4273 bright galaxies. However, the exclusion of these objects does not impact the field results shown below. To sum up, unless otherwise stated, these are the numbers of bright and massive galaxies (M\({}_{r}\leq\) M\({}^{*}\)+ 1; Log M\({}_{*}\geq\) 9.50) this work is based on: 6654 _individual cluster galaxies_, 4133 _group galaxies_, 2601 _isolated group galaxies_ and 4273 field objects. The median values of Log M\({}_{*}\) (the first and third quartiles are in parentheses) for these four samples are 10.66 (10.48; 10.87), 10.65 (10.45; 10.87), 10.64 (10.43; 10.87) and 10.54 (10.33; 10.76), respectively.
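The environmental cuts used above for the isolated-group and field samples share the same structure: a projected-distance window combined with a velocity window. A sketch of the isolated-group test of §2.2.2.2, with the thresholds quoted above (array inputs are assumptions):

```python
import numpy as np

def is_isolated(rproj_mpc, dv_kms, sigma5_gal,
                rmax=15.0, dvmax=3000.0, dense=5.0):
    """Isolated-group test (Sec. 2.2.2.2): a group is isolated when no
    high-density galaxy (Sigma_5 >= 5 gals/Mpc^2) lies within 15 Mpc and
    +/- 3000 km/s. Inputs are per-galaxy offsets from the group centre
    and the galaxies' Sigma_5 values."""
    near = (np.asarray(rproj_mpc) < rmax) & (np.abs(dv_kms) < dvmax)
    return not np.any(np.asarray(sigma5_gal)[near] >= dense)
```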
## 3 Pre-processing galaxies in groups The pre-processing effect is usually characterized in the literature by the difference in the fraction of star forming galaxies (F\({}_{\rm SF}\)) in clusters and in the field. We expect the F\({}_{\rm SF}\) in clusters to increase from the center to the outskirts of clusters. However, the measurements of F\({}_{\rm SF}\) are generally found to be below the field results, even at large cluster radii (\(\gtrsim 3\times\) R\({}_{200}\), Haines et al., 2015). This difference is interpreted as the result of the group environment affecting cluster galaxies prior to their infall. In Fig. 1 we display the F\({}_{\rm SF}\) as a function of clustercentric distance (up to 5 \(\times\) R\({}_{200}\)) for different populations. We consider the galaxy location in the star formation rate-stellar mass plane to classify galaxies as passive or star-forming. We call galaxies SF if their log (SFR) value is greater than or equal to the value given by the line defined by equation 2 of Trussler et al. (2020) (log SFR = 0.70 log M\({}_{*}-8.02\)). In Fig. 1 the black squares represent all galaxies that are cluster members and the gray dashed line indicates the field fraction. That is usually what we see in the literature (although observational results normally do not reach 5 \(\times\) R\({}_{200}\)), with the difference between the field and cluster values at large radii attributed to the pre-processing effect. Here we further separate the cluster data into two subpopulations, of _individual cluster galaxies_ (blue diamonds) and _group galaxies_ (red circles); see §2.2.1 and §2.2.2.1. We also show the results for galaxies in isolated groups (magenta dot-dashed line). In addition to the usual indirect signature of the pre-processing effect (the difference between the field and cluster results - gray line and black points), we now provide direct and compelling evidence in its favor. The F\({}_{\rm SF}\) grows from the center to the clusters' outskirts. However, the blue diamonds (_individual cluster galaxies_) do not display a flat behavior for R \(\gtrsim 2\times\)R\({}_{200}\). Actually, the F\({}_{\rm SF}\) values of _individual cluster galaxies_ are reconciled with the field fractions at \(\sim 5\times\)R\({}_{200}\). The difference between the field and cluster results can be fully attributed to the _group galaxies_ (red points), as those are previously pre-processed in these dense environments. Fig. 2 provides a natural explanation for the pre-processing effect. We display the local density parameter (\(\Sigma_{5}\)) _vs_ (R / R\({}_{200}\)) for all galaxies in clusters (black squares), _individual cluster galaxies_ (blue diamonds) and _group galaxies_ (red circles). We also show the results for the isolated groups (magenta dot-dashed line). The mean \(\Sigma_{5}\) value for the field is 0.082 gals Mpc\({}^{-2}\). We omit it from the figure for clarity. \(\Sigma_{5}\) decreases from the central part of clusters to their outskirts. However, the _individual cluster galaxies_ (blue points) reach much smaller densities (as they are not part of smaller systems), of \(\sim 0.7\) gals Mpc\({}^{-2}\). For R \(\gtrsim 3\times\)R\({}_{200}\) the local density (\(\Sigma_{5}\)) becomes approximately flat, while the values for the _group galaxies_ (red points) reach a plateau for R \(\gtrsim 1-2\times\)R\({}_{200}\), with \(\Sigma_{5}\sim 4\) gals Mpc\({}^{-2}\), reflecting the higher density environment of groups. Hence, this plot, in combination with Fig. 1, confirms that _group galaxies_ are quenched before the rest of the cluster population (_individual cluster galaxies_), due to environmental effects within infalling groups.
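A sketch of the SF classification and of the binned F\({}_{\rm SF}\) profile, with the binomial standard error of a proportion used for the error bars of Fig. 1; the column handling is an assumption of ours:

```python
import numpy as np

def is_star_forming(log_sfr, log_mstar):
    """SF/passive split along the Trussler et al. (2020) line quoted above:
    log SFR = 0.70 log M* - 8.02 (galaxies on or above the line are SF)."""
    return np.asarray(log_sfr) >= 0.70 * np.asarray(log_mstar) - 8.02

def sf_fraction_profile(r_over_r200, sf_flag, width=0.5, rmax=5.0):
    """F_SF in bins of R/R200; the X coordinate of each bin is the mean of
    its points, and the error is sqrt(p(1-p)/n) (standard error of a
    proportion), as in the figure captions."""
    r = np.asarray(r_over_r200)
    sf = np.asarray(sf_flag, dtype=float)
    edges = np.arange(0.0, rmax + width, width)
    r_mean, frac, err = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (r >= lo) & (r < hi)
        n = sel.sum()
        if n == 0:
            continue
        p = sf[sel].mean()
        r_mean.append(r[sel].mean())
        frac.append(p)
        err.append(np.sqrt(p * (1.0 - p) / n))
    return np.array(r_mean), np.array(frac), np.array(err)
```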
In Fig. 3 we investigate the possible dependence of the pre-processing effect on group mass, traced by the number of members. The results for pairs and triplets (N \(\leq 3\)) are shown by the red points. The blue points display the fractions for objects with 3 \(<\) N \(\leq 10\), while we show in dark gray the results for groups with N \(>\) 10. The results for the field (gray dashed line) and isolated groups (magenta dot-dashed line) are displayed as before. For the first subset (N \(\leq 3\)) we detect a flat behavior from the outskirts down to \(\sim 2\times\)R\({}_{200}\), with smaller fractions in the two inner bins. The second subset (3 \(<\) N \(\leq 10\)) displays a nearly flat behavior over the whole interval. The results for the richest groups (N \(>\) 10) smoothly decrease towards the center (down to R \(\sim 1\times\)R\({}_{200}\)), but within R\({}_{200}\) F\({}_{\rm SF}\) displays a steep drop. An important result is that at large radii (R \(\gtrsim 2\times\)R\({}_{200}\)) there is a significant difference in the F\({}_{\rm SF}\) values according to the number of members. That indicates the fraction of quenched galaxies is already higher in large groups (in comparison to the smaller systems) when they arrive at the clusters.

Figure 1: Fraction of star forming galaxies in clusters out to 5\(\times\)R\({}_{200}\). Galaxies that are not associated to groups are displayed as blue diamonds, while those that are members of infalling groups are shown as red circles. The fractions for all cluster galaxies (part of groups or not) are in black squares. The field fraction is shown by the gray dashed line, and the fraction for the isolated groups is displayed by the dot-dashed magenta line. F\({}_{\rm SF}\) is computed in intervals of 0.5 \(\times\) (R / R\({}_{200}\)), with the values in the X coordinate given by the mean of all points within each interval. The error bars, and the gray bands over the two horizontal lines, indicate the 1\(\sigma\) standard error of a proportion.

Figure 2: The correlation of local galaxy density with normalized clustercentric distance (R / R\({}_{200}\)). The colors of the symbols and lines are the same as for Fig. 1. \(\Sigma_{5}\) is computed in intervals of 1.0 \(\times\) (R / R\({}_{200}\)), with the values in the X coordinate given by the mean of all points within each interval. The exception is for the first two points (of each population), for which we consider intervals of 0.5 \(\times\) (R / R\({}_{200}\)). The error bars (and the gray band over the horizontal line) indicate the 1\(\sigma\) standard error.
The _group galaxies_ have smaller values of F\({}_{\rm LT}\) at large clustercentric distances, when compared to the _individual cluster galaxies_. However, an important result we achieve is the fact that the typical values of F\({}_{\rm LT}\) are higher than F\({}_{\rm SF}\), for both galaxy populations, suggesting that SF is quenched in a shorter time scale than what is necessary for the morphological transformation to happens (a result we previously showed in Lopes et al., 2014). Note that our results are for bright massive galaxies. The F\({}_{\rm LT}\) values at R \(\sim 5\times\)R\({}_{\rm 200}\) of _individual galaxies_ are compatible to their F\({}_{\rm SF}\) results. But going inwards F\({}_{\rm SF}\) decreases much faster than F\({}_{\rm LT}\). ## 4 Discussion and Summary In this manuscript we present compelling and direct evidence in favor of the pre-processing effect of galaxies within groups. We do so through the comparison of the different populations of cluster galaxies, which we named _individual cluster galaxies_ (not associated to any subgroup) and _group galaxies_ (belongong to subgroups). This approach was tried only a few times in the literature (e.g., Hou et al., 2014; Bianconi et al., 2018). However, it is important to note that previous observational results usually do not reach the turnaround radius (\(\sim 5\times\)R\({}_{\rm 200}\)), being limited to at most \(3\times\)R\({}_{\rm 200}\)(Hou et al., 2014; Haines et al., 2015; Roberts and Parker, 2017; Bianconi et al., 2018). An exception is the work of Lewis et al. (2002), but they do not split the cluster galaxies as described above. Some other results in the literature are also restricted to investigate the pre-processing effect in the surrounding regions of a single (or a few) cluster or within a supercluster region (Esrada et al., 2023; Einasto et al., 2020). Our study is based on a large sample of groups (1641 infalling and 1606 isolated) and clusters (153), with the later sampled up to the R\({}_{\rm ta}\). Within this radius we have 6654 _individual cluster galaxies_ and 4133 _group galaxies_. Additionally, we have 2601 galaxies within _isolated groups_ and 4273 field objects. The F\({}_{\rm SF}\) is measured separately for the two cluster populations all the way out to \(5\times\)R\({}_{\rm 200}\) (see Fig. 1; to the best of our knowledge that is shown here for the first time). A result first seen here is also the agreement between the cluster F\({}_{\rm SF}\) infall (_individual cluster galaxies_) and field values, at large radius (close to the R\({}_{\rm ta}\)). That indicates that _individual galaxies_ - when first arriving into clusters - do have similar fractions of star forming objects as in the field. As they travel inwards they are progressively quenched, even at large clustercentric distances (in agreement with Babe et al., 2013). On the contrary, the _group galaxies_ arrive into the clusters with an already reduced fraction of star forming galaxies. Their F\({}_{\rm SF}\) values are nearly constant inwards, down to \(\sim 1-2\times\)R\({}_{\rm 200}\). A significant reduction is found only within R\({}_{\rm 200}\) (see the red points in Fig. 1). An interesting result is that at \(\sim\) R\({}_{\rm 200}\) the F\({}_{\rm SF}\) of _group galaxies_ becomes larger than the ones for the _individual cluster galaxies_. That is in disagreement with Bianconi et al. (2018), who found comparable results within R\({}_{\rm 200}\). 
We interpret this inversion in the behavior of the two populations (when going inside R\({}_{\rm 200}\)) as a combination of two effects. First, the F\({}_{\rm SF}\) results for the _individual cluster galaxies_ (blue points) are _contaminated_ by backsplash galaxies (that is expected in a large range around R\({}_{\rm 200}\)), objects that have already crossed the cluster cores once, hence being more affected by the cluster environment in comparison to the _group galaxies_.
Figure 3: Fraction of star forming galaxies _vs_ normalized clustercentric distance (R / R\({}_{\rm 200}\)) for galaxies within infalling groups. The results are divided according to the number of members of the groups. Red points indicate the results for pairs and triplets (N \(\leq 3\)), while the blue points represent systems with \(3<\) N \(\leq 10\), and dark gray points are for the richest objects (with N \(>10\)). The lines are the same as for Fig. 1. F\({}_{\rm SF}\) is computed in intervals of \(1.0\times\) (R / R\({}_{\rm 200}\)), with the values in the X coordinate given by the mean of all points within each interval. The error bars (and the gray bands over the two horizontal lines) indicate the \(1\sigma\) standard error of a proportion.
Figure 4: Analogous to Fig. 1, but showing the fraction of late-type galaxies. The colors of the symbols and lines are the same as for Fig. 1. F\({}_{\rm LT}\) is computed in intervals of \(1.0\times\) (R / R\({}_{\rm 200}\)), with the values in the X coordinate given by the mean of all points within each interval. The error bars (and the gray bands over the two horizontal lines) indicate the \(1\sigma\) standard error of a proportion.
The latter are actually not expected to survive as group galaxies after the first cluster passage (Choquet-Challapa et al., 2019; Haggar et al., 2023), so that the _group galaxies_ shown here are probably recent arrivals into the clusters. However, this effect is expected to impact our results as well as the literature ones. Second, and most importantly, the results of Fig. 1 consider all groups in our sample. As seen in Fig. 3 the F\({}_{\rm SF}\) depends on the group mass (indicated by the number of members). Smaller groups (N \(\leq 10\)) show much higher (especially within R\({}_{200}\)) star forming fractions than the richer systems (N \(>10\)). Note that the results from Bianconi et al. (2018) should reflect what is expected for more massive systems, as they select groups through their X-ray emission. We have also shown - perhaps for the first time - that the local galaxy densities (\(\Sigma_{5}\), see Fig. 2) of the two cluster populations have clustercentric variations that explain the F\({}_{\rm SF}\) radial dependence. The \(\Sigma_{5}\) values of the _individual cluster galaxies_ decrease with increasing radius, reaching much smaller values than obtained for the _group galaxies_, for R \(\gtrsim 3\times\)R\({}_{200}\). The pre-processing effect is also verified through the reduced fractions of late-type galaxies in the _group_ sample compared to the _individual cluster galaxies_. That is seen in Fig. 4, from which we can also infer that the time scale required for the morphological transformation is larger than the one for quenching. We performed a few tests in order to verify the robustness of our conclusions. The fractions of galaxy populations (such as star-forming or disc galaxies) and galaxy properties (such as color and morphology) are known to be correlated not only with the environment but also with stellar mass.
The different populations investigated in the current work have different mass distributions. In order to check whether that could impact our results we built stellar-mass-matched samples of the galaxies in the different data sets. For instance, we verified that the F\({}_{\rm SF}\) values of the _group galaxies_ do not change significantly when using the stellar-mass-matched samples (in comparison to the _individual cluster galaxies_), which were built in radial bins. The variation of F\({}_{\rm SF}\) looks less flat for R \(>3\times\)R\({}_{200}\), but it is still compatible with the original one (seen in Fig. 1). The only case in which the difference is larger than \(1\sigma\) (but still within \(2\sigma\)) is for the last radial bin (close to 5\(\times\)R\({}_{200}\)). We have also investigated whether the results change if we consider all galaxies in our original sample of 238 clusters, instead of the sample with 153 clusters (requiring N\({}_{200}\geq 20\)). Another test we applied was to select only galaxies with Log M\({}_{*}\geq 10.0\) (instead of 9.50). For both cases the F\({}_{\rm SF}\) values become a little smaller, in comparison to our original results of Fig. 1, but still in agreement, within the error bars. The results presented in this study represent direct evidence that _group galaxies_ are indeed quenched (and experience morphological transformation) before the rest of the cluster population (_individual cluster galaxies_), as the result of environmental processes within infalling groups. It is not the goal of the present work to investigate which processes are actually responsible for the accelerated quenching of the group galaxies before infall. Those could be related to galaxy encounters, starvation or ram-pressure stripping (in the case of the more massive groups), for instance. Our objective is to provide a careful selection of infalling groups within clusters and to disentangle the cluster population into two subsets: _group galaxies_ and _individual cluster galaxies_. This way we are able to perform a detailed analysis of the variation of the F\({}_{\rm SF}\) (and other properties) for these populations and to compare those to the results from the field and from an isolated group sample. Our results represent an important benchmark for cluster follow-up studies, out to 5\(\times\)R\({}_{200}\), aiming to investigate the pre-processing effect (e.g., the WEAVE Wide-Field Cluster Survey, Cornwell et al., 2022). ## Acknowledgements P.A.A.L. thanks the support of _Conselho Nacional de Desenvolvimento Cientifico e Tecnologico_ (CNPq), grants 433938/2018-8 and 312460/2021-0, and the _Fundacao de Amparo a Pesquisa do Estado do Rio de Janeiro_ (FAPERJ), grant E-26/200.545/2023. ALBR thanks the support of CNPq, grant 316317/2021-7, and Fundacao de Amparo a Pesquisa do Estado da Bahia (FAPESB) INFRA PIE 0013/2016. DB acknowledges the _Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior_ (CAPES) for a PhD fellowship. This research has made use of the SAO/NASA Astrophysics Data System and the SDSS. A list of participating institutions can be obtained from the SDSS Web Site ([http://www.sdss.org/](http://www.sdss.org/)). ## Data Availability The galaxy, group and cluster catalogs used in this work are publicly available. However, we are willing to provide - upon reasonable request - the separate lists we have created.
2309.17109
Broken Symmetry and Fractionalized Flux Strings in a Staggered U(1) Pure Gauge Theory
Inspired by self-adjoint extensions of the electric field operator in the Hamiltonian formalism, we extend the Wilsonian framework of Abelian lattice gauge theory by introducing a modified action parameterized by an angle $\alpha$, where the ordinary Wilson theory corresponds to $\alpha=0$. Choosing instead $\alpha=\pi$ (the "staggered" case) gives the only other theory in the family which preserves all symmetries of the original model at the microscopic level. We study the case of $3D$ $\mathrm{U}(1)$ pure gauge theory, simulating the staggered case of this model numerically in its dual formulation. We find evidence of a continuum limit with a spontaneously broken $\mathbb{Z}_2$ single-site translational symmetry, in contrast to the ordinary theory. Moreover, the confining string fractionalizes into multiple strands which separate spatial regions in distinct ground states of the broken symmetry.
A. Banerjee, D. Banerjee, G. Kanwar, A. Mariani, T. Rindlisbacher, U. -J. Wiese
2023-09-29T10:08:21Z
http://arxiv.org/abs/2309.17109v1
# Broken Symmetry and Fractionalized Flux Strings in a Staggered U(1) Pure Gauge Theory ###### Abstract Inspired by self-adjoint extensions of the electric field operator in the Hamiltonian formalism, we extend the Wilsonian framework of Abelian lattice gauge theory by introducing a modified action parameterized by an angle \(\alpha\), where the ordinary Wilson theory corresponds to \(\alpha=0\). Choosing instead \(\alpha=\pi\) (the "staggered" case) gives the only other theory in the family which preserves all symmetries of the original model at the microscopic level. We study the case of \(3D\) U(1) pure gauge theory, simulating the staggered case of this model numerically in its dual formulation. We find evidence of a continuum limit with a spontaneously broken \(\mathbb{Z}_{2}\) single-site translational symmetry, in contrast to the ordinary theory. Moreover, the confining string fractionalizes into multiple strands which separate spatial regions in distinct ground states of the broken symmetry. ## I Introduction Symmetry is one of the most important organizing principles of quantum field theories. In the Wilsonian framework of renormalization, for example, one includes in the action of a quantum field theory all terms consistent with the symmetries of the model. In the context of lattice field theories, one expects that two actions which share a sufficiently large subgroup of spacetime, internal, and gauge symmetries give rise to the same continuum theory. In lattice gauge theory, typical choices of the lattice action exactly implement gauge symmetry and the subgroup of Poincare invariance associated with discrete lattice translations and hypercubic space-time rotations, but otherwise differ by irrelevant operators. Nevertheless, such choices of the action have been demonstrated to recover the continuum gauge theory physics in many theories. For example, U(1) lattice gauge theory is commonly studied using either the standard Wilson action or the Villain action, which differ in their precise formulation but share the same symmetries and the same continuum limit [1; 2; 3]. From this point of view, it is natural to construct alternative discretizations for lattice gauge theories which may either recover the same continuum limit as standard theories or converge to a different continuum limit. In the former case, an alternative discretization can provide a better approach to the continuum, while in the latter case it can be useful from a model-building perspective. In the present work we put forward a method of generating alternative Abelian lattice gauge theory actions inspired by first applying a _self-adjoint extension_ to the electric field operator in the Hamiltonian formalism, then establishing a path integral using Trotterization. The use of self-adjoint extensions is natural in the operator language and manifestly preserves the operator commutation relations that define the theory. On the other hand, the resulting lattice gauge theories are far from obvious from a Euclidean action perspective. In this sense, our work expands the current framework of lattice gauge theories, potentially enabling other such reformulations of gauge theories in ways that are either beneficial for simulation or yield new continuum gauge theories. As a demonstration of the approach, we formulate a class of U(1) lattice gauge theories parameterized by an angle \(\alpha\in[0,2\pi)\). The choice \(\alpha=0\) corresponds to the Villain action, while other choices correspond to novel lattice actions with U(1) gauge symmetry.
Taking \(\alpha=\pi\) is particularly interesting as it is the only other choice which preserves all relevant symmetries of the theory, including charge conjugation and parity. In this work, we therefore focus on numerically studying the continuum limit of the \(\alpha=\pi\) theory in three spacetime dimensions. In this case, the symmetry group of lattice translations is partially broken, as translations by a single site in any direction remain a symmetry of the theory only if combined with an appropriate internal symmetry transformation, which we together denote as a "single-site shift" symmetry. This motivates us to call this the "staggered" case. Translations by an even number of lattice spacings remain unbroken, which we expect to be sufficient, together with the hypercubic space-time symmetry, to recover full Poincare invariance in the continuum. Numerical simulation demonstrates that the single-site shift symmetry is spontaneously broken and remains so as the continuum limit is approached. These simulations also demonstrate that the confining string of the theory fractionalizes into two strands separating inner and outer regions of space which exist in different vacua of the spontaneously broken symmetry. These observations suggest a continuum gauge theory that is distinct from the usual U(1) pure gauge theory in 3D. Our numerical conclusions are supported by an effective theory for the \(\alpha=\pi\) theory which is derived analytically. Several features of this model, including a phase diagram characterized by the breaking of single-site shifts and charge conjugation, as well as flux string fractionalization, have been previously observed in the context of quantum link models which display crystalline and nematic confined phases [4; 5; 6; 7], and the former is also observed in the SU(\(N\)) quantum spin ladder regularization of 2D CP(\(N-1\)) models at \(\theta=\pi\)[8; 9]. To the best of our knowledge, this work represents the first time where such phenomena are observed in a lattice gauge theory with an infinite-dimensional Hilbert space. Moreover, systems where single-site shifts play an important role have gained interest in recent years in the context of "emanant" symmetries [10; 11]. The remainder of this work is organized as follows. In Section II, we review analytical and numerical results for the standard three-dimensional U(1) lattice gauge theory. In Section III we introduce the self-adjoint extension of this U(1) gauge theory and explain how the \(\alpha\) parameter is introduced in the case of general dimensionality. We then specialize to three dimensions and consider the staggered (i.e., \(\alpha=\pi\)) theory, discussing its dualization to a height model which allows numerical simulation without a sign problem. Section IV details the numerical investigations of the dual model. We investigate order parameters for the breaking of the relevant symmetries, and we find evidence that the \(\alpha=\pi\) theory has a broken \(\mathbb{Z}_{2}\) symmetry down to the continuum limit, a feature which is absent from the standard U(1) theory. To further characterize the \(\alpha=\pi\) theory, we also present numerical calculations of its mass and string tension. In Section V we analytically derive an effective theory from the microscopic degrees of freedom, which is consistent with the results obtained via numerical simulation. Finally, in Section VI we summarize and conclude with an outlook on future steps. 
## II U(1) lattice gauge theory in 3D In this section we review the standard U(1) lattice gauge theory in three dimensions, as it is understood analytically and numerically in both the action and Hamiltonian formulation. While our construction of the \(\alpha\neq 0\) Abelian gauge theory is valid in any dimension, as we will see in later sections, in this work we focus on the three-dimensional case. As such, an understanding of the standard three-dimensional Abelian gauge theory sets a baseline expectation against which the \(\alpha\neq 0\) theory can be compared. ### Action formulation of the standard Abelian gauge theory The standard path integral formulation of U(1) lattice gauge theory in 3D is defined in terms of U(1)-valued variables arranged on the links of a 3D Euclidean space-time lattice. The discretized action is commonly chosen to be one of two equivalent formulations, either the Wilson action or the Villain action. The partition function in the Villain formulation is given by \[Z=\left(\prod_{l\in\text{links}}\int_{0}^{2\pi}d\varphi_{l} \right)\left(\prod_{p\in\text{plaq}}\sum_{n_{p}=-\infty}^{+\infty}\right)\\ \times\exp\left[-\frac{1}{2e^{2}}\sum_{p}((d\varphi)_{p}-2\pi n_ {p})^{2}\right]\,, \tag{1}\] where \(l\) runs over links connecting neighboring sites of the lattice and \(p\) runs over the plaquettes of the lattice. Labelling the ordered links in the plaquette \(p\) from \(1\) to \(4\), we define \[(d\varphi)_{p}\equiv\varphi_{1}+\varphi_{2}-\varphi_{3}-\varphi_{4}. \tag{2}\] In Eq. (1) we have absorbed the lattice spacing \(a\) in the dimensionless coupling \(e^{2}\). Unless otherwise specified, in the rest of this work we set \(a=1\). The theory has been rigorously shown to be confining at all values of the coupling, and the asymptotic scaling of the mass and string tension near the continuum limit \(e^{2}\to 0\) has also been analytically computed [12]. In particular, the mass gap \(m\) scales as (restoring the lattice spacing \(a\)) \[a^{2}m^{2}=\frac{8\pi^{2}}{e^{2}}\exp\left[-2\pi^{2}v_{0}/e^{2}\right]\,, \tag{3}\] where \(v_{0}\approx 0.2527\). On the other hand, the string tension \(\sigma\) scales as (again, restoring the lattice spacing \(a\)) \[a^{2}\sigma=\frac{\widetilde{c}}{4\pi^{2}}\,am\,e^{2}\, \tag{4}\] in terms of a dimensionless constant \(\widetilde{c}\). These results have found numerical confirmation [13; 14; 15; 16], and the constant \(\widetilde{c}/4\pi^{2}\approx 0.21\) was estimated in [13]. As the continuum is approached, we then see that the dimensionless ratios \[\frac{\sigma}{m^{2}}\to\infty\,\qquad\quad\frac{(a^{2}\sigma)}{(am)\,e^{2}} \to\frac{\widetilde{c}}{4\pi^{2}}. \tag{5}\] Equation (5) shows that the theory has several inequivalent length scales, even in the continuum. As such, different continuum limits may be obtained depending on what quantity is taken as the standard of length. In a continuum limit where the mass \(m\) is held fixed in physical units, the string tension becomes infinite, and the theory is equivalent to a free scalar (the "photoball") of mass \(m\) [12]. If the string tension \(\sigma\) is held fixed instead, the mass (in physical units) goes to zero. Finally, the dimensionful bare coupling \(e^{2}/a\) has dimensions of energy, and can be held fixed as a third possible prescription to obtain a continuum limit, with the corresponding continuum theory conjectured to be free electrodynamics [12]. For further discussion of the continuum limits of this theory, see [13].
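Since Eqs. (3)-(5) fully specify the asymptotic behavior, it is straightforward to tabulate them; the short Python sketch below does so using the quoted values \(v_{0}\approx 0.2527\) and \(\widetilde{c}/4\pi^{2}\approx 0.21\), and illustrates the divergence of \(\sigma/m^{2}\) as \(e^{2}\to 0\). This is a direct evaluation of the formulas above, not new data.

```python
import numpy as np

v0 = 0.2527          # constant in the mass-gap formula, Eq. (3)
c_over_4pi2 = 0.21   # \tilde{c}/(4 pi^2), as estimated in Ref. [13]

def mass_gap(e2):
    """Lattice-units mass gap am from Eq. (3): (am)^2 = (8 pi^2/e^2) exp(-2 pi^2 v0/e^2)."""
    return np.sqrt(8.0 * np.pi**2 / e2) * np.exp(-np.pi**2 * v0 / e2)

def string_tension(e2):
    """Lattice-units string tension a^2 sigma from Eq. (4)."""
    return c_over_4pi2 * mass_gap(e2) * e2

for e2 in (2.0, 1.0, 0.5, 0.3):
    am, a2s = mass_gap(e2), string_tension(e2)
    print(f"e^2={e2:4.2f}  am={am:.4e}  a^2*sigma={a2s:.4e}  sigma/m^2={a2s/am**2:.3f}")
```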
The analytical and numerical analysis of the theory is much simplified in its dual formulation. In particular, the partition function in Eq. (1) can be rewritten in terms of integer-valued height variables \(h_{x}\in\mathbb{Z}\) associated with sites \(x\) of the dual lattice. In terms of these new variables, the partition function is given (up to constant prefactors) by [17; 2] \[Z=\left(\prod_{x\in\text{sites}}\sum_{h_{x}=-\infty}^{+\infty}\right)\exp\left[ -\frac{e^{2}}{2}\sum_{\langle xy\rangle}(h_{x}-h_{y})^{2}\right]\,, \tag{6}\] where the product over \(x\) enumerates sites of the dual lattice and the sum over \(\langle xy\rangle\) enumerates pairs of neighboring dual sites \(x\) and \(y\). The dualization forms the basis of our analytical understanding of the theory near the continuum. Near the continuum limit the integer height variables can be replaced with real scalars, which results in the Sine-Gordon model [12; 18] \[Z=\left(\prod_{x}\int_{-\infty}^{+\infty}d\phi_{x}\right)\exp \bigg{[}-\frac{1}{2}\sum_{\langle xy\rangle}(\phi_{x}-\phi_{y})^{2}+\\ +2e^{-2\pi^{2}v_{0}/e^{2}}\sum_{x}\cos\left(\frac{2\pi\phi_{x}}{ \sqrt{e^{2}}}\right)\bigg{]}\, \tag{7}\] where \(\phi_{x}\) is a real scalar field. This provides an effective description of the theory that is valid for small \(e^{2}\). ### Hamiltonian formulation of the standard Abelian gauge theory As we will see in the next sections, the \(\alpha\neq 0\) U(1) gauge theory is most easily understood in the Hamiltonian formulation. In this section, we review the Hamiltonian formulation of the standard U(1) gauge theory. In the Hamiltonian formulation of lattice gauge theories [19], time is continuous while space is discretized into a square (in general: hypercubic) lattice. The temporal gauge \(A_{0}=0\) is chosen. A classical configuration of the theory is given by an assignment of a group-valued variable \(U_{l}\in\text{U}(1)\) to each spatial link \(l\) of the lattice. In the quantum theory, the Hilbert space on each link \(l\) is therefore given by \(\mathcal{H}_{l}\equiv L^{2}(\text{U}(1))\), the space of square-integrable functions on U(1). The elements \(\ket{\psi}\) of this space may be expanded in terms of wavefunctions \(\psi(\varphi_{l})\) over the angular variable \(\varphi_{l}\in[0,2\pi]\) associated with the U(1) group element by \(U_{l}=\exp\left(i\varphi_{l}\right)\) as \[\ket{\psi}=\int_{0}^{2\pi}d\varphi_{l}\,\psi(\varphi_{l})\ket{U_{l}}\, \tag{8}\] where the orthonormal basis \(\{\ket{U}\}\) may be interpreted as a position basis in group space. The wavefunctions satisfy \[\psi(2\pi)=\psi(0),\qquad\int_{0}^{2\pi}d\varphi\,|\psi(\varphi)|^{2}<\infty. \tag{9}\] The total Hilbert space \(\mathcal{H}_{\text{tot}}\) is then given by the tensor product of these spaces over all links, \[\mathcal{H}_{\text{tot}}=\bigotimes_{l\in\text{links}}\mathcal{H}_{l}. \tag{10}\] The Hamiltonian may be obtained by studying the transfer-matrix formulation of the path integral and taking the continuous-time limit, giving \[H=\frac{e^{2}}{2}\sum_{l\in\text{links}}E_{l}^{2}+\frac{1}{2e^{2}}\sum_{p\in \text{plaq}}B^{2}((d\varphi)_{p}). \tag{11}\] Here \((d\varphi)_{p}\) is the plaquette variable on spatial plaquettes \(p\) defined as in Eq. (2) above, while the electric field operator \(E_{l}\) on spatial link \(l\) is given by \[E_{l}=-i\frac{\partial}{\partial\varphi_{l}}.
\tag{12}\] Therefore, on each link the theory is analogous to the familiar problem of a quantum mechanical particle on the circle [20; 21]: the U(1) variable \(U_{l}=\exp\left(i\varphi_{l}\right)\) identifies the position of the particle on the circle, while the electric field \(E_{l}\) is the canonical momentum conjugate to the position. The magnetic energy term \(B^{2}(\cdot)\) couples these links together, and for the standard U(1) gauge theory it may be chosen as the Wilson term, Villain term, or any other term equivalent up to lattice artifacts. In the previous section, we have chosen the Villain formulation for the Euclidean action of the theory, since it leads to the simpler partition function. Therefore, we here also choose the Villain term, which is easiest to define in exponential form as \[\exp\left(-B^{2}((d\varphi)_{p})/2e^{2}\right)\\ =\sum_{n\in\mathbb{Z}}\exp\left(-\frac{1}{2e^{2}}((d \varphi)_{p}-2\pi n)^{2}\right)\,. \tag{13}\] For comparison, the Wilson plaquette term would take the form \(B^{2}((d\varphi)_{p})=1-\cos\left((d\varphi)_{p}\right)\). In order to correctly implement the gauge symmetry, only some states in the total Hilbert space \(\mathcal{H}_{\text{tot}}\) should be considered as physical. In particular, one includes in the physical Hilbert space \(\mathcal{H}_{\text{phys}}\) only those states \(\ket{\psi}\) which satisfy the Gauss law constraint \[G_{x}\ket{\psi}=0\,\qquad G_{x}=\sum_{i}\left(E_{x,x+\hat{i}}-E_{x-\hat{i},x} \right)\, \tag{14}\] where the sum over \(i\) runs over all spatial directions and \(\hat{i}\) is the unit vector oriented in the \(i\)-th spatial direction. Both choices of magnetic energy \(B^{2}(\cdot)\) discussed above are designed to commute with the Gauss law operators \(G_{x}\), so that the \(G_{x}\) operators commute with the Hamiltonian and gauge symmetry is properly respected. ## III U(1) Gauge Theory with an \(\alpha\) Angle We next describe the construction of a class of Abelian gauge theories inspired by self-adjoint extensions. Starting from the Hamiltonian formulation of the standard U(1) gauge theory, we extend its Hilbert space in the most general way consistent with the gauge symmetry. The possible extensions are characterized by an angle \(\alpha\), where the choice \(\alpha=0\) corresponds to the standard U(1) theory. The action formulation of the theory is then constructed via Trotterization from the Hamiltonian. In order to preserve cubic symmetries of the action, and thus Lorentz invariance in the continuum limit, the magnetic terms in the action are appropriately modified. The resulting theory explicitly breaks charge conjugation and parity unless \(\alpha=0\) or \(\alpha=\pi\). We therefore choose to focus on \(\alpha=\pi\) for more detailed study. This construction, which is quite natural in the Hamiltonian formulation, leads to a fairly complicated action which does not fall within the Wilsonian framework of gauge theories, but still shares all the symmetries of the standard U(1) gauge theory. Importantly, this includes exact U(1) gauge symmetry, which is therefore inherited by the continuum theory. While the construction applies to arbitrary spacetime dimension, we further focus on the three-dimensional case and dualize the theory, which removes the sign problem present in the original action formulation and results in a theory suitable for numerical study.
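As an aside, the Villain weight in Eq. (13) is easy to evaluate in practice because the Gaussian sum over \(n\) converges rapidly. The following sketch (our own illustration, not code from this work) evaluates it for a single plaquette angle and checks its \(2\pi\)-periodicity against the Wilson form \(B^{2}=1-\cos\).

```python
import numpy as np

def villain_weight(dphi, e2, n_max=20):
    """Single-plaquette Villain weight of Eq. (13): sum over integers n of
    exp(-((dphi - 2 pi n)^2) / (2 e^2)).  Truncating at |n| <= n_max is
    ample for the couplings considered here."""
    n = np.arange(-n_max, n_max + 1)
    return np.exp(-((dphi - 2.0 * np.pi * n) ** 2) / (2.0 * e2)).sum()

def wilson_weight(dphi, e2):
    """Wilson weight exp(-B^2/(2 e^2)) with B^2 = 1 - cos(dphi), for comparison."""
    return np.exp(-(1.0 - np.cos(dphi)) / (2.0 * e2))

# Both weights are 2*pi-periodic in the plaquette angle:
for dphi in (0.3, 0.3 + 2.0 * np.pi):
    print(villain_weight(dphi, 1.0), wilson_weight(dphi, 1.0))
```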
### Self-adjoint extension of the Hamiltonian We have seen in the previous sections that the Hilbert space of the standard U(1) gauge theory is given, on each lattice link, by the square-integrable functions on U(1). In the usual formulation, the state \(\ket{\psi}\) is defined by wavefunctions satisfying the periodicity and integrability conditions in Eq. (9). The left side of Eq. (9) ensures that the wavefunctions are continuous on the periodic U(1) manifold. However, the probabilistic interpretation of quantum theories only requires that the absolute value squared \(|\psi(\varphi)|^{2}\) be periodic on U(1) for the wavefunction to provide a representation of the U(1) symmetry. It is therefore consistent to relax the first condition of Eq. (9) and instead require only that the wavefunction be periodic up to an arbitrary phase, \[\psi(2\pi)=e^{i\alpha}\psi(0). \tag{15}\] Such a modification is familiar from the quantum mechanics of a rotor (see for example [20]). In fact, on each link the electric Hamiltonian \(-\frac{e^{2}}{2}\partial_{\varphi}^{2}\) is equivalent to the Hamiltonian of a free non-relativistic particle on the circle. Considering this fictitious particle to have charge 1, twisting the wavefunctions by the \(\alpha\) angle is equivalent to threading a magnetic flux equal to \(\alpha\) through the circle on which this charged particle lives. The choice in Eq. (15) may also be understood in a more mathematical way as a _self-adjoint extension_ [22; 23]. The Hamiltonian of the standard U(1) gauge theory involves the electric field operator \(E_{l}=-i\partial/\partial\varphi_{l}\), which acts separately on the Hilbert space of each link. A basic requirement is that the electric field must be _self-adjoint_, so that it has real eigenvalues and an orthonormal basis of eigenfunctions. This is necessary so that the Hamiltonian and the Gauss law have their expected properties. Equation (15) then describes the most general choice for the wavefunction \(\psi(\varphi)\) which is consistent with self-adjointness of the electric field. In principle, this choice only modifies the Hilbert space of the theory, and operators such as the electric field and the Hamiltonian are only affected through their domain of definition. For simplicity, however, we choose to map the Hilbert space of twisted wavefunctions defined by Eq. (15) to the ordinary Hilbert space of periodic wavefunctions via \[\psi(\varphi)\to e^{-i\varphi\alpha/2\pi}\psi(\varphi). \tag{16}\] This mapping modifies the definition of the electric field operator, sending it to \[E_{l}^{\prime}=E_{l}+\tfrac{\alpha}{2\pi}. \tag{17}\] The family of theories defined by the introduction of the angle \(\alpha\) can then be identified with theories over the ordinary Hilbert space, where instead the Hamiltonian is defined in terms of a modified electric field as in Eq. (17). The family of Hamiltonians obtained by all such modifications of the electric field operator is naturally motivated by self-adjoint extensions and maintains the gauge symmetry of the theory. Defining the self-adjoint extension by Eq. (17) also makes it clear that this is still consistent with the Gauss law constraint given in Eq. (14). The choice of wavefunctions in Eq. (15), or the equivalent modification to the electric field operator in Eq. (17), generally affects the other symmetries of the theory. Under charge conjugation, the link variables transform as \(U_{l}\to U_{l}^{*}\) and the electric fields transform as \(E_{l}\to-E_{l}\).
This maps \(\alpha\to-\alpha\) in either the wavefunction definition or electric field definition. Therefore unless \(\alpha\in\{0,\pi\}\), the resulting theory explicitly breaks charge conjugation. There is also an implicit choice of orientation in the definition of \(\alpha\), which is more clearly seen when considering the modification to the electric field operator in Eq. (17), because this operator is directional. Choosing \(\alpha\in\{0,\pi\}\) also ensures that the theory preserves parity, which reverses this direction and thus sends \(\alpha\to-\alpha\). In fact, discrete rotations would also be broken by a choice of \(\alpha\not\in\{0,\pi\}\) unless the orientations of these terms are carefully chosen across all links, further motivating the restriction to these values. In \((1+1)\)D, the substitution (17) in the gauge theory Hamiltonian is equivalent to introducing a topological \(\theta\) term where \(\theta\) corresponds to \(\alpha\)[21]. In \((3+1)\)D, the topological \(\theta\) term is instead introduced by the replacement \(\vec{E}\to\vec{E}-\frac{\theta}{8\pi}\vec{B}\), which preserves Lorentz-invariance [21]. On the other hand, the replacement (17) is _not_ Lorentz-invariant in dimensions higher than \((1+1)\)D, and, as we will see in a moment, we therefore need to appropriately modify the partition function of the theory in order to restore Lorentz invariance. In this sense the \(\alpha\) angle represents one possible higher-dimensional generalization of the two-dimensional \(\theta\) term. ### Action formulation for the \(\alpha\neq 0\) theory The action formulation can be obtained from the Hamiltonian formulation via Trotterization. In the "position basis" \(\{|U\rangle\}\) for \(U=e^{i\varphi}\in\mathrm{U}(1)\), the magnetic Hamiltonian is diagonal. The matrix elements of the exponential of the electric Hamiltonian can be computed by inserting a basis of "momentum eigenstates" \(\{|m\rangle\}\), where \(m\in\mathbb{Z}\), which satisfy \[\langle U|m\rangle=\frac{1}{\sqrt{2\pi}}e^{im\varphi} \tag{18}\] and diagonalize the electric field \(E=-i\partial_{\varphi}+\frac{\alpha}{2\pi}\) according to \[E\left|m\right>=\left(m+\frac{\alpha}{2\pi}\right)\left|m\right>. \tag{19}\] By inserting resolutions of the identity in terms of \(\{|m\rangle\}\) and applying Poisson summation, one obtains \[\langle U^{\prime}|\,e^{-\Delta\tau\frac{e^{2}}{2}E^{2}}\,|U \rangle=\frac{1}{\sqrt{2\pi e^{2}\Delta\tau}}e^{-i\frac{\alpha}{2\pi}(\varphi ^{\prime}-\varphi)}\times\\ \times\sum_{m\in Z}\exp{\left(-\frac{1}{2e^{2}\Delta\tau}\,( \varphi^{\prime}-\varphi-2\pi m)^{2}\right)}e^{i\alpha m}. \tag{20}\] Since we are working in the temporal gauge, the matrix element above describes the path integral weight of each timelike plaquette. Meanwhile, the path integral weight of each spacelike plaquette is simply given by the exponential \(e^{-\Delta\tau B^{2}((d\varphi)_{p})}\), because the operator \((d\varphi)_{p}\) is diagonal in the "position basis" \(\{|U\rangle\}\) describing the link variables in the path integral. As remarked in the Introduction, we aim to construct a theory which shares as many of the symmetries of the standard \(\mathrm{U}(1)\) theory as possible. In particular, we would like to construct a theory which is invariant under the hypercubic subgroup of the Lorentz symmetry that remains on the Euclidean lattice. Since we are interested in working on isotropic lattices, one can set as usual \(\Delta\tau=1\). For \(\alpha=0\), the weight of the timelike plaquettes given in Eq. 
(20) is then identical to the weight of the spacelike plaquettes using the Villain term given in Eq. (13). This leads to the partition function in Eq. (1) for the usual theory with the Villain action, which is manifestly isotropic and invariant under the group of Euclidean hypercubic symmetries. In order to obtain an isotropic lattice theory for \(\alpha\neq 0\), we match the spatial plaquette terms in the lattice action to the space-time plaquette terms in Eq. (20) derived by the Trotterization steps above. We note that the Hamiltonian arising from considering the one-timestep transfer matrix then includes an imaginary magnetic field term \(B^{2}(\cdot)\). However, the two-timestep transfer matrix remains positive definite and gives rise to a Hermitian Hamiltonian, indicating that this still corresponds to a well-defined quantum theory. The partition function for generic \(\alpha\) is then given by \[Z=\left(\prod_{l}\int_{0}^{2\pi}d\varphi_{l}\right)\prod_{p} \Big{[}\sum_{m\in\mathbb{Z}}e^{-i\frac{\alpha}{2\pi}(d\varphi)_{p}}\times\\ \times\exp{\left(-\frac{1}{2e^{2}}\left((d\varphi)_{p}-2\pi m \right)^{2}\right)}e^{i\alpha m}\Big{]}. \tag{21}\] This partition function is gauge invariant and is invariant under cubic rotations. For \(\alpha=0\) the partition function Eq. (21) reduces to the partition function Eq. (1) of the ordinary theory. As remarked above, choosing \(\alpha=0\) or \(\alpha=\pi\) also explicitly preserves parity invariance of the action, making the action invariant under the full cubic symmetry subgroup of the Lorentz symmetry. While the partition function in Eq. (21) generally suffers from a sign problem, this can be completely resolved by dualization. Following a similar procedure to the one used to arrive at Eq. (6) in the \(\alpha=0\) theory, the partition function for the three-dimensional \(\alpha=\pi\) theory can be dualized to that of a height model, \[Z=\left(\prod_{x\;\mathrm{odd}}\sum_{h_{x}\in\mathbb{Z}+ \frac{1}{2}}\right)\left(\prod_{y\;\mathrm{even}}\sum_{h_{y}\in\mathbb{Z}} \right)\times\\ \times\exp{\left[-\frac{e^{2}}{2}\sum_{\langle xy\rangle}(h_{x}-h _{y})^{2}\right]}. \tag{22}\] Details of the dualization are presented in Appendix A. Here each lattice site at position \(\vec{x}=(x_{0},x_{1},x_{2})\) is said to be even or odd according to the parity of \(x_{0}+x_{1}+x_{2}\), as shown in Fig. 1. The only difference compared to Eq. (6) is the assignment of half-integer height variables on the odd sites of the lattice. In particular, all the nearest neighbours of integer-valued height variables are half-integer, and vice versa. This should be contrasted with the related case of quantum link models, where the staggering is only realized in the space directions [4; 5; 6; 7]. Figure 1: A subset of the three-dimensional dual lattice, depicting the staggered integer (black dots) and half-integer (white dots) height variables associated respectively with even and odd dual sites. ### Symmetries and order parameters The staggered height model defined by the partition function Eq. (22) enjoys the following symmetries: 1. _Global \(\mathbb{Z}\)-invariance_: \(h_{x}\to h_{x}+c\) where \(c\) is any constant integer. Note that under this transformation the integer and half-integer nature of the height variables is preserved. Importantly, this symmetry should be understood as a redundancy in our description of the system, whereby the overall height around which the height variables fluctuate is irrelevant [12].
All observables are therefore required to be \(\mathbb{Z}\)-invariant. 2. \(\mathbb{Z}_{2}\) _charge conjugation_\(C\): \(h_{x}\to-h_{x}\). Again, this transformation respectively maps integers and half-integers to integers and half-integers. Note that the standard U(1) theory also enjoys this symmetry, but for \(\alpha\neq 0\), only the choice \(\alpha=\pi\) leads to an action invariant under charge conjugation. 3. \(\mathbb{Z}_{2}\) _single-site shift symmetry_\(S\): \(h_{x}\to h_{x+\hat{\mu}}+\frac{1}{2}\) where \(\mu\) is any spacetime direction. Note that the half-integer offset is required in order to preserve the integer and half-integer nature of the height variables. While \(S\) as defined is not technically speaking a \(\mathbb{Z}_{2}\) symmetry (i.e. it does not square to the identity, but rather to a translation), it can be made so by combining it with parity \(P\) and charge conjugation \(C\) in any order. For example \(CPS:h_{x}\to-h_{-x+\hat{\mu}}-\frac{1}{2}\) squares to the identity. 4. _Translations by an even number of lattice spacings_: While translations by one lattice spacing swap integers and half-integers, translations by an even number of lattice spacings preserve the nature of the height variables. We expect that invariance under these symmetries is enough to recover full translational invariance in the continuum. 5. _Remnant Lorentz symmetry_: The action of the height model is fully isotropic and thus invariant under the group of Euclidean cubic rotations and, for \(\alpha=\pi\), also reflections. This corresponds to all spacetime symmetries of the cubic lattice, and we therefore expect to recover a Lorentz-invariant theory in the continuum. An analogy for the shift symmetry comes from staggered fermions, where the fermion degrees of freedom are spread over multiple lattice sites and translations by one lattice site are a symmetry of the action only when combined with an extra internal rotation [24]. The single-site shift symmetry of staggered fermions can break spontaneously [25], and, as we will see, this also happens in our model. Moreover, quantum link models [4; 5; 6; 7], as well as the quantum spin ladder regularization of \(\text{CP}(N-1)\) models [8; 9], provide further examples of theories with a phase diagram characterized by the breaking of charge conjugation and single-site shifts. In order to investigate the possible breaking of charge conjugation \(C\) and the shift symmetry \(S\), we construct appropriate order parameters, along the lines of [4; 5; 6; 7; 8; 9]. One order parameter is defined by \[O_{CS}=\sum_{x}(-1)^{x}h_{x}=\sum_{x\,\text{even}}h_{x}-\sum_{x\,\text{odd}}h _{x}\, \tag{23}\] and changes sign under either \(S\) or \(C\). Note that \(O_{CS}\) is a sum of local observables and invariant under the global \(\mathbb{Z}\)-symmetry. We expect \(O_{CS}\) to acquire a vacuum expectation value only if both \(C\) and \(S\) are spontaneously broken; either symmetry remaining unbroken is sufficient for \(O_{CS}\) to have zero vacuum expectation value. It is therefore important to also construct an observable which is sensitive to only one of the two symmetries. In particular, we define a second order parameter \[O_{S}=\sum_{c\in\text{cubes}}\sum_{x\in c}(-1)^{x}(h_{x}-\bar{h}_{c})^{2}\, \tag{24}\] where the sum runs first over all elementary cubes \(c\) in the dual lattice, and then over dual sites \(x\) within each cube. The average height variable within the cube is defined as \[\bar{h}_{c}=\frac{1}{8}\sum_{x\in c}h_{x}. 
\tag{25}\] The observable \(O_{S}\) is \(C\)-invariant, but changes sign under the shift symmetry \(S\). We therefore expect it to acquire a vacuum expectation value if the single-site shift symmetry \(S\) is broken. The somewhat complex construction of \(O_{S}\) is required in order to obtain an observable with the correct symmetry properties, which is at the same time the sum of local terms (the cubes) and invariant under the global \(\mathbb{Z}\) symmetry. ## IV Numerical simulation In order to investigate the \(C\) and \(S\) symmetry structure and the phase diagram of the theory, we numerically simulated the staggered height model using cluster Monte Carlo algorithms [26; 27; 28]. The simulations were performed on \(L^{3}\) lattices from \(L=32\) up to \(L=256\) and couplings between \(e^{2}=0.3\) and \(e^{2}=2.0\). As remarked in previous sections, we expect the continuum limit as \(e^{2}\to 0\), much like in the standard U(1) gauge theory. Unless specifically noted, all quantities are given in lattice units in the following discussion. ### Order parameters To study the symmetry structure of the theory, it is useful to consider histograms of the order parameters \(O_{S}\) and \(O_{CS}\) as well as the normalized susceptibilities \(O_{S}^{2}/V^{2}\) and \(O_{CS}^{2}/V^{2}\) in terms of the lattice volume \(V=L^{3}\). For a spontaneously broken symmetry, we expect to see a volume-independent susceptibility and a double-peaked histogram for sufficiently large volumes. This implies that the relevant operator acquires a vacuum expectation value. The operator \(O_{S}\) demonstrates a clear signal of spontaneous breaking of \(S\) symmetry over the couplings studied. The relevant numerical data is shown in Fig. 2. In particular, for couplings \(e^{2}\gtrsim 0.65\) the normalized susceptibility \(O_{S}^{2}/V^{2}\) is essentially volume-independent. Further confirmation can be found in the histogram of the values of \(O_{S}/V\), which shows two clearly defined peaks becoming sharper as the volume is increased. For \(e^{2}\lesssim 0.65\), the normalized susceptibility \(O_{S}^{2}/V^{2}\) begins to develop volume dependence, with decreasing susceptibility as the volume increases until the volume is sufficiently large. This indicates that many of the ensembles correspond to physical volumes that are too small to exhibit spontaneous symmetry breaking. The finite-volume symmetry restoration occurs for smaller and smaller values of the coupling as the volume is increased, which is consistent with it being a finite-volume effect. Fig. 2 also shows a fit of the data from the largest volume to a functional form \(f(e^{2})=A\exp{(-B/e^{2})}\) over the range of couplings \(0.50\leq e^{2}\leq 0.80\); these couplings are large enough to avoid symmetry restoration from the finite volume and are small enough to remain close to the continuum limit. The fit form is inspired by the analytic results for the standard U(1) theory (for example Eq. (3)) as well as by the effective theory for the \(\alpha=\pi\) theory (see Eq. (52)) and provides an excellent fit to the data. Such a functional form would also imply that the \(S\) symmetry remains broken down to the continuum \(e^{2}\to 0\). In order to test whether the symmetry is restored at some small, but non-zero value of \(e^{2}\), we also introduced an offset in the fit form, which was therefore modified to \(f(e^{2})=A\exp{(-B/(e^{2}-e_{c}^{2}))}\), giving a value \(e_{c}^{2}=0.04(11)\), which is consistent with zero.
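To make the definitions above concrete, the sketch below implements a staggered height-model update and the two order parameters in plain numpy. It uses a simple checkerboard Metropolis sweep with \(h\to h\pm 1\) proposals (which preserve the staggering of Eq. (22)) rather than the cluster algorithms of Refs. [26; 27; 28] used for our production runs; the lattice size and coupling are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
L, e2 = 16, 1.0

# Parity of each site x0+x1+x2; odd sites carry half-integer heights, Eq. (22)
parity = np.indices((L, L, L)).sum(axis=0) % 2
h = np.where(parity == 1, 0.5, 0.0)   # cold start respecting the staggering

def neighbor_sum(h):
    """Sum of the six nearest-neighbor heights (periodic boundaries)."""
    s = np.zeros_like(h)
    for ax in range(3):
        s += np.roll(h, 1, axis=ax) + np.roll(h, -1, axis=ax)
    return s

def metropolis_sweep(h):
    """Checkerboard Metropolis sweep with h -> h +/- 1 proposals; neighbors
    of same-parity sites have the opposite parity, so each sublattice can
    be updated simultaneously."""
    for p in (0, 1):
        mask = parity == p
        step = rng.choice([-1.0, 1.0], size=h.shape)
        nsum = neighbor_sum(h)
        # Delta S for h -> h + step at a site with 6 neighbors:
        dS = e2 * (6.0 * h * step + 3.0 * step ** 2 - step * nsum)
        acc = mask & (rng.random(h.shape) < np.exp(-dS))
        h = np.where(acc, h + step, h)
    return h

def order_parameters(h):
    sign = 1.0 - 2.0 * parity                      # (-1)^x
    O_CS = np.sum(sign * h)                        # Eq. (23)
    # Eq. (24): one elementary cube anchored at each site (periodic lattice)
    shifts = [(-dx, -dy, -dz) for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]
    corners = [np.roll(h, sh, axis=(0, 1, 2)) for sh in shifts]
    signs = [np.roll(sign, sh, axis=(0, 1, 2)) for sh in shifts]
    hbar = sum(corners) / 8.0                      # Eq. (25)
    O_S = sum(np.sum(s * (c - hbar) ** 2) for s, c in zip(signs, corners))
    return O_CS, O_S

for _ in range(200):
    h = metropolis_sweep(h)
print(order_parameters(h))
```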
Overall, we interpret the numerical evidence to mean that the \(S\) symmetry is broken for a wide range of couplings, possibly down to \(e^{2}\to 0\) although we cannot strictly exclude that it could be restored at some small but non-zero \(e^{2}\). Importantly, we note that even though the single-site shift symmetry \(S\) is broken, translation symmetry by an even number of lattice spacings remains unbroken, and we therefore expect to recover full translational invariance in the continuum. The situation is quite different for the observable \(O_{CS}\), whose numerical data is shown in Fig. 3. In this case the normalized susceptibility \(O_{CS}^{2}/V^{2}\) decreases with the volume for all couplings considered, indicating that \(O_{CS}\) does not acquire a vacuum expectation value. The decrease is less clear in the central region, \(0.70\lesssim e^{2}\lesssim 0.80\), especially for the larger volumes. However, the histogram for \(e^{2}=0.70\) shows that while two peaks appear to be forming for small volumes, they merge into a single peak around zero as the volume is increased. Therefore the observable \(O_{CS}\) does not acquire a vacuum expectation value at any value of the coupling, which means that at least one of \(C\) or \(S\) remains unbroken. Since we have seen that \(S\) is broken, the data implies that charge conjugation \(C\) remains unbroken for all values of the couplings studied. ### Mass The mass and string tension for the staggered height model were also measured. These are particularly interesting because they can be compared with the effective theory prediction which we derive in Section V as well as with the analytical and numerical results for the standard U(1) theory. Figure 2: Left: Histogram of the operator \(O_{S}\) normalized by the volume \(V\) at \(e^{2}=0.70\). Right: Susceptibility of \(O_{S}\) normalized by \(V^{2}\) as a function of \(e^{2}\) for several volumes, with a fit of the large-volume data to the form \(A\exp{(-B/e^{2})}\). The vertical line is located at the coupling \(e^{2}=0.70\) where the histogram is shown. The double-peaked structure at large volume (left) and scaling as \(V^{2}\) (right) indicates spontaneous breaking of the \(S\) symmetry. The mass may be measured from the exponential decay of correlations between height variables \(h_{x}\) and \(h_{y}\) with increasing distance, after appropriate momentum projection and subtractions [26, 29]. The correlation function \((h_{x}-h_{y})^{2}\) is appropriately invariant under the global \(\mathbb{Z}\)-symmetry. However, since it does not factorize into a product of operators which are local in time, one may worry that it does not admit a proper spectral interpretation. In fact, while each of the three terms \((h_{x}-h_{y})^{2}=h_{x}^{2}+h_{y}^{2}-2h_{x}h_{y}\) admits a proper spectral interpretation, their individual expectation values are not well-defined because they are not \(\mathbb{Z}\)-invariant. Introducing transfer-matrix eigenstates \(\{\ket{n}\}\), on a lattice of time extent \(T\) one finds for \(T\to\infty\), \[\begin{split}\langle&(h_{x}-h_{y})^{2}\rangle=\\ &\bra{0}h_{x}^{2}\ket{0}+\bra{0}h_{y}^{2}\ket{0}-2\bra{0}h_{x} \ket{0}\bra{0}h_{y}\ket{0}\\ &-2\sum_{n>0}\bra{0}h_{x}\ket{n}\bra{n}h_{y}\ket{0}e^{-(E_{n}-E_{ 0})t}\,\end{split} \tag{26}\] where \(t\) is the time separation between \(x\) and \(y\). 
The expectation values in the first line are not individually \(\mathbb{Z}\)-invariant, but the \(\mathbb{Z}\)-dependence cancels between these terms, leaving a simple \(t\)-independent vacuum contribution. On the other hand, the matrix elements \(\bra{0}h_{x}\ket{n}\) are \(\mathbb{Z}\)-invariant for \(n\neq 0\) and are therefore well-defined. By subtracting the constant vacuum contribution, the exponential decay of the correlation function \((h_{x}-h_{y})^{2}\) thus probes the non-vacuum states in the symmetry sector of the operator \(h_{x}\). The mass is extracted from the same numerical simulations based on the cluster algorithm as used for the order parameters above. It is important to note that we are primarily interested in the behavior of this mass in the phase with the correct symmetry structure. In particular, as shown in Fig. 2, the broken \(\mathbb{Z}_{2}\) symmetry is restored at small couplings due to finite volume effects. Working at a certain fixed volume \(L^{3}\), we therefore limit the range of couplings included in fits to those where the symmetry remains broken, though the mass has been measured for all simulated couplings. The results of the numerical simulations for the mass of the \(\alpha=\pi\) theory are shown in Fig. 4 for \(L=256\) and a range of couplings, together with a fit to an exponential form. Simulations were performed for several volumes; this allows us to establish that the points in the range \(e^{2}\in[0.6,1.0]\) suffer from finite-volume effects and are therefore excluded from the fit. The mass of the \(\alpha=\pi\) theory decreases much more quickly as \(e^{2}\) is decreased compared to the standard theory.
Figure 3: Left: Histogram of the operator \(O_{CS}\) normalized by the volume at \(e^{2}=0.70\). Right: Susceptibility of \(O_{CS}\) normalized by \(V^{2}\) as a function of \(e^{2}\) for several volumes. The vertical line is located at the coupling \(e^{2}=0.70\) where the histogram is shown. A single histogram peak at large volumes (left) and scaling smaller than \(V^{2}\) (right) indicates that either the \(C\) or \(S\) symmetries are unbroken.
Figure 4: Mass \(m\) of the \(\alpha=\pi\) theory for various values of the coupling \(e^{2}\), given in lattice units. Finite volume effects can be seen from the measurements at smaller volumes diverging from larger volumes as \(e^{2}\) is taken small. The fit is performed with data in the range \(e^{2}\in[1.1,2.0]\), with the fit form motivated by our effective theory for \(\alpha=\pi\) (Eq. (52)).
In fact, according to the effective theory for \(\alpha=\pi\) (Eq. (52)), the constant in the exponent should be twice as large as compared to the standard theory. We have attempted to fit our data for the largest volume in the range \([1.1,2.0]\) where it does not suffer from finite-volume effects. For the fit, we chose a function of the form \[f(e^{2})=Ag(e^{2})\exp\left(-B/e^{2}\right)\,, \tag{27}\] where \(g\) represents different choices of prefactor. We considered \(g(e^{2})=1\) (simple exponential decay), \(g(e^{2})=1/e^{2}\) (effective theory prediction, Eq. (52)), and \(g(e^{2})=1/\sqrt{e^{2}}\) (behaviour of the standard theory, Eq. (3)). All three choices provide acceptable fits to the data, with the effective theory prediction slightly favored. The fits have a \(\chi^{2}/\mathrm{d.o.f}\approx 0.80,0.95,0.78\), and exponent \(B=4.85(7),6.33(7),5.59(7)\) respectively. As such, our data is in qualitative agreement with the effective theory prediction of \(2\pi^{2}v_{0}\approx 4.99\) in Eq.
(52) for the exponent \(B\). However, the data is not sufficiently precise to distinguish the three cases for \(g(e^{2})\), and we are working at a range of couplings which is sufficiently far from the continuum where corrections to scaling are likely important. Similarly to what was done for the observable \(O_{S}\) in Section IV.1, we have also attempted to introduce an offset by replacing \(e^{2}\) with \(e^{2}-e_{c}^{2}\) in the fit forms, in order to test the hypothesis that the transition takes place at a non-zero critical value of the coupling \(e_{c}^{2}\neq 0\). In all cases, we have found the offset \(e_{c}^{2}\) to be consistent with zero within error. ### Energy-momentum dispersion relation We next consider analogous correlation functions at non-zero momentum. The operator \[h(\vec{k},t)\equiv\sum_{\vec{x}}e^{i\vec{x}\cdot\vec{k}}h(\vec{x},t) \tag{28}\] creates a state at non-zero momentum \(\vec{k}\) as long as it is compatible with the finite-volume quantization condition \(\vec{k}=\frac{2\pi}{L}(n_{1},n_{2})\) with \(n_{1,2}\in\mathbb{Z}\). For all \(\vec{k}\neq\vec{0}\), the operator \(h(\vec{k},t)\) is \(\mathbb{Z}\)-invariant. In this case, we are free to analyze the relatively simpler correlation functions \[C(\vec{k},t)\equiv\left\langle h(-\vec{k},t)h(\vec{k},0)\right\rangle\, \tag{29}\] which require no vacuum subtraction. Fitting these correlation functions to single exponentials yields an estimate of the energy \(E(\vec{k})\) of the lowest lying state with given momentum \(\vec{k}\). To study \(E(\vec{k})\), we performed measurements across a variety of momenta for four choices of the bare coupling \(e^{2}\) at \(L=96\), making sure that our choices of couplings all lie within the correct symmetry phase for the volume used (see Fig. 2). Fig. 5 plots the lattice-units quantities \(\widehat{E}^{2}=4\sinh^{2}(aE/2)\) and \(\widehat{k}^{2}=4-2\sum_{i}\cos(ak_{i})\) which give lattice approximations to \(a^{2}E^{2}\) and \(a^{2}\vec{k}^{2}\) inspired by the simple lattice discretization of a relativistic free particle. In the continuum limit, these are expected to satisfy \(\widehat{E}^{2}=\widehat{k}^{2}\). This reference relation is shown by the diagonal line in the plot, and the data can be seen to approach this scaling as the coupling \(e^{2}\) is taken smaller. Further measurements would be required to reliably extrapolate to the continuum relation, but the data shown already strongly suggest the emergence of the full \(O(3)\) spacetime symmetry group corresponding to a Euclidean representation of a relativistic theory in the continuum limit.
Figure 5: Measurements of the energy-momentum dispersion relation for a range of couplings, plotted using the lattice-units quantities \(\widehat{E}^{2}=4\sinh^{2}(aE/2)\) and \(\widehat{k}^{2}=4-2\sum_{i}\cos(ak_{i})\) inspired by the dispersion relation of a free relativistic particle on the lattice. In the continuum limit of a relativistic theory, the dispersion relation is expected to converge to the limit \(\widehat{E}^{2}=\widehat{k}^{2}\) indicated by the black line. A clear trend towards this relativistic relation can be observed as the bare coupling is decreased towards the continuum limit. All measurements were performed on an \(L=96\) lattice volume.
### String tension To measure the string tension, we choose to insert a pair of static charges, one positive and one negative, directly in the partition function. We then employ two complementary methods: one based on a direct measurement of the system's energy with the charges inserted [30; 4] and another based on the snake algorithm [31; 32]. In both cases, a static charge propagating in time at spatial position \(\vec{x}\) is represented in the original theory by a Polyakov loop \(P(\vec{x})\) wrapping around the time direction, i.e. \[P(\vec{x})=\prod_{t=0}^{T-1}e^{i\varphi_{t}(t,\vec{x})}\, \tag{30}\] where \(\exp{(i\varphi_{t}(t,\vec{x}))}\) is the U(1) link variable at position \((t,\vec{x})\) oriented in the time direction. The expectation value of the correlator of a charge pair may be expressed directly in the dual theory by [12] \[\langle P(\vec{x}_{1})^{*}P(\vec{x}_{2})\rangle=\frac{1}{Z}\left( \prod_{x\in\text{sites}}\sum_{h_{x}}\right)\\ \times\exp{\left[-\frac{e^{2}}{2}\sum_{\langle xy\rangle}(h_{x}- h_{y}+s_{\langle xy\rangle})^{2}\right]}\, \tag{31}\] where all quantities on the right-hand side live on the dual lattice. Here \(s_{l}\) is a field of 'dislocations' defined on links of the dual lattice by \[s_{l}=\begin{cases}\pm 1&{}^{*}l\in A\\ 0&\text{otherwise}\end{cases}\, \tag{32}\] where \({}^{*}l\) is the plaquette in the original lattice dual to link \(l\), and \(A\) is any surface (i.e. connected collection of plaquettes in the original lattice) bounded by the two Polyakov loops at spatial positions \(\vec{x}_{1}\) and \(\vec{x}_{2}\). The sign of \(s_{l}\) is positive or negative depending on the orientation in which the link \(l\) is traversed. The expectation value \(\langle P(\vec{x}_{1})^{*}P(\vec{x}_{2})\rangle\) is independent of deformations of the surface \(A\), and for convenience in simulations we choose \(A\) to be the rectangular surface bounded by the Polyakov loops at \(\vec{x}_{1}\) and \(\vec{x}_{2}\) which does not cross the boundary. An example of the geometry of the Polyakov loops, rectangular surface \(A\), and the corresponding dual links is shown in Fig. 6. There also exist topologically inequivalent surfaces bounded by the same Polyakov loops: for example, the surface starting at \(\vec{x}_{1}\) and wrapping backwards through the lattice boundary to \(\vec{x}_{2}\) cannot be obtained by local deformations of our choice of surface which goes forwards from \(\vec{x}_{1}\) to \(\vec{x}_{2}\) without crossing the boundary. The choice between these topologically inequivalent surfaces amounts to a choice of boundary conditions in the dual theory, which are equivalent up to finite-volume effects. Our choice of surface allows us to suppress the wrap-around contributions and get more precise estimates of the string tension using separations \(|\vec{x}_{1}-\vec{x}_{2}|\geq L/2\) in the following. The Polyakov loop correlator \(\langle P(\vec{x}_{1})^{*}P(\vec{x}_{2})\rangle\) can be interpreted as the ratio of two partition functions, one with the static charge pair and one in the vacuum sector. To do this, we define the generalized partition function \(Z[s]\) in the background of the dislocations \(s_{l}\) by \[Z[s]=\left(\prod_{x\in\text{sites}}\sum_{h_{x}}\right)\times\exp{\left[-\frac{e^{2}}{2}\sum_{\langle xy\rangle}(h_{x}-h_{y}+s_{\langle xy\rangle})^{2}\right]}\,. \tag{33}\] Note that \(Z[0]=Z\) coincides with the partition function of the staggered height model Eq. (22), and we then have \[\langle P(\vec{x}_{1})^{*}P(\vec{x}_{2})\rangle=Z[s]/Z[0]\, \tag{34}\] for \(s\) defined as in Eq. (32).
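To fix conventions, the following sketch shows one possible encoding of the dislocation field of Eq. (32) for our rectangular choice of surface, together with the dual action appearing in Eqs. (31) and (33). The array layout (axis 0 = time, axes 1 and 2 = space, with the direction index first for \(s\)) and the placement of the surface at fixed \(y\) are our own illustrative choices; the orientation-dependent sign of \(s_{l}\) is simply set to \(+1\) here.

```python
import numpy as np

def dislocation_field(L, x1, x2, y0=0):
    """Dislocation variables s_l of Eq. (32) for the rectangular surface
    bounded by temporal Polyakov loops at spatial x-positions x1 < x2 and
    fixed y = y0.  s[mu, t, x, y] sits on the dual link leaving dual site
    (t, x, y) in direction mu; the (t, x)-plaquettes of the surface are
    pierced by y-direction dual links (mu = 2)."""
    s = np.zeros((3, L, L, L))
    s[2, :, x1:x2, y0] = 1.0  # all times t, spatial x in [x1, x2)
    return s

def dual_action(h, s, e2):
    """Dual action of Eqs. (31) and (33):
    (e^2/2) * sum over links <xy> of (h_x - h_y + s_<xy>)^2."""
    return 0.5 * e2 * sum(
        np.sum((h - np.roll(h, -1, axis=mu) + s[mu]) ** 2) for mu in range(3))
```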
Variations on the 'snake algorithm' have been introduced to effectively evaluate such ratios of partition functions [31, 32]. The basis of these approaches is to measure ratios of partition functions \(Z[s_{n+1}]/Z[s_{n}]\) using independent Monte Carlo calculations, finally giving the desired ratio \(Z[s]/Z[0]\) by a telescoping product. We adopt an analogous simulation strategy, using a series of Monte Carlo simulations to evaluate the ratios of Polyakov loop correlators separated by a variety of spatial distances aligned along the \(x\)-axis (the \(\hat{1}\) direction) of the lattice. Each Monte Carlo evaluation is constructed to evaluate \[\frac{\langle P((R+1)\,\hat{1}+\vec{x}_{0})^{*}P(\vec{x}_{0})\rangle}{\langle P(R\,\hat{1}+\vec{x}_{0})^{*}P(\vec{x}_{0})\rangle}\equiv\frac{Z[s_{R+1}]}{Z[s_{R}]}\, \tag{35}\] for some spatial separation \(R\). The fields \(s_{R+1}\) and \(s_{R}\) are determined by the Polyakov loop geometries, as discussed above, and only differ in whether they include the column of dual links corresponding to the plaquettes between spatial sites \(R\,\hat{1}+\vec{x}_{0}\) and \((R+1)\,\hat{1}+\vec{x}_{0}\). Figure 6: An example of the relation between a Polyakov loop (bold lines), the corresponding surface in the original lattice (gray surface), and the dual links \(l\) (links between black/white dual sites) for which \(s_{l}\) is non-zero. The orientation of the Polyakov loops gives an orientation to the surface, determining the signs for the dislocation variables \(s_{l}\). For clarity, only a two-dimensional slice of the 3D volume is shown. To gain further insight into the nature of the confining string, we also adopt a second simulation strategy to measure the energy of the static charges based on the Hamiltonian of the system. Since the partition function \(Z[s]\) may be expressed in terms of a Hamiltonian \(H\) via the relation \(Z[s]=\text{tr}(e^{-\beta H})\), it is possible to obtain \(\langle H\rangle\) by analytically differentiating \(Z[s]\) with respect to \(\beta\). One then obtains \(\langle H\rangle\) as an observable that is amenable to Monte Carlo simulation with respect to the probability distribution defined by \(Z[s]\), in particular \[\langle H\rangle=\frac{e^{2}}{2T}\bigg{\langle}-\sum_{\langle xy\rangle_{\text{time}}}(h_{x}-h_{y}+s_{\langle xy\rangle})^{2}+\sum_{\langle xy\rangle_{\text{space}}}(h_{x}-h_{y}+s_{\langle xy\rangle})^{2}\bigg{\rangle}\, \tag{36}\] where \(\langle xy\rangle_{\text{time}}\) and \(\langle xy\rangle_{\text{space}}\) denote links in the time and space directions respectively. A careful derivation of this result is presented in Appendix B. One expects Eq. (36) to be valid only in the continuum limit, but it can still be used to give qualitative information about the structure of the confining string and to cross-check the string tension close to the continuum limit. In particular, an advantage of this formulation is that one may also measure the local energy contribution from any individual link and thus visualize the energy distribution of the confining string. As a simulation strategy, the energy (36) is measured using Monte Carlo evaluations at several particle-antiparticle separations \(R\), obtaining \(E(R)\) as a function of \(R\). These Monte Carlo estimates are given by separate Metropolis simulations with fixed static charge pairs for each separation.
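The telescoping strategy rests on the elementary reweighting identity \(Z[s_{R+1}]/Z[s_{R}]=\langle e^{-(S[h,s_{R+1}]-S[h,s_{R}])}\rangle_{s_{R}}\), where the average is over configurations sampled in the background \(s_{R}\). A minimal sketch is given below, assuming configurations `configs` produced by some update scheme (not shown); the paper's actual snake-algorithm variant may split the ratio into further intermediate steps for variance reduction.

```python
import numpy as np

def action(h, s, e2=2.0):
    # Same dual action as in the previous sketch, cf. Eq. (33).
    return 0.5 * e2 * sum(
        np.sum((h - np.roll(h, -1, axis=mu) + s[mu]) ** 2) for mu in range(3)
    )

def ratio_Z(configs, s_R, s_R1, e2=2.0):
    """Reweighting estimate of Z[s_{R+1}] / Z[s_R], cf. Eq. (35): average
    exp(-(S[h, s_{R+1}] - S[h, s_R])) over configurations h sampled with
    dislocation background s_R."""
    w = [np.exp(-(action(h, s_R1, e2) - action(h, s_R, e2))) for h in configs]
    return float(np.mean(w))

# Chaining ratios over R then gives the correlator by a telescoping product,
#   <P*(R 1 + x0) P(x0)> = prod_{r=0}^{R-1} Z[s_{r+1}] / Z[s_r],
# and  E(R+1) - E(R) = -(1/T) * log(Z[s_{R+1}] / Z[s_R]).
```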
From either approach, the string tension \(\sigma\) has then been extracted by fitting \[E(R)=-\frac{1}{T}\ln\left\langle P^{*}(\vec{x}_{1})P(\vec{x}_{2})\right\rangle\big{|}_{|\vec{x}_{1}-\vec{x}_{2}|=R} \tag{37}\] to the theoretical predictions of the energy of the confining string, which for large separations \(R\gg 1/\sqrt{\sigma}\) is given by [33; 34; 35; 36; 37] \[E(R)=A+\sigma R-\frac{B}{R}\ +\mathcal{O}(1/R^{2}). \tag{38}\] Alternatively, the estimates of the string tension can be directly obtained from the snake algorithm approach, which most directly gives access to finite differences of the energy \[E(R+1)-E(R)=\sigma+\mathcal{O}(1/R^{2}). \tag{39}\] We have also used this method to obtain precise estimates of \(\sigma\) from the snake algorithm without fitting and without needing to perform Monte Carlo evaluations for all \(R\). The two methods give consistent results when compared on identical lattice parameters, and we therefore adopt the latter approach for all following measurements of \(\sigma\), averaging estimates from several choices of \(R\approx L/2\) to minimize the higher-order effects.
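As a worked illustration of Eqs. (37)–(39), the sketch below fits the truncated string-energy form and also forms the finite-difference estimator. The input data are placeholder values, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder static potential data E(R); illustrative values only.
R = np.array([8.0, 12.0, 16.0, 20.0, 24.0, 28.0, 32.0])
E = 0.05 + 0.021 * R - 0.26 / R

def V(R, A, sigma, B):
    """Large-separation string energy, Eq. (38), truncated at O(1/R^2)."""
    return A + sigma * R - B / R

(A, sigma, B), _ = curve_fit(V, R, E, p0=(0.0, 0.02, 0.2))
print("fitted sigma:", sigma)

# Finite-difference estimator of Eq. (39), here generalized to the slope
# between successive available separations:
print("finite-difference sigma:", np.mean(np.diff(E) / np.diff(R)))
```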
In order to test this method, we have used it to compute the string tension in the standard three-dimensional U(1) gauge theory, and we have obtained results compatible with both the analytical predictions Eq. (4) and results previously published in the literature [13]. The static charge potentials measured by the snake algorithm and the Hamiltonian method were also compared and found to agree with improving accuracy as \(e^{2}\) was taken small, corresponding to the continuum limit where both definitions are the same. The results of the numerical simulations for the string tension are shown for the \(\alpha=\pi\) theory in Fig. 7 for \(L\in\{64,96,128,192\}\). The string tension clearly decreases in lattice units towards the continuum limit, as expected. In contrast to the mass, however, one sees statistically significant finite volume effects throughout the range of couplings studied. We have also compared to simulations of the ordinary \(U(1)\) gauge theory with \(\alpha=0\), in which case the finite volume effects are less significant. The structure of the string in the \(\alpha=\pi\) theory provides a potential explanation for this feature. Figure 7: String tension \(\sigma\) of the \(\alpha=\pi\) theory for various values of the coupling \(e^{2}\), given in lattice units. Statistically significant finite-volume effects can be observed over the entire range of couplings, potentially reflecting the more diffuse structure of the double-stranded confining string in this theory. To investigate this structure, we consider the local electric field \[\vec{E}(x)=(h_{x}-h_{x+\hat{2}}+s_{2}(x),-h_{x}+h_{x+\hat{1}}-s_{1}(x))\, \tag{40}\] where the dislocation \(s_{\mu}(x)=s_{l}\) on the link \(l\) connecting \(x\) and \(x+\hat{\mu}\), and local energy density terms \[\begin{split} u_{0}(x)&=\langle-(h_{x}-h_{x+\hat{0}}+s_{0}(x))^{2}\rangle\,\\ u_{i}(x)&=\langle(h_{x}-h_{x+\hat{i}}+s_{i}(x))^{2}\rangle\,\end{split} \tag{41}\] which together define the Hamiltonian energy density \(H(x)=u_{0}(x)+u_{1}(x)+u_{2}(x)\) in Eq. (36). Fig. 8 depicts an example of the measured energy density terms and electric field for the coupling \(e^{2}=2.0\) and a particular separation of the static charge and anti-charge. The electric field lines have the typical structure associated with a positive and negative charge pair. On the other hand, the energy density depicts a clear fractionalization of the confining string into two strands for the \(\alpha=\pi\) theory. We have observed a similar fractionalized structure for all couplings considered; however, the choice of coupling \(e^{2}=2.0\) results in the clearest pictures. This string fractionalization suggests that the linear scaling regime of the static charge potential may only set in at larger physical distances, explaining the relatively strong finite-volume effects observed in the string tension. Figure 8: Time-averaged local energies and electric field on a spatial \(L\times L\) lattice measured in the presence of two static charges on the ensemble with coupling \(e^{2}=2.0\) and size \(L=64\). The first figure (top left) shows the field lines of the electric field \(\vec{E}\) defined in Eq. (40). The last three figures (top right through bottom right) show the local energy defined in Eq. (41) in the three spacetime directions. In all three figures, the brightest pixels corresponding to contact energy terms are masked. The energy distributions clearly depict fractionalization of the flux string into two strands. The confining string also shows a non-trivial interplay with the phase structure of the theory. We define local versions of the order parameters \(O_{CS}\) and \(O_{S}\), which are invariant under the global \(\mathbb{Z}\)-symmetry and under deformations of the arbitrary surface connecting the two Polyakov loops. For \(O_{CS}\) we define \[O_{CS}(x)=\frac{(-1)^{x}}{12}\sum_{\hat{\mu}\in\{\pm\hat{0},\pm\hat{1},\pm\hat{2}\}}(h_{x}-h_{x+\hat{\mu}}+s_{\mu}(x)). \tag{42}\] While the expression is complicated, one can also define a local version \(O_{S}(x)\) of the operator \(O_{S}\). In order to make it invariant under deformations of the surface bounded by the Polyakov loops, one rewrites it in terms of link variables \(h_{x}-h_{x+\mu}\) and then replaces \(h_{x}-h_{x+\mu}\to h_{x}-h_{x+\mu}+s_{\mu}(x)\). In the case where there are no static charges, summing over these local observables recovers the original order parameters, \[O_{CS}=\sum_{x}O_{CS}(x)\,\qquad O_{S}=\sum_{x}O_{S}(x). \tag{43}\] To understand the relation between the order parameter structure and the broken symmetry, we perform measurements using an ensemble in which no tunneling events occurred, which would otherwise wash out the figure. Fig. 9 shows the resulting distributions of the observables \(O_{S}(x)\) and \(O_{CS}(x)\) in the presence of the flux string. The \(O_{S}(x)\) observable shows that inside and outside the two strands the system is found in two different ground states of the broken \(\mathbb{Z}_{2}\) symmetry \(S\). This indicates that the individual strands of the fractionalized string play the role of domain walls between the two vacua of the spontaneously broken symmetry. Meanwhile, the \(O_{CS}(x)\) observable takes non-zero values within each strand of the fractionalized string, which is consistent with explicit breaking of \(C\) symmetry by the insertion of static charges with particular positive and negative values. This order parameter remains zero outside the static charge system, as expected from the lack of symmetry breaking observed in the vacuum sector for \(O_{CS}\).
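A direct transcription of Eq. (42) into array code might look as follows. The treatment of the backward directions, where we take \(s_{-\hat{\mu}}(x)=-s_{\mu}(x-\hat{\mu})\), is an assumed sign convention for this illustration, not a convention stated in the paper.

```python
import numpy as np

def local_OCS(h, s):
    """Local order parameter of Eq. (42):
       O_CS(x) = (-1)^x / 12 * sum over the six directions of
                 (h_x - h_{x+mu} + s_mu(x)),
    using s_{-mu}(x) = -s_mu(x - mu) for backward links (assumed convention)."""
    t, x, y = np.indices(h.shape)
    stagger = (-1.0) ** (t + x + y)                 # site parity (-1)^x
    total = np.zeros_like(h, dtype=float)
    for mu in range(3):
        total += h - np.roll(h, -1, axis=mu) + s[mu]                        # +mu
        total += h - np.roll(h, +1, axis=mu) - np.roll(s[mu], +1, axis=mu)  # -mu
    return stagger * total / 12.0

# Example on random data; averaging over Euclidean time (axis 0) mimics the
# time-averaged spatial maps shown in Fig. 9.
rng = np.random.default_rng(1)
h = rng.integers(-1, 2, size=(8, 16, 16)).astype(float)
s = np.zeros((3,) + h.shape)
print(local_OCS(h, s).mean(axis=0).shape)   # spatial L x L map
```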
In both cases, the particular signs of the order parameters where they are non-zero are not fixed, but spontaneously selected; they are flipped by tunneling events between the vacua of the spontaneously broken \(S\) symmetry. Figure 9: Local \(O_{S}\) (left) and \(O_{CS}\) (right) observables measured in the presence of two static charges on the ensemble with coupling \(e^{2}=2.0\) and lattice size \(L=64\). Shown are the Euclidean-time-averaged measurements over the spatial \(L\times L\) lattice. The \(O_{S}\) observable shows that the two string strands separate regions that correspond to the two ground states of the broken \(\mathbb{Z}_{2}\) symmetry, while the \(O_{CS}\) observable shows that the two strands themselves break charge conjugation. ### Mass and string tension scaling A key prediction for both the ordinary \(\alpha=0\) theory (see Eq. (4)) and the \(\alpha=\pi\) theory (see Section V) is that (in units where \(a=1\)) \(\sigma\) should scale as \(me^{2}\). This implies the three possible continuum limits discussed in Sec. II. By comparing our mass and string tension data, we could attempt to validate this scaling prediction. However, the strong finite volume effects in both the mass and string tension make it difficult to reliably determine an infinite-volume ratio \(\sigma/me^{2}\) to be extrapolated towards the continuum limit. Instead, we performed an additional set of Monte Carlo calculations extrapolating along a line of constant physics in a fixed physical volume, as defined by fixing \(mL\) to a constant value. As long as \(mL\gg 1\), we expect the argument for the scaling of the string tension with the mass to still hold. Thus, by studying these constant physical volume ensembles, we can determine whether such a ratio has a finite continuum limit. To perform this study, we used five different lattice sizes with couplings tuned to fix \(mL\) to \(6\), \(8\), and \(10\), as detailed in Table 1. We use a prescription of extracting the string tension by the snake algorithm approach described above, with measurements of the energy differences based on a window of separations \(R\approx 5L/8\). This value was chosen to maximally suppress the \(O(1/R^{2})\) terms while avoiding the boundary of the lattice. Finally, we use the fact that (in lattice units) the ratio can be written as \[\frac{\sigma}{me^{2}}=\frac{\sigma L}{e^{2}(mL)}. \tag{44}\] Using the fixed choice of \(mL\) instead of direct measurements of the mass allows a much more precise determination of this ratio, although this neglects systematic uncertainties from mistunings of the mass. The resulting measurements of the ratios are shown in Fig. 10. At the available lattice spacings, the ratio determined for each of the fixed physical volumes can be seen to approximately plateau. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{\(mL=6\)} & \(L\) & 64 & 96 & 128 & 192 & 256 \\ & \(e^{2}\) & 1.35 & 1.29 & 1.23 & 1.12 & 1.09 \\ \hline \multirow{2}{*}{\(mL=8\)} & \(L\) & 64 & 96 & 128 & 192 & 256 \\ & \(e^{2}\) & 1.68 & 1.50 & 1.39 & 1.24 & 1.18 \\ \hline \multirow{2}{*}{\(mL=10\)} & \(L\) & 64 & 96 & 128 & 192 & 256 \\ & \(e^{2}\) & 1.92 & 1.65 & 1.49 & 1.33 & 1.25 \\ \hline \hline \end{tabular} \end{table} Table 1: Choices of coupling constant \(e^{2}\) and lattice size \(L\) tuned over a variety of lattice spacings to fix \(mL\) to \(6\), \(8\), and \(10\). Nevertheless, at the
smallest \(e^{2}\), corresponding to the smallest lattice spacing, the data continue to rise indicating that even finer lattices would be required to confirm the scaling behavior. ### Discussion Our results provide evidence for a continuum limit of the \(\alpha=\pi\) theory which is distinct from that of the \(\alpha=0\) theory, despite both theories having exact \(U(1)\) gauge symmetry, charge conjugation symmetry, macroscopic translational symmetry, and cubic symmetry. In particular, the order parameters suggest that the \(\alpha=\pi\) theory has a continuum limit obtained as \(e^{2}\to 0\) where the \(\mathbb{Z}_{2}\) symmetry \(S\) is spontaneously broken and charge conjugation symmetry is unbroken. This theory also shows qualitatively different behaviour, most strikingly in the fractionalization of the confining flux string. This behaviour is confirmed by the effective theory for the \(\alpha=\pi\) theory which we derive in Section V. The data for the mass and the string tension are also compatible with the effective theory predictions, although better control of the finite volume effects and the corrections to scaling is required in order to test quantitative agreement. Both the string fractionalization and the phase separation have been previously observed in the context of quantum link models [4; 5; 6; 7]. In the results observed for some such quantum link models, a spontaneous breaking of parity symmetry was also observed in the static charge system, so that the strands of the string formed an asymmetric arrangement. On the other hand, our results for the \(\alpha=\pi\) theory do not demonstrate any such parity breaking. ## V Effective theory In order to better understand the behavior of the \(\alpha=\pi\) theory and confirm our numerical results, in this section we derive an effective theory following [12]. As we have seen in Section II, the standard Abelian gauge theory in three dimensions has been shown to be equivalent to a Sine-Gordon model with action [18; 38; 2; 12] \[S[\phi] = \frac{1}{2}\sum_{\langle xy\rangle}(\phi_{x}-\phi_{y})^{2}-2 \lambda\sum_{x}\cos\left(\frac{2\pi\phi_{x}}{\sqrt{e^{2}}}\right)\,, \tag{45}\] where \(\phi_{x}\) is a real scalar field and \(\lambda\equiv 2e^{-2\pi^{2}v_{0}/e^{2}}\). This effective theory is valid for small \(e^{2}\), where the continuum limit is expected to lie. The derivation leading to Eq. (45) can be performed also in the case of our staggered height model, and the corresponding effective theory is given again by a Sine-Gordon model with a _staggered_ potential, \[S[\phi]=\frac{1}{2}\sum_{\langle xy\rangle}(\phi_{x}-\phi_{y})^{2}-2\lambda \sum_{x}(-1)^{x}\cos\left(\frac{2\pi\phi_{x}}{\sqrt{e^{2}}}\right)\,. \tag{46}\] Non-relativistic versions of Eq. (46) have been discovered as effective descriptions of quantum antiferromagnets [39]. Due to the staggering, this model, unlike the action in Eq. (45) does not immediately admit an analytical continuum limit. We therefore rewrite it in terms of variables which, as we will see, allow a continuum description. Following the discussion in [39], we define sum and difference variables \[\chi_{x}=\frac{\phi_{x}+\phi_{x+\hat{\mu}}}{2}\,\qquad\xi_{x}=\frac{\phi_{x}- \phi_{x+\hat{\mu}}}{2}\, \tag{47}\] defined to live only on the _even_ lattice sites. In particular, this definition singles out a specific direction \(\mu\) and combines the scalar fields on the even site \(x\) and odd site \(x+\hat{\mu}\) into the sum and difference variables on the even sites. 
This mapping is one-to-one and has a unit Jacobian. The arbitrary choice of direction disappears in the final result. Substituting into the effective theory, we find that \(\xi\) has a mass of the order of the lattice cutoff: we therefore ignore all of its gradients [39] and find \[S[\chi,\xi]=\sum_{x}\left[\frac{1}{2}(\vec{\nabla}\chi_{x})^{2}+12\xi_{x}^{2}+4\lambda\sin\left(\frac{2\pi\chi_{x}}{\sqrt{e^{2}}}\right)\sin\left(\frac{2\pi\xi_{x}}{\sqrt{e^{2}}}\right)\right]\,. \tag{48}\] In Eq. (48) the sum is taken only over the even sites of the original hypercubic lattice. The corresponding lattice, the one on which the effective theory in Eq. (48) is defined, is therefore a three-dimensional lattice of tetrahedra and octahedra; each point is connected to nearest neighbours in four directions, and in particular \((\vec{\nabla}\chi_{x})^{2}\) is the sum of the squares of the lattice derivatives in the four directions. Since it is an isotropic kinetic term on the lattice, we expect that it will become a Lorentz-invariant kinetic term in the continuum limit. Figure 10: Measured ratios \(\sigma/me^{2}\) across a range of ensembles tuned to fix \(mL\) to 6, 8, and 10, given in lattice units. The resulting ratios can be seen to plateau towards small \(e^{2}\), corresponding to smaller lattice spacings, but a notable rise is still present at the smallest lattice spacings accessible. Nevertheless, the measured ratios are quite distinct from estimates of the scaling ratio \(\widetilde{c}/4\pi^{2}\approx 0.21\) in the \(\alpha=0\) theory [13]. Next, we again use the fact that \(\xi\) has a mass of the order of the lattice cutoff to integrate it out. Its mean-field value is given by [39] \[\xi_{x}\approx-\frac{\pi}{3}\frac{\lambda}{\sqrt{e^{2}}}\sin\left(\frac{2\pi\chi_{x}}{\sqrt{e^{2}}}\right)\,, \tag{49}\] which we substitute back into the action to find \[S[\chi]=\frac{e^{2}}{4\pi^{2}}\sum_{x}\left[\frac{1}{2}(\vec{\nabla}\chi_{x})^{2}+\frac{8}{7}\pi^{4}\frac{\lambda^{2}}{e^{4}}\cos\left(2\chi_{x}\right)\right]\, \tag{50}\] where we also redefined \(\chi_{x}\rightarrow\frac{\sqrt{e^{2}}}{2\pi}\chi_{x}\) for clarity. As is clear from Eqs. (46) and (48), this model admits a global \(\mathbb{Z}\) symmetry which acts as \(\chi_{x}\rightarrow\chi_{x}+2\pi\). We identify this symmetry with the global \(\mathbb{Z}\) redundancy of the original model, meaning configurations of \(\chi_{x}\) which differ by an integer multiple of \(2\pi\) are identified. With this in mind, the final effective theory in Eq. (50) has a \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) global symmetry structure generated by \[C:\chi_{x}\rightarrow-\chi_{x}+\pi\qquad S:\chi_{x}\rightarrow\chi_{x}+\pi. \tag{51}\] In particular, \(C^{2}=S^{2}=1\) and \(CS=SC:\chi_{x}\rightarrow-\chi_{x}\). The global \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) symmetry structure of this model is the same as that of the original height variables, although it is unclear how to trace the action of the original symmetries onto the scalar field in the effective theory. The double cosine potential has minima at \(\chi=\pm\pi/2\), which implies that the \(C\) symmetry remains unbroken, while \(S\) (and therefore also \(CS\)) is broken. This is again the same symmetry breaking structure that we observe numerically, as shown in Section IV. We therefore identify the \(C\) symmetry of the effective theory with charge conjugation and the \(S\) symmetry with the single-site shift in the original model.
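The symmetry-breaking pattern read off from the effective potential can be checked mechanically: the double-cosine potential \(\cos(2\chi)\) of Eq. (50) has its minima at \(\chi=\pm\pi/2\), which \(C\) of Eq. (51) leaves fixed while \(S\) exchanges them. The short numerical check below is purely illustrative.

```python
import numpy as np

chi = np.linspace(-np.pi, np.pi, 100001)   # one period, since chi ~ chi + 2*pi
V = np.cos(2.0 * chi)                      # potential of Eq. (50), up to constants

minima = chi[np.isclose(V, V.min(), atol=1e-12, rtol=0.0)]
print(np.unique(np.round(minima, 3)))      # -> [-1.571  1.571], i.e. chi = ±pi/2

wrap = lambda c: np.mod(c + np.pi, 2.0 * np.pi) - np.pi   # map back to (-pi, pi]
C = lambda c: -c + np.pi                   # symmetry actions of Eq. (51)
S = lambda c: c + np.pi
for c0 in (np.pi / 2, -np.pi / 2):
    print(c0, "-> C:", wrap(C(c0)), " S:", wrap(S(c0)))
# C fixes each minimum separately (unbroken), while S exchanges the two
# minima (spontaneously broken), matching the pattern observed numerically.
```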
While for the standard effective theory in Eq. (45) all minima of the potential are physically equivalent because they are related by a global \(\mathbb{Z}\) redundancy, this is no longer the case in our staggered model, where we find a proper \(\mathbb{Z}_{2}\) symmetry breaking. Much like in the standard theory (see Section II), the effective theory suggests two different ways of taking the continuum limit. In one case, one defines a mass (restoring the lattice spacing \(a\)) \[a^{2}m^{2}=\frac{32\pi^{4}}{3}\frac{1}{e^{4}}\exp\left(-4\pi^{2}v_{0}/e^{2}\right)\,, \tag{52}\] where \(e^{2}\) is the dimensionless coupling. Similarly to the discussion in Section II, keeping the mass \(m\) fixed in physical units and sending the lattice spacing to zero forces \(e^{2}\to 0\). In this case, the height of the potential which separates the two physical minima becomes infinite, and therefore one finds that in the continuum this system consists of a free massive scalar field of mass \(m\) describing the fluctuations around a spontaneously selected vacuum. In particular, Eq. (52) is consistent with the qualitative numerical observation (see Fig. 4 and the discussion in Section IV) that the exponent of the exponential decay of the mass is roughly twice as large in the staggered theory compared to the standard case. It is also possible to take a different continuum limit, where the string tension is instead held fixed. In a semi-classical approximation [38], one again finds that the string tension scales like \(\sigma\sim me^{2}\) in this case as well. In the dual theory, the string tension can be interpreted as the interface tension between different minima of the potential. Since the staggered model has physically inequivalent minima, this continuum limit corresponds to a non-trivial theory where it is still possible to tunnel between the two inequivalent minima or to have domains of opposite minima separated by domain walls. The picture of the string tension as the interface tension between different broken symmetry ground states is also clear from Fig. 8. ## VI Conclusions In the present work, we have shown that the usual U(1) lattice gauge theory may be extended by an additional parameter \(\alpha\) which modifies the Hilbert space per gauge link while preserving both the gauge symmetry and the remnant lattice Lorentz symmetry. We focused on the specific case of three spacetime dimensions with \(\alpha=\pi\), which preserves all symmetries of the standard U(1) theory, including charge conjugation and parity. After dualization, one obtains a height model which we have numerically simulated. We have also analytically derived an effective theory which provides predictions for the symmetry breaking structure as well as the scaling of the mass and string tension. We have compared our numerical results for the staggered theory (\(\alpha=\pi\)) with our analytical predictions as well as with the analytical and numerical predictions available for the standard theory (\(\alpha=0\)). Our results provide evidence that the \(\alpha=\pi\) theory has a spontaneously broken \(\mathbb{Z}_{2}\) single-site shift symmetry, which remains unbroken in the standard theory, indicating that the \(\alpha=\pi\) theory approaches a different continuum limit than the standard U(1) theory. We have also obtained results for the mass and string tension. In particular, both quantities scale differently in the continuum limit compared to the ordinary theory.
By inserting the charges directly in the partition function, we have also obtained data for the local energy distribution of the system with static charges, which shows that the confining flux string fractionalizes into two strands. Our numerical results are compatible with the analytical predictions from the effective theory for the \(\alpha=\pi\) theory. The strands separate the system into two regions which live in different ground states of the broken \(\mathbb{Z}_{2}\) symmetry. The idea behind this work may be extended in several directions. It would be immediate to generalize these results to a different number of spacetime dimensions. The Hamiltonian construction is valid in any dimension, but the dual theory would no longer be a scalar theory. For example, in four dimensions, the dual theory would be a gauge theory with a discrete gauge field. One could also study the theory at values of \(\alpha\not\in\{0,\pi\}\), which explicitly break charge conjugation and parity but may nonetheless define interesting gauge theories. Perhaps the most interesting direction would be the extension to the non-Abelian case. Just as the introduction of an \(\alpha\) angle in the U(1) theory is equivalent to threading a fictitious magnetic flux through the U(1) circle defining the Hilbert space at each link, an analogous construction is possible in the non-Abelian case as given in [40; 41]. In the case of SU(2), for example, this construction places a magnetic monopole at the center of the 3-sphere manifold of SU(2), resulting in a modification of the Hilbert space of integrable functions on SU(2) while preserving the gauge symmetry. ###### Acknowledgements. AB acknowledges financial support from the DAE, India. AB and DB acknowledge the computing resources of SINP. DB acknowledges assistance from SERB Starting grant SRG/2021/000396-C from the DST (Govt. of India). GK, AM and UJW acknowledge funding from the Schweizerischer Nationalfonds through grant agreement no. 200020_200424. UJW also acknowledges support from the Alexander von Humboldt Foundation and thanks Ulf Meissner for hospitality at the Helmholtz-Institut für Strahlen- und Kernphysik in Bonn. TR is supported by the Schweizerischer Nationalfonds through the grant no. TMPFP2_210064. Some calculations were performed on UBELIX ([http://www.id.unibe.ch/hpc](http://www.id.unibe.ch/hpc)), the HPC cluster at the University of Bern. ## Appendix A Dualization to a height model in 3D The path integral of the 3D U(1) gauge theory can be dualized to a model of discrete height variables for any value of \(\alpha\). The starting point for this dualization is given by the partition function in Eq. (21), reproduced below for convenience: \[Z=\left(\prod_{l}\int_{0}^{2\pi}d\varphi_{l}\right)\prod_{p}\Big[\sum_{m\in\mathbb{Z}}e^{-i\frac{\alpha}{2\pi}(d\varphi)_{p}}\exp\left(-\frac{1}{2e^{2}}\left((d\varphi)_{p}-2\pi m\right)^{2}\right)e^{i\alpha m}\Big]. \tag{24}\] It is helpful to use Poisson summation to first rewrite this expression as \[Z=\left(\prod_{l}\int_{0}^{2\pi}d\varphi_{l}\right)\prod_{p}\left[\sum_{n\in\mathbb{Z}}e^{i(d\varphi)_{p}n}e^{-\frac{e^{2}}{2}\left(n+\frac{\alpha}{2\pi}\right)^{2}}\right]. \tag{25}\] Note that one could also arrive at the above form by skipping the Poisson summation originally applied to derive Eq. (20).
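The Poisson resummation step from Eq. (24) to Eq. (25) can be verified numerically for a single plaquette: the two truncated sums agree up to a \(\varphi\)-independent constant, which drops out of the path integral. The sketch below uses an arbitrary illustrative value of \(e^{2}\).

```python
import numpy as np

e2, alpha = 1.3, np.pi          # illustrative coupling; alpha = pi theory
ms = np.arange(-200, 201)       # truncation; both sums converge rapidly

def weight_24(phi):
    """Single-plaquette weight of Eq. (24)."""
    terms = np.exp(-(phi - 2*np.pi*ms)**2 / (2*e2)) * np.exp(1j*alpha*ms)
    return np.exp(-1j*alpha*phi/(2*np.pi)) * terms.sum()

def weight_25(phi):
    """Poisson-resummed weight of Eq. (25)."""
    return (np.exp(1j*phi*ms) * np.exp(-(e2/2)*(ms + alpha/(2*np.pi))**2)).sum()

for phi in np.linspace(0.3, 6.0, 4):
    print(phi, weight_24(phi) / weight_25(phi))
# The ratio is phi-independent (here sqrt(e2 / (2*pi))), an overall constant
# that cancels between numerator and denominator of any expectation value.
```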
To render the integrals over the continuous U(1) variables tractable, we next rewrite the path integral over link variables into a path integral over plaquette variables, \[\left(\prod_{l}\int_{0}^{2\pi}d\varphi_{l}\right)=\left(\prod_{p}\int_{0}^{2\pi}d\Phi_{p}\right)\prod_{c}\left[\sum_{h\in\mathbb{Z}}e^{ih(d\Phi)_{c}}\right]. \tag{26}\] Here, the product \(\prod_{c}\) enumerates the three-dimensional unit cubes of the lattice, the Fourier series of the Dirac delta has been introduced using sums over \(h\), and \[(d\Phi)_{c}=\sum_{i<j}\left[\epsilon_{ijk}\Phi_{ij,x+\hat{k}}-\Phi_{ij,x}\right] \tag{27}\] is the exterior derivative of the plaquette variables \(\Phi\) associated with cube \(c\) rooted at site \(x\). With this rewriting, the variables \((d\varphi)_{p}\) are replaced with the new integration variables \(\Phi_{p}\), yielding \[Z=\left(\prod_{c}\sum_{h_{c}\in\mathbb{Z}}\right)\prod_{p}\Big[\int_{0}^{2\pi}d\Phi_{p}\sum_{n\in\mathbb{Z}}e^{i\Phi_{p}(n-(d^{*}h)_{p})}e^{-\frac{e^{2}}{2}(n+\frac{\alpha}{2\pi})^{2}}\Big]\, \tag{28}\] where \((d^{*}h)_{p}=\epsilon_{ijk}h_{x+\hat{k}}-h_{x}\) in terms of the orientation \((ij)\) and base site \(x\) of the plaquette \(p\). Integrating out the \(\Phi_{p}\) gives simple Kronecker delta functions over the integer sums, and resolving these results in the partition function of a height model, \[Z=\left(\prod_{c}\sum_{h_{c}\in\mathbb{Z}}\right)\prod_{p}\exp\left(-\frac{e^{2}}{2}((d^{*}h)_{p}+\alpha/2\pi)^{2}\right). \tag{29}\] To complete the dualization, we go to the dual lattice. Cubes \(c\) in the original lattice are dual to sites \(x\) in the dual lattice, while plaquettes \(p\) are dual to lattice edges \(\langle xy\rangle\), giving \[Z=\left(\prod_{x}\sum_{h_{x}\in\mathbb{Z}}\right)\exp\left(-\frac{e^{2}}{2}\sum_{\langle xy\rangle}(h_{x}-h_{y}+\alpha/2\pi)^{2}\right)\, \tag{30}\] where now all quantities refer to the dual lattice. Note that there is an ambiguity in the direction of the links \(\langle xy\rangle\) in the sum. Reversing the direction in which the link \(\langle xy\rangle\) appears in the sum is equivalent to locally flipping the sign of \(\alpha\) on this link. One simple prescription, consistent with the largest translational symmetry group, is to choose the direction of the links in the sum to always be oriented in the forward \(\hat{\mu}\) direction for each lattice orientation, meaning we sum over all links \(\langle xy\rangle\) with \(y=x+\hat{\mu}\) for some \(\mu\in\{0,1,2\}\). However, this prescription explicitly breaks lattice rotational symmetry, requiring each rotation to be combined with spatial inversions to remain a good symmetry of the theory. Instead, we choose to orient links \(\langle xy\rangle\) in the sum such that \(x\) is always an odd site (i.e., \(x_{0}+x_{1}+x_{2}\) is odd) and \(y\) is therefore always an even site. This preserves lattice rotational symmetry and all translations by multiples of two lattice sites. The sum over odd sites can then be modified to absorb the \(\alpha/2\pi\) factor by instead summing over \(h_{x}\in\mathbb{Z}+\alpha/2\pi\), resulting in the final dual partition function \[Z=\left(\prod_{x\,\text{odd}}\sum_{h_{x}\in\mathbb{Z}+\frac{\alpha}{2\pi}}\right)\left(\prod_{y\,\text{even}}\sum_{h_{y}\in\mathbb{Z}}\right)\exp\left[-\frac{e^{2}}{2}\sum_{\langle xy\rangle}(h_{x}-h_{y})^{2}\right]\,.
\tag{50}\] In this case, single-site translations are explicitly broken and must be combined with charge conjugation and a non-integer shift of the height variables to recover a symmetry transformation, as explained in Section III.3. Note that for \(\alpha\in\{0,\pi\}\) these two prescriptions are equivalent in infinite volume. ## Appendix B Derivation of the energy operator In order to measure the string tension, we insert static charges directly in the action and measure the resulting energy, similarly to [30, 4]. In particular, we know that the partition function \(Z[s]\) with the charges inserted may be written in terms of a Hamiltonian \(H\) as \[Z[s]=\operatorname{tr}\left(e^{-\epsilon TH}\right)\,, \tag{51}\] where \(\epsilon\) is the time step and \(T\) is the number of time steps. Then we see that the energy of the system, i.e. the expectation value of the Hamiltonian, is given by \[\langle H\rangle=\frac{1}{Z}\operatorname{tr}\left(H\,e^{-\epsilon TH}\right) =-\frac{1}{ZT}\frac{\partial}{\partial\epsilon}Z. \tag{52}\] Expressing the partition function in terms of the height variables and taking the appropriate derivative, one can then obtain an expression for \(\langle H\rangle\) which is amenable to Monte Carlo simulation. As we have seen in Section IV.4, the partition function with the insertion of the two charges is given by \[Z=\left(\prod_{x\in\text{sites}}\sum_{h_{x}}\right)\exp\left[-\frac{e^{2}}{2} \sum_{\langle xy\rangle}(h_{x}-h_{y}+s_{\langle xy\rangle})^{2}\right]\,, \tag{53}\] where \(s_{\langle xy\rangle}\) is defined in Eq. (32). In order to take the derivative, we restore the lattice spacing, which we set equal to \(\epsilon\) in the time direction and equal to \(a\) in the space directions. The sum over links discretizes an integral over volume, and is therefore associated with units of \(\epsilon a^{2}\). The finite difference \(h_{x}-h_{y}\) discretizes a derivative, and therefore comes with units of \(1/a\) in the space directions and \(1/\epsilon\) in the time direction; hence time links carry units of \(a^{2}\epsilon\times(1/\epsilon)^{2}=a^{2}/\epsilon\), while space links carry units of \(a^{2}\epsilon\times(1/a)^{2}=\epsilon\), so that overall the action becomes \[S =\frac{a^{2}e^{2}}{2\epsilon}\sum_{\langle xy\rangle_{\text{time }}}(h_{x}-h_{y}+s_{\langle xy\rangle})^{2} \tag{54}\] \[\quad+\frac{\epsilon e^{2}}{2}\sum_{\langle xy\rangle_{\text{ space}}}(h_{x}-h_{y}+s_{\langle xy\rangle})^{2}\,\] where \(\langle xy\rangle_{\text{time}}\) and \(\langle xy\rangle_{\text{space}}\) denote links in the time and space directions respectively. It is then straightforward to differentiate the partition function \(Z[s]\) with respect to \(\epsilon\). Setting back \(\epsilon=a\), one then obtains (in lattice units) \[\langle H\rangle =\frac{e^{2}}{2T}\bigg{\langle}-\sum_{\langle xy\rangle_{\text{ time}}}(h_{x}-h_{y}+s_{\langle xy\rangle})^{2} \tag{55}\] \[\quad+\sum_{\langle xy\rangle_{\text{space}}}(h_{x}-h_{y}+s_{ \langle xy\rangle})^{2}\bigg{\rangle}\.\] This expression can be computed via Monte Carlo simulation. It is important to note that, since we have a finite time step \(\epsilon=a\), the expression \(Z[s]=\operatorname{tr}\left(\exp\left(-\epsilon TH\right)\right)\) is valid only up to Trotterization artefacts. Therefore, we expect the equality in Eq. (55) to be valid only in the continuum limit \(a\to 0\).
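As an illustration of how Eq. (55) can be used in practice, the following minimal Metropolis sketch samples the height model in a fixed dislocation background and averages the energy observable. Lattice sizes, the coupling, and the single-column dislocation field are illustrative assumptions; the staggered \(\alpha/2\pi\) offsets on odd sites are omitted (i.e., this sketch corresponds to the \(\alpha=0\) sector) and would enter as a constant shift field.

```python
import numpy as np

rng = np.random.default_rng(7)
e2, T, L = 2.0, 8, 8
h = np.zeros((T, L, L))          # integer heights; alpha = 0 sector (see above)
s = np.zeros((3, T, L, L))       # dislocation field of Eq. (32)
s[2, :, 2:6, 4] = 1.0            # one illustrative column between two charges

def delta_S(site, dh):
    """Action change when h[site] -> h[site] + dh (periodic boundaries)."""
    dS = 0.0
    for mu in range(3):
        e_mu = np.eye(3, dtype=int)[mu]
        up = tuple((np.array(site) + e_mu) % h.shape)
        dn = tuple((np.array(site) - e_mu) % h.shape)
        a = h[site] - h[up] + s[mu][site]   # forward link leaving `site`
        b = h[dn] - h[site] + s[mu][dn]     # backward link entering `site`
        dS += (a + dh) ** 2 - a ** 2 + (b - dh) ** 2 - b ** 2
    return 0.5 * e2 * dS

def sweep():
    for _ in range(h.size):
        site = tuple(rng.integers(0, d) for d in h.shape)
        dh = rng.choice([-1.0, 1.0])
        if rng.random() < np.exp(-delta_S(site, dh)):
            h[site] += dh

def energy():
    """Hamiltonian estimator of Eq. (55): space minus time link terms."""
    t = [np.sum((h - np.roll(h, -1, axis=mu) + s[mu]) ** 2) for mu in range(3)]
    return (e2 / (2 * T)) * (-t[0] + t[1] + t[2])

for _ in range(200):             # thermalization sweeps
    sweep()
samples = []
for _ in range(200):             # measurement sweeps
    sweep()
    samples.append(energy())
print("estimated static-pair energy:", np.mean(samples))
```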
2309.10666
Precomputable Trade-off Between Error and Breakpoints in Piecewise Linearization for First-Order Loss Functions
Stochastic optimization often involves calculating the expected value of a first-order max or min function, known as a first-order loss function. In this context, loss functions are frequently approximated using piecewise linear functions. Determining the approximation error and the number of breakpoints (segments) becomes a critical issue during this approximation. This is due to a trade-off: increasing the number of breakpoints reduces the error but also increases the computational complexity of the embedded model. As this trade-off is unclear in advance, preliminary experiments are often required to determine these values. The objective of this study is to approximate the trade-off between error and breakpoints in piecewise linearization for first-order loss functions. To achieve this goal, we derive an upper bound on the minimum number of breakpoints required to achieve a given absolute error. This upper bound can be easily precomputed once the approximation intervals and error are determined, and serves as a guideline for the trade-off between error and breakpoints. Furthermore, we propose efficient algorithms to obtain a piecewise linear approximation with a number of breakpoints below the derived upper bound.
Yotaro Takazawa
2023-09-19T14:49:49Z
http://arxiv.org/abs/2309.10666v2
Precomputable Trade-off Between Error and Breakpoints in Piecewise Linearization for First-Order Loss Functions ###### Abstract Stochastic optimization often involves calculating the expected value of a first-order max or min function, known as a first-order loss function. In this context, loss functions are frequently approximated using piecewise linear functions. Determining the approximation error and the number of breakpoints (segments) becomes a critical issue during this approximation. This is due to a trade-off: increasing the number of breakpoints reduces the error but also increases the computational complexity of the embedded model. As this trade-off is unclear in advance, preliminary experiments are often required to determine these values. The objective of this study is to approximate the trade-off between error and breakpoints in piecewise linearization for first-order loss functions. To achieve this goal, we derive an upper bound on the minimum number of breakpoints required to achieve a given absolute error. This upper bound can be easily precomputed once the approximation intervals and error are determined, and serves as a guideline for the trade-off between error and breakpoints. Furthermore, we propose efficient algorithms to obtain a piecewise linear approximation with a number of breakpoints below the derived upper bound. keywords: stochastic programming, inventory, piecewise linear approximation + Footnote †: journal: Journal of Computational Mechanics ## 1 Introduction Stochastic optimization often involves the calculation of an expected value of a max (min) function consisting of a decision variable and a random variable. One of the most fundamental functions among them can be expressed as a univariate function \(\ell_{X}:\mathbb{R}\rightarrow\mathbb{R}\), \[\ell_{X}(s)=\mathbb{E}[\max(a_{1}s+b_{1}X+c_{1},a_{2}s+b_{2}X+c_{2})], \tag{1}\] where \(X\) is a given one-dimensional random variable and \(a_{i},b_{i},c_{i}\in\mathbb{R}\) (\(i\in\{1,2\}\)). This function is used in many areas, especially in inventory control. For example, let \(X\) be an uncertain demand for some product \(P\), and \(s\) be the ordering quantity for \(P\). Then \(\mathbb{E}[\max(X-s,0)]\) and \(\mathbb{E}[\max(s-X,0)]\) are considered as shortage and inventory costs in inventory control. These functions are known as first-order loss functions (Snyder and Shen, 2019) and are used in inventory control and other applications (Rossi et al., 2014). Thus, we call \(\ell_{X}\) a general (first-order) loss function, as it is a generalization of them. As another example, \(\mathbb{E}[\min(s,X)]\) is regarded as the expected number of units of product P sold, which is often used in a profit function in newsvendor models. While a general first-order loss function is often embedded in mixed-integer linear programming (MILP) models, calculating these values is a challenging task for the following reasons. When a target random variable \(X\) is continuous, these expectation functions are often nonlinear and thus difficult to embed in MILP. On the other hand, when \(X\) is discrete, these expectation functions can be directly embedded in MILP. However, when \(X\)'s support has a large cardinality or is infinite, solving optimization problems becomes challenging. For the above reasons, loss functions are often approximated by a piecewise linear function, which is a tractable format for MILP. In a piecewise linear approximation, we choose \(n\) points in ascending order, known as breakpoints. 
We approximate the loss function using linear segments connecting each breakpoint to its corresponding function value. Determining the appropriate parameters, such as the acceptable error and the number of breakpoints for piecewise linear functions, is a crucial aspect of this approach. While increasing the number of breakpoints can decrease the approximation error, it also increases the computational complexity of MILP. Therefore, we must strike a careful balance when choosing these parameters. Often, the appropriate parameters are determined through preliminary and sometimes heavy numerical experiments, as the theoretical relationship between these parameters is not always clear. This study aims to understand the trade-off between error and breakpoints in a piecewise linear approximation for general first-order loss functions. To achieve this objective, we derive a tight upper bound for the minimum number of breakpoints needed to achieve a given error. We also propose an efficient method for constructing a piecewise linear function with a number of breakpoints below this upper bound. As a result, we enable the determination of appropriate levels of error and breakpoints, using this upper bound as a guideline. Specifically, we have obtained the following results in this study for a general loss function: 1. Given an approximation error \(\epsilon>0\), we propose algorithms to construct a piecewise linear function such that the maximum absolute error is within \(\epsilon\) and the breakpoints are bounded by \(M\sqrt{\frac{W}{\epsilon}}\), where \(M\) (\(\leq 1\)) is a parameter dependent on the setting and \(W\) is the width of the approximation interval. Among the proposed algorithms, one guarantees minimality in the number of breakpoints, while the others, although not guaranteeing minimality, offer the same upper bound on the number of breakpoints. We also demonstrate that this upper bound on the number of breakpoints is tight from a certain perspective. 2. Through computational experiments, we compare the actual number of breakpoints generated by our proposed algorithms with the derived bounds across various distributions. In many cases, we find that the minimal number of breakpoints can be approximated by \(\frac{1}{2\sqrt{2}}\sqrt{\frac{W}{\epsilon}}\). We review related approaches for piecewise linear approximation, specifically focusing on methods that provide theoretical guarantees in terms of approximation error or the number of breakpoints. These can be broadly divided into two categories based on whether they fix the error or the number of breakpoints. Note that the setting of our study falls under fixing the error. As the foundation of our research, we first introduce the important work by Rossi et al. (2014), which focuses on minimizing error in piecewise linear approximation for first-order loss functions with a fixed number of breakpoints. Their method divides the domain into \(n\) intervals and uses a new discrete random variable to calculate the conditional expectation for each interval. Based on this, various heuristics have been proposed for different distributions (Rossi and Hendrix, 2014; Rossi et al., 2015). This approach is widely used, especially in the field of inventory management (Tunc et al., 2018; Kilic and Tunc, 2019; Gutierrez-Alcoba et al., 2023; Xiang et al., 2023). While the method in Rossi et al.
(2014) focuses on guarantees for minimizing error, it does not fully explore the relationship between error and the number of breakpoints and is specifically tailored for normal distributions. In contrast, our research can determine the relationship between error and breakpoints a priori. Our study is applicable to both continuous and discrete distributions and can also be used for scenario reduction in scenario data. There is a history of research on minimizing error in piecewise linear approximation for convex functions with fixed breakpoints (Cox, 1971; Gavrilovic, 1975; Imamoto and Tang, 2008; Liu and Liang, 2021), which serve as the basis for Rossi et al. (2014). Similar to Rossi et al. (2014), most of these works focus solely on minimizing error without delving into the relationship between error and the number of breakpoints. The sole exception is the study by Liu and Liang (2021), which provides the first trade-off between error and breakpoints. Although their study achieves an error analysis and trade-off similar to ours, it is difficult to apply to the loss function for the following two reasons: 1. Their trade-off analysis relies on derivative information, making it inapplicable to general loss functions composed of discrete distributions. 2. Their algorithm requires simplicity in the derivative form of the target function for error computation, making it challenging to directly apply to general loss functions. Additionally, most of the studies mentioned here require solving non-convex optimization problems, leaving the computational complexity unknown. In contrast, our research has the advantage of bounding the number of iterations of our algorithms by the number of breakpoints. Next, we review studies that aim to minimize the number of breakpoints given a specified allowable error, which, as stated in Ngueveu (2019), are much fewer in number compared to the other setting. Ngueveu (2019) proposed a method for minimizing breakpoints that can be applied to a specific MILP. While their results are substantially different from ours, their motivation to focus on the trade-off between error and the number of breakpoints is similar to ours. Here, we discuss the study by Rebennack and Kallrath (2015), which is most closely related to our research. Rebennack and Kallrath (2015) formulated a MILP optimization problem to minimize the number of breakpoints for general univariate functions, given an allowable error \(\epsilon\). Additionally, they proposed a heuristic for determining an adjacent breakpoint such that the error within an interval does not exceed \(\epsilon\) when a specific breakpoint is given. Even though the heuristic algorithm's approach is similar to ours, it does not provide bounds on the number of breakpoints, making our results non-trivial. Although it is slightly outside the scope, we discuss the research on the scenario generation approach, which is another popular method for dealing with a random variable \(X\) in stochastic optimization. In the scenario generation approach, a simpler discrete random variable \(Y\) is generated to approximate a random variable \(X\), with each value of \(Y\) referred to as a scenario. Sample average approximation, such as Monte Carlo Sampling, is one of the most widely used methods in scenario generation (Shapiro, 2003). Recall that our proposed method approximates \(X\) by a new discrete distribution \(\tilde{X}\) using conditional expectations based on Rossi et al. (2014).
Thus, it can also be viewed as a scenario generation approach that guarantees the number of generated scenarios and the error. Note that, in existing studies in scenario generation, the function incorporating \(X\) (in our case, \(\ell_{X}\)) is not specified, and the error is evaluated using some form of distance, such as the Wasserstein distance, between the target random variable \(X\) and the generated variable \(Y\). This is significantly different from our study, which evaluates the absolute error between \(\ell_{X}\) and \(\ell_{Y}\) when the function is specified. For the main methods related to scenario generation, please refer to Löhndorf (2016). An overview and structure of this study can be found in the next section; please refer to it for details. ## 2 Research Setting and Outline In this section, we outline the research, discuss its structure, and introduce the notation used throughout the study. ### Scope and Setting The following box summarizes the settings of this study. **Input:** 1. a half-open interval \((a,b]\subseteq\mathbb{R}\) 2. a one-dimensional random variable \(X\) 3. a univariate function \(f_{X}:(a,b]\to\mathbb{R};\ f_{X}(s)=\mathbb{E}[\min(s,X)]\) **Task:** 1. Create a new discrete random variable \(\tilde{X}\) from \(X\) such that \(f_{\tilde{X}}\) is a piecewise linear approximation of \(f_{X}\), which is based on Rossi et al. (2014). 2. Analyze \(f_{\tilde{X}}\) based on the following evaluation criteria. **Evaluation:** 1. the absolute approximation error defined by \[e_{X,\tilde{X}}\coloneqq\max_{s\in(a,b]}|f_{\tilde{X}}(s)-f_{X}(s)|\] 2. the number of breakpoints of \(f_{\tilde{X}}\) For the setting, the following two points need further clarification: 1. Target Function: We can show that a general loss function \(\ell_{X}\) can be expressed as the sum of an affine function and \(\mathbb{E}[\min(s,X)]\) (please refer to A.1). Without loss of generality, this reduction allows us to focus on the piecewise linear approximation of a simple function \(f_{X}(s):=\mathbb{E}[\min(s,X)]\). Thus, in the rest of the paper, we focus on \(f_{X}\). We use the half-open interval \((a,b]\) as the approximation domain in order to simplify the discussion when dealing with discrete distributions. 2. Method of Piecewise Linear Approximation: In this study, we employ the method proposed by Rossi et al. (2014) for piecewise linear approximation, as sketched in code below. In their method, we transform the target random variable \(X\) into a discrete random variable \(\tilde{X}\) that takes on fewer possible values than \(X\) as follows. First, we divide \(\mathbb{R}\) into \(n\) consecutive regions \(\mathcal{I}=(I_{1},\ldots,I_{n})\). We construct \(\tilde{X}\) from \(\mathcal{I}\), which takes the conditional expectation \(\mathbb{E}[X\mid X\in I]\) with probability \(P(X\in I)\) for each \(I\in\mathcal{I}\). Then we consider the loss function \(f_{\tilde{X}}\) obtained by replacing \(X\) with \(\tilde{X}\) in \(f_{X}\), which can be shown to be a piecewise linear function with \(n+1\) breakpoints. This implies that the piecewise linear function \(f_{\tilde{X}}\) is uniquely determined once a partition \(\mathcal{I}\) is fixed.
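An empirical version of this construction, for a sample-based \(X\), might look as follows; the function name and the equal-width cells are illustrative choices, not part of the paper's method.

```python
import numpy as np

def induced_variable(samples, edges):
    """Empirical version of the construction: for a partition of R into cells
    (edges[j-1], edges[j]], return the conditional means mu_j = E[X | X in I_j]
    and probabilities P(X in I_j) defining the induced discrete variable X~."""
    x = np.asarray(samples, dtype=float)
    mus, probs = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x > lo) & (x <= hi)
        if mask.any():                 # skip empty cells
            mus.append(x[mask].mean())
            probs.append(mask.mean())
    return np.array(mus), np.array(probs)

# Example: a lognormal "demand" X reduced to a handful of scenarios.
rng = np.random.default_rng(0)
X = rng.lognormal(mean=1.0, sigma=0.5, size=200_000)
edges = np.concatenate(([-np.inf], np.linspace(0.0, 10.0, 9), [np.inf]))
mus, probs = induced_variable(X, edges)
print(mus.round(3), probs.round(4), probs.sum())
```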
### Structure of the Paper The structure of the rest of this paper is as follows: In Section 3, we analyze the error \(e_{X,\tilde{X}}\) under the given partition \(\mathcal{I}\). In Section 4, we propose algorithms for generating a partition \(\mathcal{I}\) with an error less than \(\epsilon\) and derive upper bounds on the number of breakpoints for the induced piecewise linear functions. In Section 5, we derive lower bounds on the number of breakpoints based on the framework of our analysis. In Section 6, we compare the actual errors and the number of breakpoints with their theoretical values through numerical experiments and discuss the results. In Section 7, we give our conclusion. ### Notation and Assumptions We assume that all random variables treated in this paper are real-valued random variables with expected values. Random variables can be either continuous or discrete unless explicitly stated. For a random variable \(X\), its probability density function (if \(X\) is continuous) or probability mass function (if \(X\) is discrete) is denoted by \(p_{X}\). For a half-open interval \(I=(a,b]\subseteq\mathbb{R}\), we define the part of the expectation \(\mathbb{E}[X]\) restricted to \(I\) as \[\mathbb{E}_{a,b}[X]:=\left\{\begin{array}{ll}\int_{a}^{b}p_{X}(x)x\,dx&\text{if $X$ is continuous,}\\ \sum_{x\in(a,b]\cap S}P(X=x)x&\text{if $X$ is discrete,}\end{array}\right.\] where \(S\) is the support of \(X\). ## 3 Analysis of Approximation Error In this section, we introduce the piecewise linear method used in the study and evaluate the error in its approximation. ### Piecewise Linear Approximation Framework First, we introduce a piecewise linear approximation framework for the function \(f_{X}(s)=\mathbb{E}[\min(s,X)]\) on some half-open interval \((a,b]\subseteq\mathbb{R}\), which is based on Rossi et al. (2014). Let \(\mathcal{I}=(I_{0},I_{1},\ldots,I_{n},I_{n+1})\) be a partition of \(\mathbb{R}\) such that * \(I_{j}=(a_{j},b_{j}]\) for \(j\in\{0,1,\ldots,n,n+1\}\) such that \(b_{j}=a_{j+1}\) for \(j\in\{0,1,\ldots,n\}\) * \(a_{0}=-\infty\), \(b_{0}=a\), \(a_{n+1}=b\) and \(b_{n+1}=\infty\). To simplify the discussion, we assume that for all \(I\in\mathcal{I}\), \(P(X\in I)\) is positive. We consider a new discrete random variable \(\tilde{X}\) with \(\mathcal{I}\) to approximate \(X\) as follows. **Definition 3.1**.: A discrete random variable \(\tilde{X}\) is said to be induced by a random variable \(X\) with \(\mathcal{I}\) if \(\tilde{X}\) takes \(\mathbb{E}[X\mid X\in I_{j}]\) with probability \(P(X\in I_{j})\) for \(j\in\{0,1,\ldots,n,n+1\}\). With \(\tilde{X}\) instead of \(X\) in \(f_{X}\), \(f_{\tilde{X}}\) is written as \[f_{\tilde{X}}(s)\coloneqq\mathbb{E}[\min(s,\tilde{X})]=\sum_{j=0}^{n+1}(P(X\in I_{j})\cdot\min(s,\mu_{j})), \tag{2}\] where for \(j\in\{0,1,\ldots,n,n+1\}\) we define \[\mu_{j}\coloneqq\mathbb{E}[X\mid X\in I_{j}]. \tag{3}\] We can easily show that \(f_{\tilde{X}}\) is a continuous piecewise linear function as follows. **Proposition 3.2**.: \(f_{\tilde{X}}\) is a continuous piecewise linear function on \((a,b]\) with \(n\) breakpoints. Proof.: Assume that \(\mu_{i}<s\leq\mu_{i+1}\) for some \(i\in\{0,1,\ldots,n\}\). Then we have \[f_{\tilde{X}}(s)=\sum_{j=0}^{i}(P(X\in I_{j})\cdot\mu_{j})+s\cdot\sum_{j=i+1}^{n+1}P(X\in I_{j}).\] Therefore, \(f_{\tilde{X}}\) is a continuous piecewise linear function, whose breakpoints are \(\mu_{1},\ldots,\mu_{n}\). Based on the above results, the piecewise linear function \(f_{\tilde{X}}\) is uniquely determined once a partition \(\mathcal{I}\) of \((a,b]\) and the associated random variable \(\tilde{X}\) have been specified.
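Continuing the sketch above, \(f_{\tilde{X}}\) of Eq. (2) can be evaluated directly from the pairs \((\mu_{j},P(X\in I_{j}))\) and compared against a sample estimate of \(f_{X}(s)=\mathbb{E}[\min(s,X)]\):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.lognormal(mean=1.0, sigma=0.5, size=200_000)
edges = np.concatenate(([-np.inf], np.linspace(0.0, 10.0, 9), [np.inf]))

# Conditional means and cell probabilities, as in the previous sketch.
cells = [(X > lo) & (X <= hi) for lo, hi in zip(edges[:-1], edges[1:])]
mus = np.array([X[m].mean() for m in cells if m.any()])
probs = np.array([m.mean() for m in cells if m.any()])

def f_tilde(s):
    """Eq. (2): f_X~(s) = sum_j P(X in I_j) * min(s, mu_j); piecewise linear
    in s with kinks at the conditional means mu_j (Proposition 3.2)."""
    return float(np.sum(probs * np.minimum(s, mus)))

for s in (1.0, 2.5, 5.0):
    print(s, f_tilde(s), float(np.minimum(s, X).mean()))
```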
### Analysis of Approximation Error Now, we evaluate the absolute error between the piecewise linear function \(f_{\tilde{X}}\) and \(f_{X}\) defined as \[e_{X,\tilde{X}}=\max_{s\in(a,b]}|f_{\tilde{X}}(s)-f_{X}(s)|. \tag{4}\] Since \((I_{1},\cdots,I_{n})\) is a partition of \((a,b]\), by taking the maximum of the errors in each region, \(e_{X,\tilde{X}}\) is rewritten as \[e_{X,\tilde{X}} =\max_{j\in\{1,\ldots,n\}}\max_{s\in I_{j}}|f_{\tilde{X}}(s)-f_{X}(s)| =\max_{j\in\{1,\ldots,n\}}\Delta_{X}(I_{j}),\] where we define \[\Delta_{X}(I_{j})\coloneqq\max_{s\in I_{j}}|f_{\tilde{X}}(s)-f_{X}(s)|\quad(j\in\{1,\ldots,n\}). \tag{5}\] **Remark 3.3**.: Due to strict concavity of \(f_{X}\), \(e_{X,\tilde{X}}\) is minimized when all \(\Delta_{X}(I_{j})\) have the same value (Imamoto and Tang, 2008). Utilizing this property, research led by Rossi et al. (2014) and others employs an approach that solves a nonlinear optimization problem to make all \(\Delta_{X}(I_{j})\) equal. These analyses are only valid for the optimal partition. In our study, we evaluate the approximation error for any partition of \((a,b]\), which makes it possible to evaluate a piecewise linear approximation function that provides a good but non-optimal approximation. From here on, we will evaluate \(\Delta_{X}(I_{k})\) for fixed \(k\in\{1,\ldots,n\}\). For \(s\in(a,b]\), we transform \(f_{X}\) as follows. \[f_{X}(s)=\mathbb{E}[\min(s,X)]=\sum_{j=0}^{n+1}\mathbb{E}_{a_{j},b_{j}}[\min(s,X)]=\sum_{j=0}^{n+1}P(X\in I_{j})\mathbb{E}[\min(s,X_{I_{j}})],\] where we define the conditional random variable when \(X\in I_{j}\) as \[X_{I_{j}}\coloneqq X\mid X\in I_{j}\quad(j\in\{1,\ldots,n\}). \tag{6}\] When \(t\in I_{k}\), for any \(j\neq k\), we have \[\mathbb{E}[\min(t,X_{I_{j}})]=\min(t,\mu_{j})\] because we see \[\mathbb{E}[\min(t,X_{I_{j}})]=\begin{cases}\mu_{j}\quad(\leq b_{j}\leq a_{k}<t)&\text{if }j<k,\\ t\quad(\leq b_{k}\leq a_{j}\leq\mu_{j})&\text{if }j>k.\end{cases}\] Comparing \(f_{\tilde{X}}(s)\) as defined in (2) with \(f_{X}\), each term corresponding to \(j\neq k\) has the same value in both \(f_{X}\) and \(f_{\tilde{X}}\). Thus, by subtracting \(f_{X}(t)\) from \(f_{\tilde{X}}(t)\), we obtain the following. \[f_{\tilde{X}}(t)-f_{X}(t)=P(X\in I_{k})\left(\min(t,\mu_{k})-\mathbb{E}[\min(t,X_{I_{k}})]\right)\geq 0,\] where the last inequality is from Jensen's inequality since \(f_{X_{I_{k}}}\) is proven to be concave in Lemma A.1. By substituting the above equation into the definition of \(\Delta_{X}\) given by (5), we obtain the following lemma. **Lemma 3.4**.: For each \(j\in\{1,\cdots,n\}\), \[\Delta_{X}(I_{j})=P(X\in I_{j})\cdot\max_{s\in I_{j}}(\min(s,\mu_{j})-\mathbb{E}[\min(s,X_{I_{j}})]). \tag{7}\] We can show that the maximum value in \(\Delta_{X}(I_{j})\) is attained at \(s=\mathbb{E}[X\mid X\in I_{j}]\) and its value is calculated as follows, which is proved in the next subsection. The relationship is illustrated in Figure 1. **Lemma 3.5**.: For an interval \(I_{j}=(a_{j},b_{j}]\), \[\max_{s\in I_{j}}(\min(s,\mu_{j})-\mathbb{E}[\min(s,X_{I_{j}})])=\mathbb{E}_{a_{j},\mu_{j}}[\mu_{j}-X_{I_{j}}]\leq\frac{b_{j}-a_{j}}{4}.\] From the above lemma, we have the following theorem, where the error in each interval can be analytically computed and is bounded by a value proportional to the product of the length of each region and the probability within that region.
**Theorem 3.6**.: For an interval \(I_{j}=(a_{j},b_{j}]\) \((j\in\{1,\ldots,n\})\), \[\Delta_{X}(I_{j})=\mathbb{E}_{a_{j},\mu_{j}}[\mu_{j}-X]\leq P(X\in I_{j})\cdot\frac{b_{j}-a_{j}}{4}.\] Proof.: From the inequality in Lemma 3.5 and Lemma 3.4, we have \[\Delta_{X}(I_{j})\leq P(X\in I_{j})\cdot\frac{b_{j}-a_{j}}{4}.\] Next, we show the equality. From the equality in Lemma 3.5 and Lemma 3.4, we have \[\Delta_{X}(I_{j})=P(X\in I_{j})\cdot\mathbb{E}_{a_{j},\mu_{j}}[\mu_{j}-X_{I_{j}}].\] Since \[\mathbb{E}_{a_{j},\mu_{j}}[\mu_{j}-X_{I_{j}}]=\frac{\mathbb{E}_{a_{j},\mu_{j}}[\mu_{j}-X]}{P(X\in I_{j})},\] we obtain \[\Delta_{X}(I_{j})=\mathbb{E}_{a_{j},\mu_{j}}[\mu_{j}-X].\] We finally derive the following result. **Corollary 3.7**.: \[e_{X,\tilde{X}}=\max_{j\in\{1,\ldots,n\}}\mathbb{E}_{a_{j},\mu_{j}}[\mu_{j}-X]\leq\max_{j\in\{1,\ldots,n\}}\left(P(X\in I_{j})\cdot\frac{b_{j}-a_{j}}{4}\right).\tag{8}\] ### Proof of Lemma 3.5: Approximation Error of Piecewise Linear Approximation with One Breakpoint In this subsection, we provide a proof of Lemma 3.5. Let \(Y\) be a real-valued random variable whose support is a subset of \((a,b]\subseteq\mathbb{R}\). Let \(\mu\) be the mean value of \(Y\), that is, \(\mu:=\mathbb{E}[Y]\). Consider the piecewise linear approximation function to \(f_{Y}(s)=\mathbb{E}[\min(s,Y)]\), whose only breakpoint is the mean value \(\mathbb{E}[Y]\). Then, this function is \(\min(s,\mathbb{E}[Y])\) and we define its approximation error at a point \(s\in(a,b]\) as \[\delta_{Y}(s)\coloneqq|\min(s,\mathbb{E}[Y])-f_{Y}(s)|. \tag{9}\] To show the claim of Lemma 3.5, it suffices to show that the maximum value of \(\delta_{Y}(s)\) is equal to \(\mathbb{E}_{a,\mu}[\mu-Y]\) and it is bounded by \(\frac{b-a}{4}\). The following lemma shows that \(\delta_{Y}(s)\) attains its maximum value at \(s=\mu\) and its maximum value is \(\mathbb{E}_{a,\mu}[\mu-Y]\). The relationship is illustrated in Figure 2. **Lemma 3.8**.: \[\max_{s\in(a,b]}\delta_{Y}(s)=\delta_{Y}(\mu)=\mathbb{E}_{a,\mu}[\mu-Y]\] (10) Proof.: Assume \(s\in(a,b]\). Then, \(f_{Y}\) is written as \[f_{Y}(s)=\mathbb{E}_{a,s}[Y]+\mathbb{E}_{s,b}[s].\] Thus, if \(s\in(a,\mu]\), from \(\min(s,\mu)=s\), we see that \[\delta_{Y}(s) =s-(\mathbb{E}_{a,s}[Y]+\mathbb{E}_{s,b}[s]) =\mathbb{E}_{a,s}[s]+\mathbb{E}_{s,b}[s]-\mathbb{E}_{a,s}[Y]-\mathbb{E}_{s,b}[s] =\mathbb{E}_{a,s}[s-Y].\] Similarly, if \(s\in(\mu,b]\), from \(\min(s,\mu)=\mu\), we see that \[\delta_{Y}(s) =\mu-(\mathbb{E}_{a,s}[Y]+\mathbb{E}_{s,b}[s]) =\mathbb{E}_{a,s}[Y]+\mathbb{E}_{s,b}[Y]-\mathbb{E}_{a,s}[Y]-\mathbb{E}_{s,b}[s] =\mathbb{E}_{s,b}[Y-s].\] Hence, we obtain \[\delta_{Y}(s)=\left\{\begin{array}{ll}\mathbb{E}_{a,s}[s-Y]&\text{if }s\in(a,\mu],\\ \mathbb{E}_{s,b}[Y-s]&\text{if }s\in(\mu,b].\end{array}\right.\] For \(s\in(a,\mu)\) and \(s+\epsilon\in(a,\mu]\) such that \(\epsilon>0\), we see that \[\mathbb{E}_{a,s+\epsilon}[s+\epsilon-Y] =\mathbb{E}_{a,s}[s+\epsilon-Y]+\mathbb{E}_{s,s+\epsilon}[s+\epsilon-Y] =\mathbb{E}_{a,s}[s-Y]+\mathbb{E}_{a,s}[\epsilon]+\mathbb{E}_{s,s+\epsilon}[s+\epsilon-Y] \geq\mathbb{E}_{a,s}[s-Y].\] Thus, \(\delta_{Y}(s)\) is increasing on \((a,\mu]\); a similar argument shows that it is decreasing on \((\mu,b]\). Also, it is continuous since \(f_{Y}(s)\) and \(\min(s,\mu)\) are continuous from Lemma A.1. Therefore, it attains its maximum value at \(s=\mu\). We finally obtain the upper bound as follows.
We finally obtain the upper bound as follows.

**Theorem 3.9**.: \[\max_{s\in(a,b]}\delta_{Y}(s)\leq\frac{b-a}{4}. \tag{11}\]

Proof.: Since \(\max_{s\in(a,b]}\delta_{Y}(s)=\delta_{Y}(\mu)\) from Lemma 3.8, we have \[\delta_{Y}(\mu)=\mathbb{E}_{a,\mu}[\mu-Y]=P(a<Y\leq\mu)\mathbb{E}[\mu-Y\mid a<Y\leq\mu]=P_{A}(\mu-\mu_{A}), \tag{12}\] where \(P_{A}=P(Y\leq\mu)\) and \(\mu_{A}=\mathbb{E}[Y\mid Y\leq\mu]\). Define \(P_{B}=P(\mu<Y\leq b)\) and \(\mu_{B}=\mathbb{E}[Y\mid\mu<Y\leq b]\), where we assume that \(P_{B}>0\), since \(\delta_{Y}(\mu)=0\) holds when \(P_{B}=0\). Then, we have the relations \[P_{A}+P_{B}=1,\qquad P_{A}\mu_{A}+P_{B}\mu_{B}=\mu.\] Thus, we have \(P_{A}=\frac{\mu_{B}-\mu}{\mu_{B}-\mu_{A}}\). Therefore, substituting \(P_{A}\) into (12), we obtain \[\delta_{Y}(\mu)=\frac{(\mu-\mu_{A})(\mu_{B}-\mu)}{\mu_{B}-\mu_{A}}.\] Moreover, we obtain \[\frac{(\mu-\mu_{A})(\mu_{B}-\mu)}{\mu_{B}-\mu_{A}}\leq\max_{\mu^{\prime}\in[\mu_{A},\mu_{B}]}\frac{(\mu^{\prime}-\mu_{A})(\mu_{B}-\mu^{\prime})}{\mu_{B}-\mu_{A}}=\frac{\mu_{B}-\mu_{A}}{4},\] where the last equality holds because the maximum is attained at \(\mu^{\prime}=\frac{\mu_{A}+\mu_{B}}{2}\). Since \(\mu_{B}-\mu_{A}\leq b-a\) holds, we finally get \(\delta_{Y}(\mu)\leq\frac{b-a}{4}\).

Under certain assumptions, we can approximate \(\delta_{Y}(\mu)\approx\frac{b-a}{8}\), which is useful in the design of the algorithms in the next section.

**Remark 3.10**.: Let \(Z\) be a random variable with probability density function \(f:[a,b]\rightarrow\mathbb{R}\), and assume that \(f\) is absolutely continuous on \([a,b]\) and that its first derivative \(f^{\prime}\) belongs to the Lebesgue space \(L_{\infty}[a,b]\). From Theorem 2 in Barnett and Dragomir (2000), we have \[\left|\mathbb{E}[Z]-\frac{a+b}{2}\right|\leq\frac{(b-a)^{3}}{12}\|f^{\prime}\|_{\infty}.\] Thus, when the first derivative of the probability density function on \((a,b]\) is small enough, in the proof above we can approximate the error as \[\delta_{Y}(\mu)=\frac{(\mu-\mu_{A})(\mu_{B}-\mu)}{\mu_{B}-\mu_{A}}\approx\frac{b-a}{8},\] where \[\mu\approx\frac{a+b}{2},\quad\mu_{A}\approx\frac{a+\mu}{2},\quad\text{and}\quad\mu_{B}\approx\frac{\mu+b}{2}.\] Under this assumption, we can also approximate \(\Delta_{X}(I_{j})\) as \[\Delta_{X}(I_{j})\approx P(X\in I_{j})\cdot\frac{b_{j}-a_{j}}{8}. \tag{13}\]

## 4 Partition Algorithms

In this section, we propose a partition algorithm that guarantees a bounded number of breakpoints while keeping the error below \(\epsilon\). Our algorithm, shown in Algorithm 1, is based on the results of Section 3.

```
Input:
  - error bound function \(B\in\{B_{\text{exact}},B_{1/4},B_{1/8}\}\) defined by (14)
  - support of the random variable, \(S\subseteq\mathbb{R}\)
  - target interval \((a,b]\subseteq\mathbb{R}\)
  - acceptable error \(\epsilon>0\)
Output:
  a partition of \(\mathbb{R}\), \(\mathcal{I}=(I_{0},I_{1},\ldots,I_{n},I_{n+1})\), such that \((I_{1},\ldots,I_{n})\) is also a partition of \((a,b]\)
 1: \(a_{1}\gets a\)
 2: \(j\gets 1\)
 3: while \(B(a_{j},b)>\epsilon\) do
 4:   \(b_{j}\leftarrow\max\{y\in(a_{j},b)\cap S:B(a_{j},y)\leq\epsilon\}\)
 5:   \(a_{j+1}\gets b_{j}\)
 6:   \(I_{j}\leftarrow(a_{j},b_{j}]\)
 7:   \(j\gets j+1\)
 8: end while
 9: \(b_{j}\gets b\)
10: \(I_{j}\leftarrow(a_{j},b_{j}]\)
11: \(n\gets j\)
12: \(I_{0}\leftarrow(-\infty,a]\)
13: \(I_{n+1}\leftarrow(b,+\infty)\)
14: return \(\mathcal{I}=(I_{0},I_{1},\ldots,I_{n},I_{n+1})\)
```
**Algorithm 1** Partition Algorithm

The input to the Partition Algorithm consists of a tolerance \(\epsilon>0\), an interval \((a,b]\), and a bound function \(B(x,y)\) that roughly represents the error of the piecewise linear approximation over the interval \((x,y]\). \(B\) is chosen from \(\{B_{\text{exact}},B_{1/4},B_{1/8}\}\), defined as \[\begin{split} B_{\text{exact}}(x,y)&:=\mathbb{E}_{x,\mu}[\mu-X]\ \ (\mu=\mathbb{E}[X\mid X\in(x,y]])\\ B_{1/4}(x,y)&:=P(X\in(x,y])\cdot\frac{y-x}{4}\\ B_{1/8}(x,y)&:=P(X\in(x,y])\cdot\frac{y-x}{8},\end{split} \tag{14}\] where \(B_{\text{exact}}\) and \(B_{1/4}\) are derived from the equality and inequality of Theorem 3.6, respectively, and \(B_{1/8}\) is derived from Remark 3.10. For simplicity, we refer to the Partition Algorithm using the bound function \(B\) as Algorithm \(B\). The output is a partition of \(\mathbb{R}\). The idea behind the algorithm is quite simple. We start with \(a_{1}=a\), and in the \(j\)-th iteration, with \(a_{j}\) already determined, we choose \(b_{j}\) to be the largest value such that the error function \(B(a_{j},b_{j})\) does not exceed \(\epsilon\). In this study, we assume that we can find \(b_{j}\) exactly; the validity of this assumption and an actual method for finding \(b_{j}\) are discussed later. Our algorithms bear a resemblance to the heuristic algorithms presented by Rebennack and Kallrath (2015), in which the next maximum breakpoint is selected to ensure that the error does not exceed \(\epsilon\). The key distinction is that, in our algorithms, we determine the breakpoints indirectly by defining the intervals. From Theorem 3.6, we see the relation \[\Delta_{X}((x,y])=B_{\mathrm{exact}}(x,y)\leq 2B_{1/8}(x,y)=B_{1/4}(x,y).\] Therefore, we have the following result about approximation errors from Corollary 3.7.

**Theorem 4.1**.: Let \(\tilde{X}\) be the discrete random variable induced by piecewise linear approximation with the output \(\mathcal{I}\) of Algorithm 1. Depending on the setting of \(B\), the following results are obtained: \[e_{X,\tilde{X}}\leq\epsilon\quad\text{if }B=B_{\mathrm{exact}}\text{ or }B=B_{1/4},\qquad e_{X,\tilde{X}}\leq 2\epsilon\quad\text{if }B=B_{1/8}.\]

The remaining issue is how large the upper bound on the number of breakpoints is for each algorithm. The results obtained in this study are summarized in Table 1. As can be seen from Table 1, the bounds on the number of breakpoints differ depending on whether the target random variable is continuous or discrete. Furthermore, \(B_{\mathrm{exact}}\) and \(B_{1/4}\) yield equivalent results in terms of theoretical guarantees. Apart from these differences, each algorithm has its own characteristics. We can show that Algorithm \(B_{\mathrm{exact}}\) is guaranteed to produce the minimum number of breakpoints. However, for continuous random variables, the calculation of \(B_{\mathrm{exact}}\) involves numerical integration for conditional expectations, making its implementation costly and potentially introducing numerical errors.
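The three bound functions in (14) are straightforward to implement for a continuous \(X\); the sketch below (our own illustration, assuming SciPy) makes the cost asymmetry concrete: \(B_{\text{exact}}\) needs quadrature for the conditional expectations, while \(B_{1/4}\) and \(B_{1/8}\) need only two CDF evaluations.

```python
from scipy import stats
from scipy.integrate import quad

X = stats.norm(0.0, 1.0)   # illustrative choice

def B_exact(x, y):
    # E_{x,mu}[mu - X] with mu = E[X | X in (x, y]]  (equality in Theorem 3.6)
    p = X.cdf(y) - X.cdf(x)
    if p <= 0.0:
        return 0.0
    mu = quad(lambda t: t * X.pdf(t), x, y)[0] / p
    return quad(lambda t: (mu - t) * X.pdf(t), x, mu)[0]

def B_quarter(x, y):
    return (X.cdf(y) - X.cdf(x)) * (y - x) / 4.0

def B_eighth(x, y):
    return (X.cdf(y) - X.cdf(x)) * (y - x) / 8.0

# B_exact <= 2 * B_eighth = B_quarter, as in the displayed relation:
print(B_exact(-1.0, 1.0), B_quarter(-1.0, 1.0), B_eighth(-1.0, 1.0))
```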
Algorithm \(B_{1/8}\) could theoretically have an error up to \(2\epsilon\), but in practice it behaves almost identically to \(B_{\mathrm{exact}}\) (see the section on numerical experiments for details). Finally, we outline the remaining structure of this section. In Section 4.1, we discuss specific implementation methods for finding \(b_{j}\). In Section 4.2, we prove the optimality of Algorithm \(B_{\mathrm{exact}}\). Lastly, in Section 4.3, we provide bounds on the number of breakpoints for each algorithm.

\begin{table} \begin{tabular}{c c c c} \hline \hline Algorithm & error & \multicolumn{2}{c}{ breakpoints} \\ & & continuous & discrete \\ \hline \(B_{\mathrm{exact}}\) and \(B_{1/4}\) & \(\epsilon\) & \(\frac{1}{2}\sqrt{\frac{b-a}{\epsilon}}\) & \(\sqrt{\frac{b-a}{\epsilon}}\) \\ \(B_{1/8}\) & \(2\epsilon\) & \(\frac{1}{2\sqrt{2}}\sqrt{\frac{b-a}{\epsilon}}\) & \(\frac{1}{\sqrt{2}}\sqrt{\frac{b-a}{\epsilon}}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of Properties for Each Algorithm

### Implementation of Finding Next Point

Here we describe actual implementation methods for finding the \(b_{j}\) of line 4 of Algorithm 1, even though we assume in the analysis that \(b_{j}\) is determined exactly. Unless otherwise specified, \(B\) can be any of (14), where we assume \(B(x,x)=0\) for any \(x\). First, for \(x\in(a,b)\), we define the following function \(L_{x}\colon(x,b]\to\mathbb{R}\): \[L_{x}(y)=B(x,y)-\epsilon. \tag{15}\] At the time of the \(j\)-th iteration in line 4, \(L_{a_{j}}(a_{j})=-\epsilon<0\) and \(L_{a_{j}}(b)>0\) hold. We first consider the case where \(X\) is continuous. For simplicity, we assume that the support of \(X\) is \(\mathbb{R}\) and that \(L_{a_{j}}\) is continuous and strictly monotonically increasing; this assumption holds for standard distributions. In this case, there exists exactly one \(y\) within \((a_{j},b)\) such that \(L_{a_{j}}(y)=0\), that is, \(B(a_{j},y)=\epsilon\). It can be located by calling \(B\) \(O(\log(\frac{b-a}{d}))\) times using binary search, for a small allowable error \(d>0\). Next, we consider the case where \(X\) is discrete. Let \(Z=\{x_{1},\ldots,x_{K}\}\), where \(x_{1}<\cdots<x_{K}\), be the intersection of the support \(S\) of \(X\) and \((a,b]\). We assume that \(a_{j}=x_{k[j]}\in Z\). We can find the largest \(y\) in \(\{x_{k[j]+1},\ldots,x_{K-1}\}\) satisfying \(L_{a_{j}}(y)\leq 0\) with \(O(\log(b-a))\) calls to \(B\). Finally, we note that special handling is needed for \(B_{1/4}\) and \(B_{1/8}\) when \(\epsilon\) is extremely small. In such cases, it is possible that no \(y\) satisfying \(B(a_{j},y)\leq\epsilon\) exists within \(\{x_{k[j]+1},\ldots,x_{K-1}\}\). In this situation, we set \(b_{j}=x_{k[j]+1}\). Although this makes \(B(a_{j},b_{j})>\epsilon\), the actual error on \((a_{j},b_{j}]\) is still bounded by \(\epsilon\), since the exact error satisfies \(B_{\text{exact}}(a_{j},b_{j})=0\).
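Below is a sketch of Algorithm 1 for a continuous \(X\), locating each \(b_{j}\) by the binary search just described. The tolerance handling is simplified relative to the released package, and the function names are ours:

```python
import numpy as np

def partition(B, a, b, eps, tol=1e-8):
    """Greedy partition of (a, b]: choose b_j as the largest y with
    B(a_j, y) <= eps, found by binary search (continuous case)."""
    intervals = []
    a_j = a
    while B(a_j, b) > eps:
        lo, hi = a_j, b        # invariant: B(a_j, lo) <= eps < B(a_j, hi)
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if B(a_j, mid) <= eps:
                lo = mid
            else:
                hi = mid
        intervals.append((a_j, lo))
        a_j = lo
    intervals.append((a_j, b))
    # pad with the two unbounded tail intervals I_0 and I_{n+1}
    return [(-np.inf, a)] + intervals + [(b, np.inf)]

# e.g., with B_quarter from the previous sketch:
# parts = partition(B_quarter, -3.0, 3.0, eps=0.05)
# print(len(parts) - 2, "intervals on (a, b]")
```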
### Optimality

In this section, we show the optimality of Algorithm \(B_{\text{exact}}\). Specifically, we show that the output of this algorithm is an optimal solution to the following optimization problem: for given \(\epsilon>0\) and \((a,b]\subseteq\mathbb{R}\), \[\begin{array}{ll}\text{minimize}&n\\ \text{subject to}&\Delta_{X}(I_{j})\leq\epsilon\quad\forall j\in\{1,\ldots,n\},\end{array} \tag{16}\] where \(n\in\mathbb{N}\) and a partition \(\mathcal{I}=(I_{0},\ldots,I_{n+1})\) are the decision variables.

**Theorem 4.2**.: Let \(\mathcal{I}=(I_{0},I_{1},\ldots,I_{n},I_{n+1})\) be the output of Algorithm \(B_{\text{exact}}\). Then, \(\mathcal{I}\) is an optimal solution of (16).

Proof.: Let \(\mathcal{I}=(I_{0},I_{1},\ldots,I_{n},I_{n+1})\) be the output of Algorithm 1, where we use the notation \(I_{j}=(a_{j},b_{j}]\) for \(j\in\{1,\ldots,n\}\). We derive a contradiction by assuming that there is a feasible solution \(\mathcal{I}^{*}=(I_{0}^{*},\ldots,I_{m+1}^{*})\) such that \(m<n\), where we also use the notation \(I_{j}^{*}=(a_{j}^{*},b_{j}^{*}]\). Without loss of generality, we assume that \(a_{j}^{*},b_{j}^{*}\) are chosen in the support \(S\) of \(X\). We prove by induction that \(b_{j}^{*}\leq b_{j}\) for any \(j\in\{1,\ldots,m\}\). When \(j=1\), we see that \(a_{1}=a_{1}^{*}=a\), \(\Delta_{X}((a,b_{1}])\leq\epsilon\), and \(\Delta_{X}((a,b_{1}^{*}])\leq\epsilon\). Since \(b_{1}\) is the maximum value in \(S\cap(a,b]\) such that \(\Delta_{X}((a,b_{1}])\leq\epsilon\), we have \(b_{1}^{*}\leq b_{1}\). Now, for \(j\in\{2,\ldots,m\}\), we assume that \(b_{j-1}^{*}\leq b_{j-1}\). First, we consider the case when \(b_{j}^{*}\leq a_{j}\); in this case \(b_{j}^{*}\leq a_{j}<b_{j}\) holds. Second, we assume that \(a_{j}\leq b_{j}^{*}\). From the induction hypothesis, we have \[a_{j}^{*}=b_{j-1}^{*}\leq b_{j-1}=a_{j}.\] Therefore, \(a_{j}^{*}\leq a_{j}\leq b_{j}^{*}\) holds. From this relation and Lemma A.2, it follows that \[\Delta_{X}((a_{j},b_{j}^{*}])\leq\Delta_{X}((a_{j}^{*},b_{j}^{*}])\leq\epsilon.\] If \(b_{j}^{*}>b_{j}\), this contradicts the fact that \(b_{j}\) is maximal in \((a_{j},b]\cap S\) such that \(\Delta_{X}((a_{j},b_{j}])\leq\epsilon\). Thus, we have \(b_{j}^{*}\leq b_{j}\) for all \(j\in\{1,\ldots,m\}\). Taking \(j=m\) in the result above, we have the contradiction \[b=b_{m}^{*}\leq b_{m}<b_{n}=b.\] Hence, for any feasible solution \(\mathcal{I}^{*}=(I_{0}^{*},\ldots,I_{m+1}^{*})\), \(m\geq n\) holds, which implies that \(\mathcal{I}\) is optimal.

### Upper Bounds of Breakpoints

In this section, we provide bounds on the number of output intervals for each algorithm. Note that we give proofs of the bounds only for \(B_{1/4}\) and \(B_{1/8}\), since the number of breakpoints of Algorithm \(B_{\text{exact}}\) is less than or equal to that of Algorithm \(B_{1/4}\) by Theorem 4.2. First, we derive bounds for the case when \(X\) is continuous.

**Theorem 4.3**.: Assume that \(X\) is continuous. Let \(\mathcal{I}=(I_{0},I_{1},\ldots,I_{n},I_{n+1})\) be the output of Algorithm 1, and let \(\tilde{X}\) be the discrete random variable induced by piecewise linear approximation with \(\mathcal{I}\). Depending on the setting of \(B\), the following inequalities hold: \[n\leq\frac{1+P}{4}\sqrt{\frac{b-a}{\epsilon}}+1\leq\frac{1}{2}\sqrt{\frac{b-a}{\epsilon}}+1\qquad\text{if }B\in\{B_{\text{exact}},B_{1/4}\},\] \[n\leq\frac{1+P}{4\sqrt{2}}\sqrt{\frac{b-a}{\epsilon}}+1\leq\frac{1}{2\sqrt{2}}\sqrt{\frac{b-a}{\epsilon}}+1\qquad\text{if }B=B_{1/8},\] where \(P=P(X\in(a,b])\).
Proof.: Let \(\mathcal{I}=(I_{0},I_{1},\ldots,I_{n},I_{n+1})\) be the output of Algorithm 1, where we use the notation \(I_{j}=(a_{j},b_{j}]\) for \(j\in\{1,\ldots,n\}\). Let \(B\in\{B_{1/4},B_{1/8}\}\) and \[M=\begin{cases}1&\text{if}\quad B=B_{1/4},\\ 2&\text{if}\quad B=B_{1/8}.\end{cases}\] For \(j\in\{1,\ldots,n\}\), define \(p_{j}=P(X\in I_{j})\) and \(r_{j}=\frac{b_{j}-a_{j}}{b-a}\). From \(\sum_{j=1}^{n-1}p_{j}\leq P\) and \(\sum_{j=1}^{n-1}r_{j}\leq 1\), we have \[\sum_{j=1}^{n-1}(p_{j}+r_{j})\leq 1+P.\] We obtain an upper bound on \(n\) by estimating a lower bound on \(p_{j}+r_{j}\). For any \(j\in\{1,\ldots,n-1\}\), since \(B(a_{j},b_{j})=\epsilon\) holds in line 4 of the Partition Algorithm, we have \[\epsilon=P(X\in I_{j})\frac{b_{j}-a_{j}}{4M}=\frac{p_{j}r_{j}(b-a)}{4M}.\] Thus, we have \[p_{j}r_{j}=\frac{4M\epsilon}{b-a}.\] By the inequality of arithmetic and geometric means, we obtain \[p_{j}+r_{j}\geq 2\sqrt{p_{j}r_{j}}=4\sqrt{\frac{M\epsilon}{b-a}}.\] Summing over \(j\) from \(1\) to \(n-1\), we obtain \[4(n-1)\sqrt{\frac{M\epsilon}{b-a}}\leq\sum_{j=1}^{n-1}(p_{j}+r_{j})\leq 1+P.\] Therefore, \[n\leq\frac{1+P}{4\sqrt{M}}\sqrt{\frac{b-a}{\epsilon}}+1.\]

Next, we derive bounds for the case when \(X\) is discrete.

**Theorem 4.4**.: Assume that \(X\) is discrete. Let \(\mathcal{I}=(I_{0},I_{1},\ldots,I_{n},I_{n+1})\) be the output of Algorithm 1, and let \(\tilde{X}\) be the discrete random variable induced by piecewise linear approximation with \(\mathcal{I}\). Depending on the setting of \(B\), the following inequalities hold: \[n\leq\frac{1+P}{2}\sqrt{\frac{b-a}{\epsilon}}+1\leq\sqrt{\frac{b-a}{\epsilon}}+1\qquad\text{if }B=B_{\text{exact}}\text{ or }B=B_{1/4},\] \[n\leq\frac{1+P}{2\sqrt{2}}\sqrt{\frac{b-a}{\epsilon}}+1\leq\frac{1}{\sqrt{2}}\sqrt{\frac{b-a}{\epsilon}}+1\qquad\text{if }B=B_{1/8},\] where \(P=P(X\in(a,b])\).

Proof.: Recall that the endpoints \(a_{j}\) and \(b_{j}\) of the interval \(I_{j}=(a_{j},b_{j}]\) are chosen from the support of \(X\), denoted \(\{x_{1},\ldots,x_{K}\}\), and that the index selected in the \(j\)-th iteration is \(k[j]\), so that \(x_{k[j]}=b_{j}\). Define \(p_{j}=P(X\in I_{j})+P(X=x_{k[j]+1})\) and \(r_{j}=\frac{b_{j}-a_{j}}{b-a}+\frac{x_{k[j]+1}-x_{k[j]}}{b-a}\) for \(j\in\{1,\ldots,n-1\}\). From \(\sum_{j=1}^{n-1}p_{j}\leq 2P\) and \(\sum_{j=1}^{n-1}r_{j}\leq 2\), we have \[\sum_{j=1}^{n-1}(p_{j}+r_{j})\leq 2(1+P).\] We obtain an upper bound on \(n\) by estimating a lower bound on \(p_{j}+r_{j}\). For any \(j\in\{1,\ldots,n-1\}\), since \(k[j]\) is the maximum index such that \(B(a_{j},x_{k[j]})\leq\epsilon\), we have \[B(a_{j},x_{k[j]+1})=(P(X\in I_{j})+P(X=x_{k[j]+1}))\frac{x_{k[j]+1}-a_{j}}{4M}>\epsilon.\] From \(b_{j}=x_{k[j]}\) and the definitions of \(p_{j}\) and \(r_{j}\), we have \[\epsilon<(P(X\in I_{j})+P(X=x_{k[j]+1}))\frac{b_{j}-a_{j}+x_{k[j]+1}-x_{k[j]}}{4M}=p_{j}r_{j}\frac{b-a}{4M}.\] Thus, we have \[\frac{4M\epsilon}{b-a}<p_{j}r_{j}.\] Following almost the same procedure as in the continuous case, we obtain \[n\leq\frac{1+P}{2\sqrt{M}}\sqrt{\frac{b-a}{\epsilon}}+1.\]
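Continuing the sketches above, Theorem 4.3 can be checked empirically; the setting below mirrors instance C-N1 of the numerical experiments, and `partition` and `B_quarter` are the hypothetical helpers from the earlier sketches.

```python
import numpy as np
from scipy import stats

X = stats.norm(0.0, 1.0)
a, b, eps = -3.0, 3.0, 0.01                # the C-N1 setting with eps = 0.01
parts = partition(B_quarter, a, b, eps)    # helpers from the earlier sketches
n = len(parts) - 2                         # intervals on (a, b]
P = X.cdf(b) - X.cdf(a)
bound = (1 + P) / 4 * np.sqrt((b - a) / eps) + 1
print(n, "<=", bound)   # Table 4 reports 11 intervals vs. an upper bound of 13
```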
## 5 Lower bounds of the breakpoints

In this section, we derive a lower bound on the number of breakpoints needed to achieve a given absolute error \(\epsilon>0\) when using a piecewise linear approximation based on the approach in Section 2. Table 2 summarizes the results from Section 3 and this section. From these results, we see that the upper bounds on the number of breakpoints of our proposed algorithms are close to the lower bounds.

\begin{table} \begin{tabular}{l c c} \hline & Continuous & Discrete \\ \hline Lower Bound & \(\frac{\sqrt{2}}{4}\approx 0.35\) & \(\frac{\sqrt{2}}{2}\approx 0.71\) \\ \hline Upper Bound of \(B_{\text{exact}}\) or \(B_{1/4}\) & \(\frac{1}{2}\) & \(1\) \\ \hline \end{tabular} \end{table} Table 2: Coefficients of the Upper and Lower Bounds in Terms of \(\sqrt{\frac{b-a}{\epsilon}}\)

We first introduce a common setting for the discrete and continuous cases. For given \(a\), \(b\), \(\epsilon\) such that \(a<b\) and \(\epsilon>0\), consider the number of intervals necessary for an approximation error within \(\epsilon\). Define \[N\coloneqq\left\lceil\frac{\sqrt{2}}{2}\sqrt{\frac{W}{\epsilon}}\right\rceil-1<\frac{\sqrt{2}}{2}\sqrt{\frac{W}{\epsilon}}, \tag{17}\] where \(W=b-a\). Define the following \(N\) equally spaced points in \((a,b]\): \[x_{k}=a+\frac{W}{N}k\quad(k=1,\ldots,N).\] Roughly speaking, we consider a random variable that takes the value \(x_{k}\) with probability \(1/N\), in both the discrete and continuous cases.

### Discrete Case

Let \(X\) be the discrete random variable whose support is \(\{x_{1},\ldots,x_{N}\}\) and whose probability function is \[p_{X}(x)=\left\{\begin{array}{ll}\frac{1}{N}&\mbox{if }x=x_{k}\ (k=1,\ldots,N),\\ 0&\mbox{otherwise.}\end{array}\right.\] The distribution of \(X\) is illustrated in Figure 3.

Figure 3: Distribution of the discrete random variable \(X\).

**Lemma 5.1**.: For any \(I=(a^{\prime},b^{\prime}]\subseteq(a,b]\), \[P(X\in I)>\frac{1}{N}\Rightarrow\Delta_{X}(I)>\epsilon.\]

Proof.: Let \(I\subseteq(a,b]\) be an interval such that \(|I\cap\{x_{1},\ldots,x_{N}\}|\geq 2\). Without loss of generality, from Lemma A.2, for some small positive \(o>0\) and \(k\in\{1,\ldots,N-1\}\), we assume \(I=(x_{k}-o,x_{k+1}]\). From Lemma 3.4, defining \(\mu=\mathbb{E}[X\mid X\in I]=\frac{x_{k}+x_{k+1}}{2}\), we have \[\Delta_{X}(I)=\mathbb{E}_{x_{k}-o,\mu}[\mu-X]=\mathbb{E}_{x_{k},\mu}[\mu-X],\] where the last equality holds since \(X\) can only take the value \(x_{k}\), with probability \(1/N\), between \(x_{k}-o\) and \(\mu\). Finally, we have \[\mathbb{E}_{x_{k},\mu}[\mu-X]=\left(\frac{x_{k}+x_{k+1}}{2}-x_{k}\right)\cdot\frac{1}{N}=\frac{W}{2N^{2}}>\epsilon,\] where we use \(x_{k+1}-x_{k}=\frac{W}{N}\), and the last inequality follows from (17).

**Theorem 5.2**.: Let \(\tilde{X}\) be a discrete random variable induced by piecewise linear approximation with \(\mathcal{I}\) of the random variable \(X\). Then, \[e_{X,\tilde{X}}\leq\epsilon\Rightarrow|\mathcal{I}|>N.\]

Proof.: Let \(\tilde{X}\) be a discrete random variable induced by piecewise linear approximation with \(\mathcal{I}\) of a random variable \(X\) such that \(\max_{s\in(a,b]}|f_{\tilde{X}}(s)-f_{X}(s)|\leq\epsilon\). For any \(I\in\mathcal{I}\), from \(\Delta_{X}(I)\leq\epsilon\) and Lemma 5.1, \[1=\sum_{I\in\mathcal{I}}P(X\in I)<\frac{|\mathcal{I}|}{N}.\]

### Continuous Case

For some small \(w>0\), let \(X\) be the continuous random variable whose support is \(S\coloneqq\cup_{k=1}^{N}(x_{k}-w,x_{k}]\) and whose probability density function is \[p_{X}(x)=\left\{\begin{array}{ll}\frac{1}{Nw}&\mbox{if }x\in S,\\ 0&\mbox{otherwise.}\end{array}\right. \tag{18}\] The distribution of \(X\) is illustrated in Figure 4.

**Lemma 5.3**.: For \(I=(a^{\prime},b^{\prime}]\subseteq(a,b]\), \[P(X\in I)\geq\frac{2}{N}\Rightarrow\Delta_{X}(I)>\epsilon.\]
Proof.: From Lemma A.2, without loss of generality, we assume that \(I=(x_{k}-w,x_{k+1}]\), where \(P(X\in I)=\frac{2}{N}\). From Lemma 3.4, we have \[\Delta_{X}(I)=\mathbb{E}_{x_{k}-w,\mu}[\mu-X],\] where \(\mu=\mathbb{E}[X\mid X\in I]=\frac{x_{k}+x_{k+1}}{2}-\frac{w}{2}\). For each term, we have \[\mathbb{E}_{x_{k}-w,\mu}[\mu]=\frac{1}{Nw}\cdot w\cdot\mu=\frac{\mu}{N},\] \[\mathbb{E}_{x_{k}-w,\mu}[X]=\mathbb{E}[X\mid X\in(x_{k}-w,\mu]]\cdot P(X\in(x_{k}-w,\mu])=\left(x_{k}-\frac{w}{2}\right)\cdot\frac{w}{Nw}.\] Finally, we get \[\mathbb{E}_{x_{k}-w,\mu}[\mu-X]=\left(\frac{x_{k}+x_{k+1}}{2}-\frac{w}{2}-x_{k}+\frac{w}{2}\right)\cdot\frac{1}{N}=\frac{W}{2N^{2}}>\epsilon,\] where we use \(x_{k+1}-x_{k}=\frac{W}{N}\), and the last inequality follows from the definition of \(N\) in (17).

**Theorem 5.4**.: Let \(\tilde{X}\) be a discrete random variable induced by piecewise linear approximation with \(\mathcal{I}\) of the random variable \(X\). Then, \[e_{X,\tilde{X}}\leq\epsilon\Rightarrow|\mathcal{I}|>\frac{N}{2}.\]

Proof.: Let \(\tilde{X}\) be a discrete random variable induced by piecewise linear approximation with \(\mathcal{I}\) of a random variable \(X\) such that \(\max_{s\in(a,b]}|f_{\tilde{X}}(s)-f_{X}(s)|\leq\epsilon\). For any \(I\in\mathcal{I}\), from \(\Delta_{X}(I)\leq\epsilon\) and Lemma 5.3, \[1=\sum_{I\in\mathcal{I}}P(X\in I)<\frac{2|\mathcal{I}|}{N}.\]

Figure 4: Distribution of \(X\).

## 6 Numerical Experiments

In this section, through numerical experiments, we compare the actual number of breakpoints generated by our proposed algorithms with the bounds derived in Section 4, across various distributions.1 Footnote 1: We have released the proposed algorithm as a Python package at [https://github.com/takazawa/piecewise-linearization-first-order-loss](https://github.com/takazawa/piecewise-linearization-first-order-loss), where the experimental code is also included.

### Implementation

We implemented our algorithms using Python 3.9 on a MacBook Pro laptop equipped with an Apple M1 Max CPU. We implemented all procedures required for the algorithms and for generating distributions using the free scientific computing library SciPy, version 1.9.1 (Virtanen et al., 2020). For distribution generation, we utilized the distribution classes available in scipy.stats. We performed the calculation of conditional expectations for continuous distributions using the numerical integration function quad in scipy.integrate, with the absolute numerical tolerance set to \(10^{-8}\). Furthermore, we carried out the search for the \(b_{j}\) satisfying \(B(a_{j},b_{j})-\epsilon=0\) for continuous distributions using the find_root function in scipy.optimize, through binary search, with the absolute tolerance for \(b_{j}\) also set to \(10^{-8}\).

### Target Distributions

Table 3 summarizes the distributions considered in this study. The instance column lists the instance names, and the distribution column gives the distribution names; the prefix C or D in an instance name signifies a continuous or discrete distribution, respectively. The parameter column shows the parameters set for each instance. We used representative parameters for each distribution; for discrete distributions, we set the mean to be approximately 100. The approximation intervals are given in the columns \(a\) and \(b\); we set them to be about \(\pm 3\) standard deviations from the mean.
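The interval choices of Table 3 can be reproduced in a few lines; the sketch below is our own illustration, and the clipping of \(a\) and \(b\) to the distribution's support is our reading of the table values.

```python
from scipy import stats

instances = {
    "C-N1":  stats.norm(0, 1),
    "C-Exp": stats.expon(scale=1.0),   # lambda = 1
    "C-Bet": stats.beta(2, 5),
    "D-Poi": stats.poisson(100),
}
for name, X in instances.items():
    m, s = X.mean(), X.std()
    lo, hi = X.support()
    a, b = max(m - 3 * s, lo), min(m + 3 * s, hi)
    print(f"{name}: a = {a:.1f}, b = {b:.1f}")   # matches the a, b of Table 3
```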
### Results and Discussion

#### 6.3.1 Details of Table Fields

Table 4 presents the results obtained in this study. We conducted experiments varying the allowable error for each instance as \(\epsilon\in\{0.1,0.05,0.01\}\) (shown in the column \(\epsilon\)). We ran Algorithm \(B\in\{B_{\text{exact}},B_{1/4},B_{1/8}\}\) and compared the following two items:

**Number of Intervals Columns:** We compared the number of intervals dividing \((a,b]\) output by each algorithm to the upper bound obtained in this study; these are displayed in the intervals columns. The \(B\in\{B_{\text{exact}},B_{1/4},B_{1/8}\}\) columns indicate the actual number of intervals output by each algorithm \(B\); more precisely, this is \(n\) in Algorithm 1. The \(UB\in\{UB_{1/4},UB_{1/8}\}\) columns give the upper bounds on the number of breakpoints from Table 1 in Section 4, rounded to integers. Here, \(UB_{1/4}\) and \(UB_{1/8}\) are the upper bounds corresponding to Algorithms \(B_{1/4}\) (\(B_{\text{exact}}\)) and \(B_{1/8}\), respectively.

**Error Columns:** We compared the input \(\epsilon\) with the error \(e_{X,\tilde{X}}\) of the \(\tilde{X}\) obtained from the partition output by each algorithm. We calculated \(e_{X,\tilde{X}}\) analytically based on Corollary 3.7. Although the results are not included in the paper, we confirmed that the error calculated analytically based on Corollary 3.7 matched the error calculated by SciPy's optimization library. In the table, the error columns list the ratio obtained by dividing the actual error by \(\epsilon\).

#### 6.3.2 Results

1. Difference between Algorithms: Although not shown in the table, the computation time of each algorithm was less than 0.2 seconds. First, we discuss the number of intervals. For every distribution, the numbers of intervals output by \(B_{\text{exact}}\) and \(B_{1/8}\) differed by only 1-2. The number of intervals for \(B_{1/4}\) was approximately 40% greater than for \(B_{\text{exact}}\) or \(B_{1/8}\) in each instance. Next, we discuss the error. For \(B_{\text{exact}}\), the error values for continuous distributions were all 1.0, because, except for the last interval, we selected intervals on which the error was exactly \(\epsilon\). For discrete distributions, on the other hand, it was not possible to select a point where the error was exactly \(\epsilon\), resulting in slightly smaller values. The errors for \(B_{1/4}\) and \(B_{1/8}\) were slightly less than 0.5 and 1, respectively, for most distributions.

2.
Difference between Actual Values and Upper Bounds: \begin{table} \begin{tabular}{l l l r r} \hline \hline instance & distribution & parameter & \(a\) & \(b\) \\ \hline C-N1 & Normal & \(\mu=0,\sigma=1\) & -3.0 & 3.0 \\ C-N2 & Normal & \(\mu=0,\sigma=5\) & -15.0 & 15.0 \\ C-Exp & Exponential & \(\lambda=1\) & 0.0 & 4.0 \\ C-Uni & Uniform & \(a=0,b=1\) & 0.0 & 1.0 \\ C-Bet & Beta & \(\alpha=2,\beta=5\) & 0.0 & 0.8 \\ C-Gam & Gamma & \(k=2,\theta=1\) & 0.0 & 6.2 \\ C-Chi & Chi-Squared & \(k=3\) & 0.0 & 10.3 \\ C-Stu & Student’s t & \(\nu=10\) & -3.4 & 3.4 \\ C-Log & Logistic & \(\mu=0,s=1\) & -5.4 & 5.4 \\ C-Lgn & Lognormal & \(\mu=0,\sigma=1\) & 0.0 & 8.1 \\ D-Bin & Binomial & \(n=200,p=0.5\) & 78.0 & 121.0 \\ D-Poi & Poisson & \(\lambda=100\) & 70.0 & 130.0 \\ D-Geo & Geometric & \(p=0.01\) & 1.0 & 398.0 \\ D-Neg & Negative Binomial & \(r=100,p=0.5\) & 57.0 & 142.0 \\ \hline \hline \end{tabular} \end{table} Table 3: Target Distributions and Intervals \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline instance & \(\epsilon\) & \multicolumn{6}{c}{the number of intervals} & \multicolumn{6}{c}{error (=\(e_{X,\widehat{X}}/\epsilon\))} \\ & & \(B_{\text{exact}}\) & \(B_{1/8}\) & \(B_{1/4}\) & \(UB_{1/8}\) & \(UB_{1/4}\) & \(B_{\text{exact}}\) & \(B_{1/4}\) & \(B_{1/8}\) \\ \hline C-N1 & 0.100 & 3 & 3 & 4 & 3 & 4 & 1.000 & 0.486 & 0.949 \\ & 0.050 & 4 & 4 & 6 & 4 & 6 & 1.000 & 0.495 & 0.973 \\ & 0.010 & 8 & 8 & 11 & 9 & 13 & 1.000 & 0.499 & 0.996 \\ \hline C-N2 & 0.100 & 6 & 6 & 8 & 7 & 9 & 1.000 & 0.498 & 0.991 \\ & 0.050 & 8 & 8 & 11 & 9 & 13 & 1.000 & 0.499 & 0.996 \\ & 0.010 & 17 & 18 & 25 & 20 & 28 & 1.000 & 0.500 & 0.999 \\ \hline C-Exp & 0.100 & 2 & 3 & 3 & 3 & 4 & 1.000 & 0.490 & 0.955 \\ & 0.050 & 3 & 3 & 4 & 4 & 5 & 1.000 & 0.496 & 0.981 \\ & 0.010 & 7 & 7 & 9 & 8 & 10 & 1.000 & 0.499 & 0.997 \\ \hline C-Uni & 0.100 & 2 & 2 & 2 & 2 & 2 & 1.000 & 0.500 & 1.000 \\ & 0.050 & 2 & 2 & 3 & 2 & 3 & 1.000 & 0.500 & 1.000 \\ & 0.010 & 4 & 4 & 5 & 4 & 6 & 1.000 & 0.500 & 1.000 \\ \hline C-Bet & 0.100 & 1 & 1 & 2 & 1 & 2 & 0.641 & 0.418 & 0.641 \\ & 0.050 & 2 & 2 & 2 & 2 & 2 & 1.000 & 0.425 & 0.837 \\ & 0.010 & 3 & 3 & 5 & 4 & 5 & 1.000 & 0.495 & 0.976 \\ \hline C-Gam & 0.100 & 3 & 3 & 4 & 3 & 4 & 1.000 & 0.491 & 0.939 \\ & 0.050 & 4 & 4 & 6 & 4 & 6 & 1.000 & 0.496 & 0.983 \\ & 0.010 & 8 & 9 & 12 & 9 & 13 & 1.000 & 0.499 & 0.997 \\ \hline C-Chi & 0.100 & 4 & 4 & 5 & 4 & 6 & 1.000 & 0.495 & 0.974 \\ & 0.050 & 5 & 5 & 7 & 6 & 8 & 1.000 & 0.497 & 0.991 \\ & 0.010 & 11 & 11 & 15 & 12 & 16 & 1.000 & 0.499 & 0.998 \\ \hline C-Stu & 0.100 & 3 & 3 & 4 & 3 & 5 & 1.000 & 0.483 & 0.947 \\ & 0.050 & 4 & 4 & 6 & 5 & 6 & 1.000 & 0.494 & 0.967 \\ & 0.010 & 8 & 9 & 12 & 10 & 13 & 1.000 & 0.499 & 0.995 \\ \hline C-Log & 0.100 & 4 & 4 & 5 & 4 & 6 & 1.000 & 0.491 & 0.961 \\ & 0.050 & 5 & 5 & 7 & 6 & 8 & 1.000 & 0.496 & 0.983 \\ & 0.010 & 11 & 11 & 15 & 12 & 17 & 1.000 & 0.499 & 0.997 \\ \hline C-Lgn & 0.100 & 3 & 3 & 4 & 4 & 5 & 1.000 & 0.480 & 0.892 \\ & 0.050 & 4 & 4 & 6 & 5 & 7 & 1.000 & 0.492 & 0.960 \\ & 0.010 & 8 & 9 & 12 & 10 & 15 & 1.000 & 0.498 & 0.993 \\ \hline D-Bin & 0.100 & 7 & 7 & 12 & 15 & 21 & 0.979 & 0.445 & 0.979 \\ & 0.050 & 11 & 12 & 17 & 21 & 30 & 0.922 & 0.497 & 0.889 \\ & 0.010 & 27 & 27 & 33 & 47 & 66 & 0.970 & 0.452 & 0.931 \\ \hline D-Poi & 0.100 & 9 & 9 & 13 & 18 & 25 & 0.978 & 0.440 & 0.969 \\ & 0.050 & 12 & 13 & 19 & 25 & 35 & 0.938 & 0.437 & 0.880 \\ & 0.010 & 33 & 34 & 42 & 55 & 78 & 0.995 & 0.433 & 0.995 \\ \hline D-Geo & 0.100 & 20 & 20 & 29 & 44 & 63 & 0.996 & 0.497 & 
0.996 \\ & 0.050 & 29 & 29 & 41 & 63 & 88 & 0.995 & 0.500 & 0.995 \\ & 0.010 & 66 & 68 & 98 & 139 & 197 & 0.983 & 0.496 & 0.995 \\ \hline D-Neg & 0.100 & 10 & 10 & 15 & 21 & 30 & 0.986 & 0.496 & 0.992 \\ & 0.050 & 15 & 15 & 22 & 30 & 42 & 0.992 & 0.451 & 0.992 \\ & 0.010 & 41 & 42 & 55 & 66 & 93 & 0.961 & 0.470 & 0.948 \\ \hline \hline \end{tabular} \end{table} Table 4: Number of Intervals and Errors

For continuous distributions, we found that the difference between the measured numbers of intervals and the upper bounds was small. Specifically, the difference between the values of \(B_{\text{exact}}\) or \(B_{1/8}\) and \(UB_{1/8}\), as well as between \(B_{1/4}\) and \(UB_{1/4}\), was generally 2 or less. In the case of discrete distributions, the upper bounds were approximately twice the actual measurements.

#### 6.3.3 Discussion

We observe that many values in the error column of Algorithm \(B_{1/8}\) are close to 1. This suggests that, in many cases, the approximation formula (13) is valid and gives an almost tight upper bound on the approximation error. Furthermore, considering that the number of breakpoints output by \(B_{1/8}\) hardly differs from that of \(B_{\text{exact}}\), we conclude that \(B_{1/8}\) outputs near-optimal solutions. For continuous distributions, the obtained upper bounds \(UB_{1/4}\) and \(UB_{1/8}\) can be regarded as good approximations to the output values of the corresponding algorithms. In other words, \(UB\) can be considered an expression of the trade-off between error and the number of breakpoints, and users may refer to this trade-off when setting an appropriate acceptable error. For discrete distributions, on the other hand, we observe a discrepancy of roughly a factor of two between the upper bound on the number of intervals and the algorithm output. We speculate that this is because the worst-case analysis in the proof of Theorem 4.4, forced by the nature of discrete distributions, doubles the upper bound compared to the continuous case. However, the experimental results show that the number of intervals output by the algorithm is close to \(\frac{1}{2\sqrt{2}}\sqrt{\frac{b-a}{\epsilon}}\), the upper bound obtained for continuous distributions. Thus, for the discrete case, \(\frac{1}{2\sqrt{2}}\sqrt{\frac{b-a}{\epsilon}}\) may also serve as a practical guideline for the minimum number of intervals for a given error \(\epsilon\). Note that this guideline may not always align with the actual output: if we allocate one interval per possible value of the discrete random variable, the error becomes 0. Thus, even when the number of possible values of the random variable is small, if \(b-a\) is large, the actual number of intervals can be much smaller than the bound. Lastly, Table 5 summarizes these observations.

\begin{table} \begin{tabular}{c c c} \hline \hline algorithm & actual error & actual intervals \\ \hline \(B_{\text{exact}}\) and \(B_{1/8}\) & \(\epsilon\) & \(\frac{1}{2\sqrt{2}}\sqrt{\frac{b-a}{\epsilon}}\) \\ \(B_{1/4}\) & \(0.5\epsilon\) & \(\frac{1}{2}\sqrt{\frac{b-a}{\epsilon}}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Overview of Experimental Results for Allowable Error \(\epsilon\)

## 7 Conclusions

This study investigated the trade-off between error and the number of breakpoints when performing a piecewise linear approximation of \(f_{X}(s)=\mathbb{E}[\min(s,X)]\).
As a result, when conducting piecewise linear approximation based on the method proposed by Rossi et al. (2014), we obtained, through theoretical analysis and numerical experiments, \(\frac{1}{2\sqrt{2}}\sqrt{\frac{W}{\epsilon}}\) as an approximate upper bound on the minimum number of breakpoints required to achieve an error less than \(\epsilon\), where \(W\) is the width of the approximation interval. We also proposed efficient algorithms that obtain a piecewise linear approximation with a number of breakpoints close to this bound. These results provide a guideline for the error and number of breakpoints to consider when using a piecewise linear approximation of a general first-order loss function.

## Acknowledgment

This work was partially supported by JSPS KAKENHI Grant Number JP21K14368.
2309.08029
Uniform Distribution Technique for Neutrino Beam Scan Simulation
In Fermilab's neutrino facilities, such as Neutrinos at the Main Injector (NuMI) and the upcoming Long Baseline Neutrino Facility (LBNF), a proton beam strikes a high-power target, producing positively and negatively charged pions and kaons. Detailed simulations are needed to capture all particle interactions and beam propagation, from protons on target to short-lived mesons decaying into muons and neutrinos. The generation of individual beam simulations is a resource-intensive and time-consuming process. In this paper, we describe a method through which many simulation samples with high statistics can be generated to study the effects of a beam scan across a target for given beam configurations.
D. A. Wickremasinghe, S. Ganguly, K. Yonehara, R. Zwaska, P. Snopok, Y. Yu
2023-09-14T21:06:24Z
http://arxiv.org/abs/2309.08029v2
# Uniform Distribution Technique for Neutrino Beam Scan Simulation

###### Abstract

In Fermilab's neutrino facilities, such as Neutrinos at the Main Injector (NuMI) and the upcoming Long Baseline Neutrino Facility (LBNF), a proton beam strikes a high-power target, producing positively and negatively charged pions and kaons. Detailed simulations are needed to capture all particle interactions and beam propagation, from protons on target to short-lived mesons decaying into muons and neutrinos. The generation of individual beam simulations is a resource-intensive and time-consuming process. In this paper, we describe a method through which many simulation samples with high statistics can be generated to study the effects of a beam scan across a target for given beam configurations.

## I Introduction

The neutrino mass hierarchy and CP asymmetry in the lepton sector can be tested using long-baseline neutrino oscillation experiments that provide precision measurements [1; 2; 3; 4]. The neutrino flux at the Near Detector correlates strongly with proton beam misalignment: the alignment of the primary proton beam, target, and focusing horns greatly impacts the neutrino energy spectrum. The Neutrinos at the Main Injector (NuMI) [5] neutrino beam at Fermilab is generated by aiming high-energy protons at a carbon target. The NuMI beamline, depicted in Fig. 1, produces an intense muon neutrino beam for the NuMI neutrino experiments. A 120-GeV proton beam from the Main Injector collides with a fixed graphite target to produce neutrinos. Two focusing horns operating at a horn current of 200 kA focus the charged particles produced in proton interactions with target nuclei into a 675-m long, 2-m diameter cylindrical decay pipe. As the mesons decay, they can produce neutrinos and muons before being absorbed by the hadron absorber. High-energy muons produced in meson decays can pass through the muon monitors downstream of the hadron absorber. A world-leading neutrino beam will be produced by the upcoming Long Baseline Neutrino Facility (LBNF) [6], shown in Fig. 2. The baseline beamline design consists of a 1.2-MW, 120-GeV primary proton beam impinging on a cylindrical graphite target that measures 1.8 m in length and 1.6 cm in diameter. Three magnetic horns operating at 300 kA focus the hadrons produced in the target, which is supported inside horn 1. In addition to the target chase, there is a 194-m long helium-filled decay pipe and a hadron absorber. In both NuMI and LBNF, neutrinos or anti-neutrinos can be produced by operating the focusing horns in the forward or reverse current configuration. We can estimate the effects of changes in the beam parameters by scanning the primary proton beam across the target. Typical beam scan processes [7] involve systematically changing the proton beam position on target, the beam spot size, and the horn currents. In NuMI, these changes are visible in the response of each of the three muon monitors (MM1-MM3), which are identical to each other. Due to the intervening shielding between consecutive detectors, each detector sees a different range of muon momenta; note that these ranges are nested subsets rather than disjoint ranges. Each muon monitor in the NuMI facility consists of a \(9\times 9\) array of ionization chambers, as shown in Fig. 3. Each ionization chamber is filled with helium gas and contains two parallel-plate electrodes separated by a 0.3-cm gap. LBNF will monitor its beam quality through several detectors, including the Muon Monitor System (MuMS).
A NuMI-based approach will be used in the conceptual design of MuMS. For the purpose of establishing a correlation between data and simulations, beam scan data must be compared with simulation results. Combining data and simulation also provides valuable beam diagnostics.

Figure 1: Beamline layout for NuMI.

Figure 2: An illustration of the upstream portion of the LBNF neutrino beamline. A horn-protection baffle, three focusing horns, and the decay pipe are shown inside the target chase, from left to right (the beam direction).

High-statistics simulations with "realistic" Gaussian distributions for various beam and horn current settings generally require substantial computing resources and long simulation times. A Gaussian distribution characterizes the intensity profile of a Gaussian beam: the intensity is highest at the center and gradually decreases away from the center in the transverse direction (perpendicular to the beam axis). Simulation time and computing resources can be significantly reduced by simulating a single high-statistics sample with a uniform distribution for each beam parameter, where the uniform beam's intensity remains approximately the same from the center to the edges. A Gaussian weight can then be calculated and applied to the sample in post-processing. Monte Carlo samples with high statistics are essential for understanding neutrino beam variations from muon monitor data and simulations. Using a single high-statistics simulation sample with a uniform distribution, many Gaussian distributions can be created that are similar to those found in real beam scan studies. The proposed technique can generate a significant number of Monte Carlo (MC) data samples by varying the incident beam parameters and horn current settings. Section II provides a brief description of the simulation tools used in these studies. Section III presents the details of the proposed simulation technique. Section IV contains information about the simulation data that have been generated. In Sec. V we compare uniform beam simulation results with nominal beam simulation results and validate the uniform beam simulation technique. Section VI gives details on the computing resources needed for these simulations. Section VII illustrates the application of the technique with a few beam scan examples using NuMI simulations. In Sec. VIII, examples of the uniform beam simulation technique are illustrated for the LBNF simulation. Section IX highlights the usefulness of the technique for machine learning applications. Section X briefly summarizes the current state and the future applications of the proposed technique.

## II Simulation tools

For NuMI and LBNF neutrino studies with simulation, g4numi and g4lbnf provide end-to-end Monte Carlo simulations based on Geant4 [8; 9]. The simulations account for the beamline geometry, including the targets, focusing horns, and decay volumes. Both secondary meson production and subsequent interactions with beamline components are considered as part of the full sequence of particles produced from the first collisions between protons and the target. When a neutrino is produced in a particle decay, information about its momentum, position, and ancestry is stored in the "dk2nu" format. A dk2nu tuple is essentially a list of neutrinos generated by a beam simulation; it is a flux ntuple format and library with methods for analyzing flux ntuples (e.g., location weights).
DUNE, NOvA, and MINERvA are some of the Fermilab experiments using this ntuple format. Neutrinos are associated with dk2nu objects that contain detailed information about their kinematics and their hadron ancestors. Based on the ancestry information provided in the g4numi/g4lbnf output, we extract the momentum and position of each pion/kaon that decayed to a neutrino. The simulation begins with a 120-GeV primary proton beam; in the NuMI beamline, the origin of the coordinate system is the intersection of Horn 1's front face with the primary proton beam trajectory. The Z-axis points along the beamline and the Y-axis points vertically up. Geometrical features such as the target hall, decay pipe, absorber, and muon monitors are included in the simulation, which provides the location and kinematics of each decay into a neutrino. All particles in g4numi have a default energy threshold of 0.02 GeV, and the simulation cuts off protons outside the beamline. In the muon monitor simulation, muons with energies less than 1 GeV, as well as other particles, are omitted. Using a 1-GeV muon cut for the second stage of the simulation (i.e., the muon monitor simulation) saves computing time and storage. The energy threshold imposed by the hadron absorber in front of MM1 is about 5 GeV, so cutting muons below 1 GeV in the decay pipe is safe. The g4lbnf simulation uses the same tracking cuts as g4numi.

## III Simulation technique

In this section, we first describe a technique to reproduce NuMI beam scans with simulated data. In this study, the pions generated in the g4numi simulation are over-sampled to decay into multiple muons. Each muon is then simulated through the absorber and the muon monitors.

Figure 3: Schematics of the NuMI muon monitors.

In order to predict the muon flux at the muon monitors for selected beam and horn current settings, the output of the g4numi simulation is combined with the output of the muon monitor simulation. Using multiple muons as decay products is intended to decrease the statistical uncertainty in the analysis of each muon monitor pixel. The simulation samples used here have all been generated in the forward horn current (FHC), or neutrino, mode, in which the horn current flows through the inner conductor in the same direction as the beam. As the first step of the proposed technique, a single uniformly distributed simulation data sample is generated for the selected beam scan variable. Using the real beam width and beam positions, the uniformly distributed sample is then used to generate Gaussian samples for the selected Gaussian functions. With this method, we can run the simulated scan through a set of beam variable values comparable to the actual beam scans. In this process, the weights \(w_{i}\) are calculated from selected two-dimensional normalized Gaussian functions, as shown in Eq. (1): \[w_{i}=\frac{1}{2\pi\sigma_{x}\sigma_{y}}\cdot\exp\left\{-\frac{(x_{i}-\mu_{x})^{2}}{2\sigma_{x}^{2}}-\frac{(y_{i}-\mu_{y})^{2}}{2\sigma_{y}^{2}}\right\} \tag{1}\] Here \(w_{i}\) is the calculated weight at the \(i\)-th beam position \((x_{i},y_{i})\). The beam spot size (\(\sigma_{x}\) and \(\sigma_{y}\)) and the beam centroid position at the target (\(\mu_{x},\mu_{y}\)) are fixed for the selected beam settings. These weights are applied when filling the histograms of the observable variables.
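A minimal NumPy sketch of this weighting is shown below; it is our own illustration, with the \(\pm 0.5\) cm throw window chosen to match the \(1.0\times 1.0\) cm uniform-beam limits described in Sec. IV.

```python
import numpy as np

def gaussian_weights(x, y, mu_x, mu_y, sigma_x, sigma_y):
    """Eq. (1): per-event weight for uniformly thrown beam positions (x_i, y_i)."""
    norm = 1.0 / (2.0 * np.pi * sigma_x * sigma_y)
    return norm * np.exp(-(x - mu_x) ** 2 / (2 * sigma_x ** 2)
                         - (y - mu_y) ** 2 / (2 * sigma_y ** 2))

rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, size=1_000_000)   # uniform throws on the target [cm]
y = rng.uniform(-0.5, 0.5, size=1_000_000)

# One Gaussian "beam slice" (mu = 0.0 cm, sigma = 0.15 cm) from the same throws:
w = gaussian_weights(x, y, 0.0, 0.0, 0.15, 0.15)
hist, edges = np.histogram(x, bins=100, weights=w)   # weighted observable histogram
```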
The steps involved in the proposed simulation technique are as follows:

* Prepare uniformly distributed simulation samples of a selected beam scan variable over a selected variable range.
* Calculate the event weights according to a selected Gaussian function.
* Apply the calculated weights to the observable variables.
* Repeat the weight calculation while varying the Gaussian mean through the selected beam scan range.

Every event from the uniformly distributed sample contributes to each selected Gaussian sample according to its weight. To demonstrate the technique, a uniformly distributed sample of protons on the target, shown in Fig. 4, is used to generate possible interactions with matter. From this uniformly distributed sample, a two-dimensional distribution of horizontal and vertical proton beam positions on the NuMI target is drawn, as shown in Fig. 5, for each recorded beam interaction that has a neutrino candidate at the downstream near detector. The recorded events include beam interactions with the carbon target, the aluminum baffle, and downstream material.

Figure 4: Random throws along the horizontal beam positions to generate a uniform proton beam sample.

Figure 5: Two-dimensional horizontal vs. vertical proton beam positions from the uniform distribution sample, for recorded beam interactions on the NuMI target that have a neutrino candidate at the downstream neutrino detectors.

The same exercise has been performed with the g4lbnf simulation in the forward horn current mode to draw the two-dimensional horizontal vs. vertical proton beam positions on the LBNF target, as shown in Fig. 6. A detailed description of the optimized LBNF beam design, performed using the g4lbnf simulation, is documented in the conceptual design report for the optimized LBNF beamline [6]. As shown in Table 1 below, g4numi and g4lbnf use different nominal beam spot sizes. These values have been chosen in all our studies as per the NuMI and LBNF specifications, since they represent the recommended beam spot sizes for high-power beam operations at the respective beamlines. Next, the Gaussian weights for different proton beam configurations are calculated using Eq. (1). The top plot in Fig. 7 shows two examples of the calculated weights for a given beam setting with mean \(\mu=0.0\) cm and widths \(\sigma=0.10\) cm and \(\sigma=0.15\) cm. The bottom plot in Fig. 7 shows two examples of the calculated weights with the beam \(\sigma\) fixed at 0.15 cm and \(\mu\) varying between 0.0 cm and 0.2 cm. The third step is to weight each event in the distribution to select the corresponding proton beam interactions horizontally and vertically, as shown in Fig. 8. The five Gaussian distributions of the proton beam in the horizontal direction shown in Fig. 8 are drawn by changing the proton beam mean \(\mu\) from \(-0.2\) cm to 0.2 cm while keeping the Gaussian width \(\sigma\) fixed at 0.15 cm. The vertical proton beam distribution is asymmetric since the target geometry is asymmetric in y. In each Gaussian profile, the weights are distinct. This demonstrates that, starting from a single uniformly distributed sample of the proton beam in both the horizontal and vertical directions for which there is a recorded beam interaction, as many Gaussian distributions as needed can be selected in either the horizontal or the vertical proton beam direction. In the simulation, pions decay into multiple muons, which are propagated through the absorber and the muon monitors.
The corresponding weights have been applied to the observed muon flux from the Gaussian proton beam slice with \(\mu=0.0\) cm to obtain the two-dimensional event distributions on the muon monitors, as shown in Fig. 9. The uniform beam simulation allows us to generate as many Gaussian samples as we require; it is possible to draw samples with means \(\mu_{i}\) ranging from \(-\infty\) to \(+\infty\). Figure 10 illustrates an example case with three Gaussian samples with means \(\mu_{-1}\), \(\mu_{0}\), and \(\mu_{+1}\). In this example, \(\mu_{-1}=-0.1\) cm, \(\mu_{0}=0.0\) cm, and \(\mu_{+1}=+0.1\) cm; for all three samples, the width \(\sigma\) has been fixed at 0.15 cm. Our reference Gaussian is the Gaussian sample with mean \(\mu_{0}\).

\begin{table} \begin{tabular}{l|c} \hline & Beam spot size (cm) \(\sigma\) \\ \hline g4numi & 0.15 \\ \hline g4lbnf & 0.27 \\ \hline \end{tabular} \end{table} Table 1: Nominal beam spot size values for the g4numi and g4lbnf simulations.

Figure 6: Two-dimensional horizontal vs. vertical proton beam positions from the uniform distribution sample, for recorded beam interactions on the LBNF target that have a neutrino candidate at the downstream neutrino detector.

Figure 7: Examples of calculated weights for different beam settings with varying beam width (top) and mean (bottom).

Figure 8: The horizontal (top) and vertical (bottom) proton beam positions for recorded beam interactions that have a neutrino candidate at the downstream neutrino detectors.

Figure 9: The observed two-dimensional muon distribution on NuMI muon monitor 1 for a given beam \(\mu=0.0\,\mathrm{cm}\) and \(\sigma=0.15\,\mathrm{cm}\).

Figure 10: Two selected Gaussian profiles from the uniform simulation, showing the volume of overlap between them.

By using the uniform beam method, we start with a static pool of randomly thrown events interacting with matter to produce neutrinos, and Gaussian samples are generated by applying weights to events selected from the same static pool. This reuse is not a concern, because we give each event a different statistical weight, and these weights determine with what probability the events contribute to the Gaussian samples. Within one Gaussian sample, no two events will have the same statistical weight, as the weight is calculated using Eq. (1). In the case of two Gaussian samples, there might be a scenario in which the same event is shared with the same statistical weight between the two samples, as shown in Fig. 10. In Fig. 10, plotting two Gaussian samples with \(\mu_{-1}=-0.1\,\mathrm{cm}\) and \(\mu_{+1}=+0.1\,\mathrm{cm}\) creates an overlapping volume between the two samples, shown by the shaded blue region. It is possible for two events belonging to these two Gaussian samples to have the same weight in this overlap region, but they are really two different events, since they each have a unique beam position and will interact with matter differently. Only the event at proton beam position \(x_{0}\), shown by the black dotted line, will have the same interactions; it is only this event that is shared between the Gaussian with \(\mu_{-1}=-0.1\,\mathrm{cm}\) and the Gaussian with \(\mu_{+1}=+0.1\,\mathrm{cm}\). Within one generated Gaussian sample (for example, the Gaussian with mean \(\mu_{-1}\)), we use the same event only once. Equation (2) gives the ratio \(R_{i}\) of the overlapping volume between a Gaussian sample with mean \(\mu_{i}\) and a given beam width \(\sigma\) and the reference Gaussian sample with mean \(\mu_{0}\): \[R_{i}=\frac{2\int_{-\infty}^{x_{0}}\mathcal{N}(\mu_{i},\,\sigma^{2})dx}{\int_{-\infty}^{+\infty}\mathcal{N}(\mu_{0},\,\sigma^{2})dx} \tag{2}\] Figure 11 plots \(R_{i}\) as a function of \(\mu_{i}\), showing that the overlapping volume is maximal when the mean of the \(i\)-th Gaussian aligns with the mean of the central Gaussian slice (with \(\mu_{0}\)). As the mean \(\mu_{i}\) moves away from \(\mu_{0}\), the overlapping volume between the two Gaussian samples decreases according to Eq. (2): the larger \(|\mu_{i}-\mu_{0}|\), the smaller the overlap, and hence the smaller the weight and statistical contribution of the event shared between the two samples. Using Figure 11, we can determine the overlap between two Gaussian samples at a given separation of their means. Separating the Gaussian means by \(2\sigma\), for example, reduces the overlap volume from \(100\%\) to about \(30\%\), compared to the case when the samples have the same mean.
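For equal-width Gaussians, the two densities cross at \(x_{0}=(\mu_{0}+\mu_{i})/2\), so Eq. (2) reduces to a closed form in the normal CDF. The sketch below (our own illustration, assuming SciPy) reproduces the \(\sim 30\%\) overlap quoted for a \(2\sigma\) separation:

```python
import numpy as np
from scipy.stats import norm

def overlap_ratio(mu_i, mu_0=0.0, sigma=0.15):
    """Eq. (2) for equal-width Gaussians: the densities cross at
    x0 = (mu_0 + mu_i) / 2, giving R_i = 2 * Phi(-|mu_i - mu_0| / (2 * sigma))."""
    return 2.0 * norm.cdf(-abs(mu_i - mu_0) / (2.0 * sigma))

print(overlap_ratio(0.0))          # 1.0: full overlap when the means coincide
print(overlap_ratio(2 * 0.15))     # ~0.32: a 2-sigma separation leaves ~30%
```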
## IV Simulation data

Validation and demonstration of the proposed technique have been performed by generating several data samples. We generated a uniformly simulated beam sample of 1 billion protons on target (POT) using FermiGrid; this sample was the largest we were able to produce within the limit of available storage per user. FermiGrid is a collection of computing resources that the Fermilab Computing Division makes available through grid protocols. It would be possible to create multiple 1-billion-POT uniform beam samples by utilizing supercomputers. While real measurements utilize all three muon monitors to measure the integrated flux of muon beams that exceed \(5\,\mathrm{GeV}\) and two other energy thresholds separated by at least \(7\,\mathrm{GeV}\), simulated data samples could only be generated for muon monitors 1 and 2, as computing resources are limited; simulating muon monitor 3 responses with large statistics will require supercomputer resources. Each Gaussian beam sample of 250 million POT was generated at a different beam position while maintaining a large beam spot size of \(0.15\,\mathrm{cm}\). Based on the beam spot size (\(\sigma=0.15\,\mathrm{cm}\)) and the cross-sectional area of the target and the baffle, the vertical and horizontal limits of the uniform beam simulation data are chosen to be \(1.0\times 1.0\,\mathrm{cm}\) perpendicular to the beam direction. The uniform beam window was chosen to cover approximately \(7\sigma\) of the Gaussian beam, which allowed us to include the interactions with matter of protons in the Gaussian beam tail. The target and a significant portion of the baffle are covered by this window. In order to prepare the horizontal and vertical beam scan data samples, Gaussian beams with mean values in the range \([-0.2,0.2]\,\mathrm{cm}\) are generated from the uniform beam simulation data. These beams cover a minimum of \(5\sigma\) of the beam profile for each beam configuration. The randomly thrown protons in the uniform beam simulation interact with matter to produce secondary particles and are recorded only if a hadron decay yields a neutrino candidate at the near detector.
For each proton beam position along the horizontal direction, Gaussian samples were created by applying weights with a beam width of \(0.15\,\mathrm{cm}\). As a result, every Gaussian sample contains \(\sim\)250 million POT. This is estimated from the 1 billion POT generated in the uniform beam sample: by applying the Gaussian weight to random throws of \(N\) POT, we calculated statistically that one Gaussian beam would select \(\sim\frac{1}{4}\) of the starting \(N\) POT. In order to verify the calculation, we generated one uniform beam profile with 1 billion POT from which we drew Gaussian beam profiles by applying weights; our calculation shows that each of these Gaussian beam profiles will contain 250 million POT. Following this, we compared these Gaussian beam profiles with Gaussian beams from nominal beam simulations, each containing 250 million POT with the same beam width of \(0.15\,\mathrm{cm}\). Therefore, we generate a uniform simulation sample just once, with four times as many POT as a nominal simulation sample. On the other hand, we would have to generate a new Gaussian sample with 250 million POT every time if we were to generate a new reference simulation sample with different beam parameter settings. Figure 12 shows a two-dimensional distribution of total hadron momentum vs. hadron angle at decay for a Gaussian slice of 250 million POT with a beam mean \(\mu\) = 0.0 cm; this Gaussian slice was created by weighting the uniform simulation sample of 1 billion POT. The same two-dimensional distribution of total hadron momentum vs. hadron angle at decay is plotted for a Gaussian slice of 250 million particles with a beam mean \(\mu\) = 0.0 cm using the nominal simulation, as shown in Fig. 13. These two plots only show events for which muon candidates with energies greater than 5 GeV are present in muon monitor 1.

Figure 12: Distributions of hadron angle and hadron momentum in the longitudinal direction at decay with uniform simulation for a Gaussian beam with \(\mu\) = 0.0 cm.

Figure 13: Distributions of hadron angle and total hadron momentum at decay with nominal simulation for a Gaussian beam of \(\mu\) = 0.0 cm.

## V Validation

Validation studies of the proposed beam simulation technique are an important requirement to prove its capability of reproducing the nominal simulation results. Therefore, two studies with the NuMI simulation have been carried out to validate the uniform beam technique against the nominal simulation:

* Testing statistical reproducibility.
* Testing kinematics reproducibility.

In the validation studies, the uniform beam technique and multiple decays of hadrons were combined to increase the statistics of the muon events in the simulation. The uniqueness of the uniform beam simulation is that we can draw as many Gaussian beam profiles as needed, horizontally or vertically.

### Testing Statistical Reproducibility

We generated five Gaussian beams using the nominal simulation, each containing 250 million POT, and one uniform simulation containing a total of 1 billion POT to compare beam profiles. We drew five Gaussian-weighted proton beam samples from the uniform simulation sample with beam centers at -0.2 cm, -0.1 cm, 0 cm, 0.1 cm, and 0.2 cm along the horizontal direction and compared them to the nominal Gaussian beams.

Figure 14: Comparison of the proton beam profiles drawn from uniform and nominal simulations in the horizontal direction with the mean spanning from \(-0.2\) to \(+0.2\) cm. All plots shown are area normalized.

Figure 14 shows a comparison of the proton beam profiles from the uniform beam simulation (cyan) and the nominal beam simulation (red) at five different beam positions along the horizontal direction.
Figure 12: Distributions of hadron angle and hadron momentum in the longitudinal direction at decay with uniform simulation for a Gaussian beam with \(\mu\) = 0.0 cm.

Figure 13: Distributions of hadron angle and total hadron momentum at decay with nominal simulation for a Gaussian beam of \(\mu\) = 0.0 cm.

Figure 14: Comparison of the proton beam profiles drawn from uniform and nominal simulations in the horizontal direction with the mean spanning from \(-0.2\) to \(+0.2\) cm. All plots shown are area normalized.

Figure 14 shows a comparison of the proton beam profiles from the uniform beam simulation (cyan) and the nominal beam simulation (red) at five different beam positions along the horizontal direction. The beam profiles have been plotted only for the incident protons that have corresponding muon candidates from the decay of hadrons created by the proton-matter interactions. The uniform and nominal simulations agree for all five beam profiles. The bottom plot shows the ratio of the two beam profiles for the mean of 0.0 cm based on the uniform and nominal simulations. A ratio of one confirms excellent agreement.

Figure 15: NuMI muon monitor 1 responses from two different simulations: uniform (top) and nominal (middle). Ratio of the muon monitor 1 responses between the uniform and the nominal simulations (bottom).

Figure 15 illustrates an example of the NuMI muon monitor 1 response for a given Gaussian beam generated from the uniform beam simulation. The same Gaussian beam was also generated using the nominal beam simulation, and the ratio between the two simulations for each pixel is also shown. We plotted muon monitor responses only for incident protons whose secondary hadrons decay into muon candidates. Statistical fluctuations can explain a maximum difference of \(\sim\)4% between the two simulations in some of the edge pixels of a muon monitor. We demonstrated that statistical fluctuations affect muon monitor pixel responses even within the same nominal simulation: we generated five different nominal simulation samples using five unique random seeds and then calculated the ratio of the muon monitor 1 responses between two nominal simulation samples with different seeds. Even within the same nominal simulation, some edge pixels fluctuated by at most \(\sim\) 4% depending on the random seed, which is consistent with the difference in edge pixel response between the uniform and nominal simulations on muon monitor 1. Hence, the \(\sim\) 4% difference is not an artifact of the uniform simulation technique but rather stems from statistical fluctuations. Increasing statistics will reduce this fluctuation. The top distribution of Figure 16 shows muon monitor 1 pixel responses for two nominal simulation samples created with different random seeds, while the bottom distribution shows the ratio between the two simulations. There is a maximum difference of \(\sim\) 4% in one of the corner pixels on the left edge. Finally, Fig. 17 illustrates the NOvA Near Detector neutrino energy spectrum plotted from the nominal and uniform simulations. As can be seen from the bottom plot, the ratio of the two spectra stays close to 1 over the energy range of interest. ### Testing Kinematics Reproducibility We have tested the reproducibility of the kinematics of the secondary particles after applying the uniform beam technique. Pion decay kinematics and the decay pipe size determine the size of the muon beam at the muon monitors. In this study, we have used 0.15 cm width Gaussian beam samples, with the mean set at \(\mu=0.0\) cm, from the uniform and nominal simulations to compare the muon momentum distributions at the hadron decay along the \(x\), \(y\), and \(z\) directions at muon monitor 1. Figure 18 shows the distributions of the momenta of the muons produced at the hadron decay.
In the red distributions, the momenta are plotted using the uniform beam simulation, whereas in the blue distributions, they are plotted using the nominal simulation. The muon production momenta are consistent between the uniform and nominal simulations, as validated by the ratio of unity between the momentum distributions from the two simulations. ## VI Computing resources Using uniform beam simulations to generate large simulation samples considerably reduces the amount of computing resources needed to run the simulations. Table 2 compares the computing resources used to generate nominal Gaussian and uniform beam simulation samples. The table shows that running a nominal simulation to generate one random sample takes about an hour, whereas running a uniform simulation to generate four times the statistics takes almost the same time. The fraction of protons interacting with matter is smaller in the uniform beam simulation than in the nominal simulation. For uniform simulations, we create the Gaussian beams in post-processing. Using the nominal simulation, it would take approximately 1000 hours of CPU time to generate 1000 random samples. If the proposed weighting technique is instead applied to a single high-POT uniform beam simulation sample, the CPU time required to generate 1000 random Gaussian samples can be significantly reduced. ## VII Applications of the technique Simulation samples were generated for selected proton beam position configurations with different horn current settings to test the uniform beam simulation technique.

Figure 16: NuMI muon monitor 1 responses from nominal simulations with two different seeds (top and middle, respectively). Ratio of the muon monitor 1 responses between nominal simulations with two different seeds (bottom).

Figure 17: Neutrino energy spectra at the NOvA Near Detector from nominal and uniform beam simulations (top). Ratio of the neutrino energy spectra at the Near Detector between nominal and uniform simulations (bottom).

This section presents the simulation technique for target scans by generating uniform beams along horizontal and vertical positions at the target for horn currents of 200 kA and 180 kA. The simulated samples are generated using the combined g4numi and muon monitor packages. ### Total Muon Scan With and Without the Target To check the alignment of the target-baffle system, horizontal and vertical target scans are generally performed with the primary proton beam at the beginning of a run period. To align the horn, the target is removed and the beam is scanned along the horn cross-hair to observe the responses on the hadron monitor. A similar scan is presented here using our proposed technique to generate 100 Gaussian beam profiles with \(\sigma=0.15\) cm along the horizontal axis from -0.8 cm to +0.8 cm. The demonstration has been performed with and without the target in the beamline, with the horn current set to +200 kA for this study. Muon monitor 1 shows a variation in the total number of muons as a function of the beam position on the target. Fig. 19 shows the total number of muons detected at muon monitor 1 during the beam scan with (left) and without (right) the target. By simulating the interactions of the beam with the baffle, target, and other beamline materials, we are able to observe muons at muon monitor 1.
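A self-contained toy sketch of the 100-point target scan described above follows: a single set of uniform throws is reweighted into 100 Gaussian beam profiles. The sample size and the per-proton muon yield below are stand-ins, not g4numi output.

```python
# Toy target scan: sweep the Gaussian weighting mean across the uniform sample.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1.0, 1.0, 500_000)        # stand-in for uniform proton throws (cm)
muon_yield = np.exp(-x**2)                 # stand-in for per-proton muon yield

scan_means, sigma = np.linspace(-0.8, 0.8, 100), 0.15
for mu in scan_means[:3]:                  # first few scan points as a demo
    w = np.exp(-0.5 * ((x - mu) / sigma) ** 2)        # Gaussian weight (shape only)
    mean_yield = np.sum(w * muon_yield) / np.sum(w)   # weighted mean per proton
    print(f"mu = {mu:+.2f} cm -> average muon yield per proton: {mean_yield:.3f}")
```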
A similar study of the target scan with the NuMI beam has been described in Chapter 10, "Beam Monitoring Measurements," of _The Accelerator Systems and Instrumentation for the NuMI Neutrino Beam_ [10]. ### Horizontal and Vertical Beam Scan To demonstrate horizontal and vertical beam scans, we have generated Monte Carlo samples, each with 1 billion protons on target (POT), for the +200 kA horn current setting. We generate horizontal beam scan samples by throwing protons uniformly along the horizontal axis from -1.0 cm to +1.0 cm while keeping the vertical beam as a random Gaussian with a 0.15 cm beam width and zero centroid. In order to generate a vertical beam scan sample, the axis selections for the uniform beam and the Gaussian beam are interchanged. Each iteration uses a centroid separation of 0.02 cm and a Gaussian width of 0.02 cm. The Gaussian width of the proton beam has been set to 0.15 cm for the current scan.

\begin{table}
\begin{tabular}{l|c|c|c}
\hline
 & POT & \(\langle\)CPU Time\(\rangle\) (min) & \(\langle\)Memory\(\rangle\) (MiB) \\
\hline
Nominal & 250 M & 59 & 51.2 \\
Uniform & 1000 M & 67 & 52.7 \\
\hline
\end{tabular}
\end{table}
Table 2: A comparison of the average CPU time and memory usage required to generate one sample of uniform beam simulation versus one sample of nominal beam simulation.

Figure 18: A comparison of the muon momenta along the X (top), Y (middle), and Z (bottom) axes produced at the hadron decay with uniform (red) and nominal (blue) simulations for a Gaussian beam of \(\mu\) = 0.0 cm.

To verify the uniform beam simulation technique, we compare the results of the uniform simulation beam scan with the results of the nominal simulation beam scan. Fig. 20 shows the responses of the muon flux centroid along the horizontal (top) and vertical (bottom) axes at muon monitors 1 (red) and 2 (blue) as a function of the proton beam positions for the +200 kA horn current, plotted with the uniform beam simulation. The equivalent plots with the nominal simulation are shown for muon monitor 1 (green) and 2 (purple). The muon flux centroid on each muon monitor is calculated by taking the average of the distribution projected on the horizontal axis from the ionization chamber signals of the 81 muon monitor pixels. The distributions have been fitted to a linear function, and the fitted slopes from the uniform beam simulation are compared with the slopes calculated using the nominal simulation. Another example of the muon monitor response to the primary beam position variation is the ratio of the muon flux at muon monitor 1 for the proton beam at \(\mu\) = -0.2 cm to that for the beam at \(\mu\) = +0.2 cm in the horizontal direction, as shown in Fig. 21. By observing this ratio, we can see the muon flux behavior at muon monitor 1 due to the horn focusing mechanism at work [11].
The same flux behavior is observed in both the uniform and nominal simulations. It can be seen that the muon flux moves from right to left on muon monitor 1 as the primary beam moves from left to right. This is because hadrons are over-focused at muon monitor 1, which produces the negative slope shown in Fig. 20 (top). To repeat the beam scan studies for a different horn current setting of +180 kA, we repeat the same analysis procedure described earlier. Fig. 22 shows the muon flux centroid along the horizontal (top) and vertical (bottom) projections at muon monitor 1 as a function of the proton beam positions for a horn current of +180 kA, using both the uniform beam and nominal simulations. The two horn current settings produced consistent slopes in both the horizontal and vertical scans of both simulations. ### Beam Spot Size Scans In order to establish a correlation between neutrino and muon beam profiles, it is essential to understand how the beam spot size affects the muon monitor observations.

Figure 21: Ratio of muon flux in each pixel on muon monitor 1 for proton beams at \(\mu\) = -0.2 cm and \(\mu\) = +0.2 cm, respectively, in the horizontal direction for a +200 kA horn current. Uniform beam simulation (top). Nominal beam simulation (bottom).

Figure 22: The uniform simulation estimates of the muon flux centroid along the horizontal (top) and vertical (bottom) projections on muon monitor 1 for the 180 kA horn current setting as a function of the proton beam position at the target.

With the uniform beam simulation technique, we have produced Gaussian beams by gradually changing the Gaussian width \(\sigma\). We have generated a simulation sample with 1 billion POT of uniformly distributed proton throws along the horizontal and vertical directions ranging from -1.0 cm to +1.0 cm. We have studied the muon flux distributions at muon monitor 1 for beam spot sizes of 0.08 cm and 0.15 cm. Fig. 23 shows the beam spot sizes of 0.08 cm (top left) and 0.15 cm (top right) and the corresponding muon monitor responses in the contour plots (bottom row). Fig. 24 shows the muon flux centroid estimation for different proton beam positions with changing beam spot sizes. According to these observations, the muon flux centroids and standard deviations do not change as the beam spot size changes. The simulation studies indicate that changing the beam spot size will not affect the muon monitor measurements. ## VIII Application in LBNF simulation Neutrino beam instrumentation (NBI) for LBNF provides measurements of the secondary beam for commissioning, alignment, monitoring, and hardware protection. This instrumentation complements the primary beam instrumentation as well as the neutrino detectors. It includes a hadron alignment detector system (HaDeS) for measuring the remaining secondary particles in the Decay Pipe and a muon monitor detector system (MuMS) for monitoring muons downstream of the Hadron Absorber.
HaDeS is based on the NuMI hadron monitor [7], which will be used to align the beamline components by analyzing the residual beam after scanning the primary beam across the target, baffle, and horn. The NuMI hadron monitor consists of a \(7\times 7\) array of parallel-plate ionization chambers covering an area of 1 m\({}^{2}\). HaDeS will implement LBNF-specific modifications in the form of higher channel counts to account for smaller pixel sizes compared to NuMI. During beam commissioning, beam alignment, and the diagnosis of failures, HaDeS will be lowered into the beam for low-intensity alignment scans. The MuMS serves a number of purposes, including monitoring long-term target degradation under beam irradiation by comparing muon fluxes at different energies. In addition to monitoring the condition of the target in detail, this monitor bypasses the need to wait for neutrinos or other tertiary beam monitors. Following NuMI's lead, MuMS will use detector technology similar to that of HaDeS. The LBNF beam simulation (g4lbnf) is a dedicated Geant4 beamline simulation based on the optimized design of the beamline. In the nominal g4lbnf simulation, a Gaussian beam is used with the beam \(\sigma\) set at 0.27 cm. We have implemented the uniform beam simulation technique in g4lbnf and drawn five Gaussian distributions for the proton beam in the horizontal direction from the uniform beam by changing the proton beam mean \(\mu\) from -0.2 cm to +0.2 cm while keeping the Gaussian width \(\sigma\) fixed at 0.27 cm, as shown in Fig. 25 (top). The plots shown below as a demonstration of the uniform method have been generated using a simulation sample that contains 1000 jobs, each containing 10000 POT.

Figure 23: The beam spot size selections: \(\sigma=0.08\) cm (top left) and \(\sigma=0.15\) cm (top right) and corresponding muon monitor responses for \(\sigma=0.08\) cm (bottom right) and \(\sigma=0.15\) cm (bottom left).

Figure 24: The muon flux centroid estimated along the horizontal projection on muon monitors 1 and 2 for 200 kA for two different beam spot sizes: 0.15 cm and 0.08 cm.

This is only an example; POT can be simulated in different amounts, depending on the needs of the user. The bottom two plots show the distributions of the charged particles in a virtual detector at the location of HaDeS for the Gaussian beams with \(\mu\) = -0.2 cm and 0.0 cm, respectively. Only those events for which a beam interaction has been recorded are included in the Gaussian distribution. During beam-based alignment, a low-intensity beam will be scanned across the face of HaDeS to determine the beam direction. To determine whether the pixel map of the hadron monitor is accurate, we can compare the HaDeS distributions from simulation with data. In Fig. 26, the top plot illustrates how the Gaussian distributions are drawn horizontally after applying weights, only for the events for which beam interactions are recorded. On a virtual detector placed downstream of the absorber at the location of MuMS in muon alcove 1, the bottom plot shows the muon beam profiles after applying the same corresponding weights for each of the Gaussian beams. Each MuMS system detects muons that are focused differently depending on the energy of the parent pion. The two-dimensional event distribution on the alcove 1 tracking plane from a Gaussian proton beam slice with mean \(\mu\) = 0.0 cm is shown in Fig. 27 (top).
The bottom plot of Fig. 27 shows the muon flux centroid estimation for a horizontal proton beam scan at the alcove 1 location; the horn current is 300 kA. The ability to perform beam scan studies using the uniform beam simulation technique will be valuable for understanding the impacts of changing LBNF beam parameters on the beam response measurements. ## IX Application of Machine Learning: Predict Neutrino Flux The application of machine learning (ML) algorithms is becoming increasingly critical in particle accelerators and beamlines. Due to their efficiency and flexibility, artificial neural networks are ideally suited for complex simulations. A multi-layered artificial neural network (ANN) consists of an input layer, an output layer, and multiple hidden layers, each layer containing a number of nodes that are connected by randomly initialized weights. Every node in the layers produces an output according to its "activation function". The neural networks are then trained through repeated iterations to reduce the estimated errors in the output by adjusting the weights of the connections between the nodes. Besides modeling nonlinear behavior, neural networks can also adapt to changes in the system over time. The development of machine learning algorithms for beam quality monitoring, anomaly detection, and neutrino beam systematic studies will be useful not only for Fermilab's current neutrino beamlines but also for future beamlines such as LBNF. In order to understand some of the rare "anomaly" scenarios in the NuMI beamline, we need to generate simulated data. For example, the g4numi simulation is used to generate simulated data for ML applications in the NuMI beamline. We can predict some "hard-to-measure" beam parameters, such as the horn tilt angle and the target offset, with ML algorithms based on simulations.

Figure 25: The horizontal proton beam positions (top) for recorded beam interactions that have a neutrino candidate at the downstream neutrino detectors. Beam \(\sigma\) = 0.27 cm. Distribution of charged particles at the LBNF HaDeS location (middle) for a Gaussian proton beam with \(\mu\) = -0.2 cm. Equivalent distribution (bottom) for a Gaussian proton beam with \(\mu\) = 0.0 cm.

The muon monitors are sensitive to changes in the primary beam and variations in the horn current. Machine learning applications can be built to monitor the quality of the NuMI neutrino beam based on the unique responses of the muon monitors. Fig. 28 illustrates how different ML algorithms can be used to predict different beam parameters and neutrino fluxes based on muon monitor pixel information. A uniform beam simulation sample was used for the training data, while three independent uniform beam simulation samples were used as the test data. There are 6000 points in the training data sample, each containing beam parameters, muon events at the pixels, and the neutrino flux at the near detector. Muon monitor observations depend on a number of parameters, so taking pixel information from the muon monitor and disentangling it from other correlations is important. The simulated data should be divided into training and test sets. To understand correlations between muon monitor observations and beamline incidents, we need MC samples with large statistics, \(\sim\)300 million POT in each sample. In the past, the bottleneck has been obtaining large simulated samples by running the nominal g4numi simulation, due to the lengthy sample preparation process.
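A minimal sketch of the regression setup described above, predicting a beam parameter from the muon monitor pixel signals with a train/test split, is shown below. The array names, shapes, and synthetic data are illustrative assumptions, not the analysis code used here.

```python
# Hypothetical regression sketch: beam parameter from muon monitor pixels.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_pixels = 6000, 162              # 81 pixels each from muon monitors 1 and 2
X = rng.normal(size=(n_samples, n_pixels))   # stand-in for simulated pixel responses
true_w = rng.normal(size=n_pixels)
y = X @ true_w + 0.01 * rng.normal(size=n_samples)  # stand-in for e.g. beam position

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print(f"R^2 on held-out test set: {model.score(X_test, y_test):.3f}")
```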
Creating a nominal g4numi simulation sample with more than \(\sim\) 300 million events takes \(\sim\) 1 day. Since we need to scan over the parameter space, we need many large samples: each time we change a beam parameter, we need to rerun the simulation with large statistics. Solutions such as GPU processing have been considered, but this becomes non-trivial as Geant4 does not support GPU processing. CPU processing on the DOE supercomputer at the National Energy Research Scientific Computing Center (NERSC) with 16000 cores would be an easier option. However, by using the uniform beam simulation, we can create large simulation samples more quickly and efficiently. A Monte Carlo data sample with the uniform beam simulation has been generated for ML applications to demonstrate the power of the technique.

Figure 26: Top: the Gaussian slices of the LBNF proton beam with means at -0.2 cm, 0.0 cm, and +0.2 cm from left to right. Bottom: muon beam profiles at the location of alcove 1 for the corresponding proton beams.

Figure 27: The observed two-dimensional muon distribution on a tracking plane at the LBNF alcove 1 location (top) for a given proton beam with \(\mu=0.0\) cm and \(\sigma=0.27\) cm. Muon flux centroid estimated at the alcove 1 location as a function of the horizontal proton beam position (bottom) at the target.

The uniform beam simulation and the multiple decay simulation have been combined to increase the simulation efficiency. In this technique, we generate a single, uniformly distributed simulation data sample on the grid with large statistics for the selected beam variable range. The uniformly distributed sample is then used to generate Gaussian beam profiles for different selected beam parameters. Beam positions, widths, etc. can be varied in post-processing once we have generated the uniform sample. In this way, the computational overhead is greatly reduced. Based on the uniform beam simulation data, a linear regression model was tested to predict the horizontal proton beam position and the horn current. To train the linear model, we used the muon monitor pixels as inputs. The trained model shows high accuracy in predicting the beam position (top) and the horn current (bottom), as shown in Fig. 29. ML algorithms combined with uniform beam simulations have also been shown to be successful in predicting the neutrino flux. We have predicted the neutrino flux at the NOvA Near Detector using simulated muon events from the 81 pixels of each muon monitor. A neural network has been used to predict the NuMI neutrino flux. Fig. 30 shows the predicted and true neutrino energy spectra at the NOvA Near Detector, where the prediction matches the training sample well. The ML predictions provide an additional layer of monitoring of the neutrino beam and horn current behavior. The results show potential for monitoring beam performance, developing trends, and identifying issues during regular beam operations. The uniform beam simulation has enabled ML models to be used to predict rare incidents and anomalies due to its efficiency and cost-effectiveness in generating large simulation samples. ## X Summary and Outlook In this paper, we have presented a simulation technique that can be applied to generate as many random beam simulation options as desired for beam scan studies. In contrast to generating multiple Gaussian samples with very large statistics, this technique produces large simulation samples with a uniform distribution of beam parameters.
From a test sample of 1 billion POT with uniformly distributed beam X and Y positions in the g4numi simulation, each generated Gaussian beam with a 0.15 cm beam width will contain 540 million weighted interactions. For the same number of interactions, a sample of 250 million POT must be generated every time from a nominal sample. Therefore, instead of generating multiple Gaussian beams with 250 million POT every time, one can generate a uniform simulation with 1 billion POT once and generate as many Gaussian beams from it as desired. Generating a single Gaussian beam with 250 million POT with the nominal simulation takes more than 2 hrs of wall time on the FermiGrid parallel processing resource at the Fermilab computing facility, and grid-submitted jobs are subject to varying and lengthy waiting times. By using a uniform beam simulation with high statistics, one can generate weights for many beam simulation combinations without requiring a large computing resource and a long simulation time.

Figure 28: An example of the application of machine learning algorithms with muon monitor pixel inputs.

Figure 29: Top: horizontal beam position prediction; bottom: horn current prediction using a linear regression model.

Figure 30: A comparison of predicted (blue) and true (orange) neutrino energy spectra at the NOvA Near Detector.

In order to demonstrate the application of the uniform simulation technique to NuMI studies, a number of examples have been presented. A similar exercise has been performed with the g4lbnf simulation, and the studies have been extended to show that the technique can be used to study LBNF beam scans through simulation. The results of the uniform simulations are shown to be consistent with the nominal simulation. A combination of uniform beam simulations and multiple decays has been shown to be applicable to multiple neutrino beamline simulations. The high statistics of Gaussian throws from the uniform simulation technique can be leveraged not only for neutrino experiments but also for various simulation requirements related to the upcoming Mu2e experiment at Fermilab or its upgrade, known as Mu2e-II, the next-generation muon conversion experiment. Due to its efficiency and cost-effectiveness in generating large simulation samples, the uniform beam simulation technique has facilitated the use of machine learning models for predicting rare incidents and anomalies. Consequently, this technique will play a crucial role in the application of machine learning algorithms across diverse research contexts. ###### Acknowledgements. This work is supported by the Fermi Research Alliance, LLC, which manages and operates the Fermi National Accelerator Laboratory pursuant to Contract No. DE-AC02-07CH11359 with the United States Department of Energy. This work is partially supported by the U.S. Department of Energy grant DE-SC0019264.
2309.04613
Modeling Nuclear Quantum Effects on Long Range Electrostatics in Nonuniform Fluids
Nuclear quantum effects play critical roles in a variety of molecular processes, especially in systems that contain hydrogen and other light nuclei, such as water. For water at ambient conditions, nuclear quantum effects are often interpreted as local effects resulting from a smearing of the hydrogen atom distribution. However, the orientational structure of water at interfaces determines long range effects like electrostatics through the O-H bond ordering that is impacted by nuclear quantum effects. In this work, I examine nuclear quantum effects on long range electrostatics of water confined between hydrophobic walls using path integral simulations. To do so, I combine concepts from local molecular field (LMF) theory with path integral methods at varying levels of approximation to develop efficient and physically intuitive approaches for describing long range electrostatics in nonuniform quantum systems. Using these approaches, I show that quantum water requires larger electrostatic forces to achieve the same level of interfacial screening as the corresponding classical system. This work highlights subtleties of electrostatics in nonuniform classical and quantum molecular systems, and the methods presented here are expected to be of use to efficiently model nuclear quantum effects in large systems.
Richard C. Remsing
2023-09-08T22:07:58Z
http://arxiv.org/abs/2309.04613v1
# Modeling Nuclear Quantum Effects on Long Range Electrostatics in Nonuniform Fluids ###### Abstract Nuclear quantum effects play critical roles in a variety of molecular processes, especially in systems that contain hydrogen and other light nuclei, such as water. For water at ambient conditions, nuclear quantum effects are often interpreted as local effects resulting from a smearing of the hydrogen atom distribution. However, the orientational structure of water at interfaces determines long range effects like electrostatics through the O-H bond ordering that is impacted by nuclear quantum effects. In this work, I examine nuclear quantum effects on long range electrostatics of water confined between hydrophobic walls using path integral simulations. To do so, I combine concepts from local molecular field (LMF) theory with path integral methods at varying levels of approximation to develop efficient and physically intuitive approaches for describing long range electrostatics in nonuniform quantum systems. Using these approaches, I show that quantum water requires larger electrostatic forces to achieve the same level of interfacial screening as the corresponding classical system. This work highlights subtleties of electrostatics in nonuniform classical and quantum molecular systems, and the methods presented here are expected to be of use to efficiently model nuclear quantum effects in large systems. ## I Introduction The quantum mechanical behavior of atomic nuclei can have important effects on molecular processes. Chemical kinetics, isotope effects, and the structure of molecular systems are influenced by nuclear quantum effects (NQEs). Therefore, NQEs must be adequately described in theoretical or computational models of molecular systems, especially when light nuclei or low temperatures are involved, because NQEs are prevalent under these conditions. In water at ambient conditions, NQEs primarily impact hydrogen atoms and affect the structure, dynamics, and chemical reactivity of liquid water [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. Quantum mechanical treatment of water is often considered to smear out the positions of the hydrogens, although non-trivial electronic quantum effects can couple to the nuclear fluctuations in ab initio treatments. In all of these situations, NQEs are considered mainly local effects, impacting the hydrogen bond structure and dynamics of water. However, any changes in the hydrogen bond structure can also impact non-local properties of water, especially in nonuniform systems where changes in liquid structure are magnified by interfaces. In particular, water preferentially adopts specific orientations at interfaces [17; 18; 19; 20; 21; 22; 23; 24]. These orientational preferences control interfacial electrostatics and are dictated by the water hydrogen bond network. This hydrogen bond network, in turn, is sensitive to NQEs. Ultimately, this suggests that NQEs can indirectly impact orientational properties of water in nonuniform systems and the consequent electrostatic properties of the system. Here, I investigate NQEs on the long range electrostatic properties of water in nonuniform systems using molecular simulations and quantum statistical mechanics combined with liquid state theory. Standard approaches to modeling NQEs are path integral molecular dynamics (PIMD) simulations [25; 26; 27] and the closely related ring polymer molecular dynamics (RPMD) simulation approach [28].
In these approaches, each quantum particle is replaced by an isomorphic classical _ring polymer_ composed of \(P\) beads (or monomers). For large enough \(P\), one can determine exact statistical averages of static quantities. Using RPMD, one can model approximate quantum dynamics by evolving the system in time using classical dynamics, but in the extended phase space of the ring polymer system [28]. Modeling the ring polymer system amounts to simulating \(P\) coupled copies or replicas of the classical system, enabling the use and extension of standard computational algorithms. Replicating the system \(P\) times leads to significant computational cost, especially when \(P\) is large. This is especially true for long range interactions, like electrostatics, which are already expensive in the classical system (\(P=1\)). However, by judiciously separating electrostatic interactions into short and long range components, the number of beads needed to evaluate the long range part can be significantly reduced, even down to the \(P=1\) limit [29; 30]. This approach -- ring polymer contraction (RPC) -- evaluates electrostatic interactions on a subset of beads and distributes the result over all the beads of the polymer, reducing the number of evaluations of long range interactions from \(P\) to \(P^{\prime}<P\). Here, I focus on the limit where long range interactions are evaluated only at the centroid of each ring polymer, \(P^{\prime}=1\). While RPC can significantly reduce the computational cost of PIMD and RPMD simulations, this approach is still plagued with the usual problems associated with evaluating long range electrostatics -- Ewald summations are expensive and conceptually difficult. The conceptual issues are particularly problematic, because Ewald sums and other lattice summation techniques can often lead to significant geometry-dependent finite size effects, as well as other artifacts associated with periodic boundary conditions (PBCs) [31; 32; 33]. An appealing alternative to lattice sums is local molecular field (LMF) theory and related developments [34; 35; 36; 37; 38]. LMF theory relies on a physically intuitive separation of electrostatic interactions into short and long range components, much like RPC, and replaces all electrostatic interactions by the short range part. The averaged effects of the long range interactions are then modeled through an effective external field that is chosen to produce the same structure as the full system with long range interactions. Structure, thermodynamics, and even dynamics can be efficiently and accurately predicted by LMF simulations, making it a useful alternative to lattice summations. In this work, I combine LMF theory and the path integral isomorphism to develop RPC-based approximations and model nuclear quantum effects in water confined between model nonpolar walls. After a brief review of RPC and LMF theory, I discuss strategies for combining the two to obtain approaches for the efficient and accurate prediction of water structure between model hydrophobic walls. These approaches can aid in reducing the cost of PIMD and RPMD simulations while also helping to build physical intuition regarding the effects of long range interactions in heterogeneous quantum systems.
## II Theory ### Ring Polymer Contraction In standard PIMD and RPMD simulation approaches, Feynman's path integral formulation of quantum mechanics is used to model a system of \(N\) distinguishable particles with Hamiltonian \(\mathcal{H}(\mathbf{P},\mathbf{R})\), where \(\mathbf{P}\) and \(\mathbf{R}\) represent the momenta and positions of all \(N\) particles at a single point in phase space [39; 28; 40]. The partition function, \(\mathcal{Z}\), of this system can be approximated by that of an isomorphic system composed of classical ring polymers, each composed of \(P\) beads, \[\mathcal{Z}\approx\mathcal{Z}_{P}=\frac{1}{(2\pi\hbar)^{3NP}}\int d\mathbf{P} \int d\mathbf{R}e^{-\beta\mathcal{H}_{P}(\mathbf{P},\mathbf{R})/P}, \tag{1}\] which becomes an equality in the \(P\rightarrow\infty\) limit. Here, \(\beta=(k_{\mathrm{B}}T)^{-1}\) and the ring polymer Hamiltonian is \[\mathcal{H}_{P}(\mathbf{P},\mathbf{R}) =\sum_{i=1}^{N}\sum_{\alpha=1}^{P}\left(\frac{\left|\mathbf{p}_{ i}^{(\alpha)}\right|^{2}}{2m_{i}}+\frac{1}{2}m_{i}\omega_{P}^{2}\left| \mathbf{r}_{i}^{(\alpha)}-\mathbf{r}_{i}^{(\alpha-1)}\right|^{2}\right)\] \[+\sum_{\alpha=1}^{P}V(\mathbf{R}^{(\alpha)}), \tag{2}\] where the last term is the sum of the potential energy over all beads, \(\mathbf{r}_{i}^{(\alpha)}\) and \(\mathbf{p}_{i}^{(\alpha)}\) are the position and momentum of bead \(\alpha\) of site \(i\), \(\mathbf{R}^{(\alpha)}\) represents the position vector for bead \(\alpha\) of all \(N\) particles in a single configuration, \(m_{i}\) is the mass of site \(i\), and \(\omega_{P}=P/\beta\hbar\) is the spring constant. In general, this Hamiltonian can involve many-body interactions, but here I focus on one- and two-body interactions, \[V(\mathbf{R}^{(\alpha)})=\sum_{i=1}^{N}\phi(\mathbf{r}_{i}^{(\alpha)})+\sum_{ i=1}^{N-1}\sum_{j=i+1}^{N}w\left(\left|\mathbf{r}_{i}^{(\alpha)}-\mathbf{r}_{j}^{( \alpha)}\right|\right), \tag{3}\] where the one-body potential \(\phi(\mathbf{r})\) arises from an external field. The two-body potential \(w(r)\) involves Lennard-Jones (LJ) and electrostatic interactions, \[w(r_{ij})=u_{\mathrm{LJ}}(r_{ij})+q_{i}q_{j}v(r_{ij}), \tag{4}\] where \(r_{ij}=|\mathbf{r}_{i}-\mathbf{r}_{j}|\), \(u_{\mathrm{LJ}}(r)\) is the LJ potential, and \(v(r)=1/r\). Lennard-Jones interactions are typically truncated at some distance and their effects beyond the cutoff accounted for with a correction. In contrast, electrostatic interactions are long-ranged and are typically evaluated using lattice summation techniques like Ewald summation. Lattice summations are generally expensive and significantly increase the cost of PI simulations when the system is replicated \(P\) times to construct the ring polymers. Ring polymer contraction (RPC) can be used to reduce this cost by lowering the number of beads on which long range interactions need to be evaluated [30; 29]. RPC splits the interparticle interactions into two components, \(V(r)=V_{\mathrm{S}}(r)+V_{\mathrm{L}}(r)\), where \(V_{\mathrm{S}}(r)\) is the short range part of the potential and \(V_{\mathrm{L}}(r)\) is the long range part of the potential. The splitting is chosen such that \(V_{\mathrm{L}}(r)\) varies slowly on the length scale of the largest ring polymer, estimated by the ensemble averaged radius of gyration. For water at ambient conditions, this length scale is close to the free particle limit, \(\lambda_{\mathrm{free}}=\sqrt{\beta\hbar^{2}/4m}\). 
When the potential varies slowly on the scale of \(\lambda_{\mathrm{free}}\), the total interaction between two ring polymers can be approximated by \(P\) times the interaction between their centroids, \[\sum_{\alpha=1}^{P}V_{\mathrm{L}}\left(\left|\mathbf{r}_{i}^{(\alpha)}- \mathbf{r}_{j}^{(\alpha)}\right|\right)\approx PV_{\mathrm{L}}(|\mathbf{\bar{ r}}_{i}-\mathbf{\bar{r}}_{j}|) \tag{5}\] where \(\mathbf{\bar{r}}_{i}\) is the centroid of ring polymer \(i\). Forces can be readily evaluated following previous work [29; 30]. The centroid RPC approximation, Eq. 5, significantly reduces the cost of evaluating long range interactions without sacrificing accuracy [29; 30]. Using local molecular field theory, summarized in the next section, we can further reduce the cost of evaluating long range interactions. ### Local Molecular Field Theory LMF theory accounts for the averaged effects of long range electrostatics with a renormalized or effective electrostatic potential [34; 35]. The first step in determining this potential is to separate intermolecular Coulomb interactions into short and long range components. For LMF theory to be valid, long range interactions must vary slowly over typical nearest neighbor distances. As such, LMF theory separates the \(1/r\) portion of the Coulomb potential according to \[v(r) =\frac{1}{r} \tag{6}\] \[=\frac{\mathrm{erfc}(r/\sigma)}{r}+\frac{\mathrm{erf}(r/\sigma)}{r}\] (7) \[\equiv v_{0}(r)+v_{1}(r), \tag{8}\] where \(\sigma\) is the LMF smoothing length that is on the order of intermolecular correlations, \(v_{0}(r)\) is the short-range component of the electrostatic interactions, and \(v_{1}(r)\) is the long-range component. For water at ambient conditions, previous work has shown that \(\sigma\geq 3\) Å [41], and here I use a conservative value of \(\sigma=4.5\) Å. In LMF theory, the full model is replaced by its Gaussian-truncated (GT) counterpart, in which \(v(r)\) is replaced by \(v_{0}(r)\) for all sites in the system. The averaged effects of fluid-fluid long range electrostatics are taken into account through the renormalized electrostatic potential \[\mathcal{V}_{\mathrm{R}}(\mathbf{r}) =\mathcal{V}(\mathbf{r})+\int d\mathbf{r}^{\prime}\rho^{q}( \mathbf{r}^{\prime})v_{1}(|\mathbf{r}-\mathbf{r}^{\prime}|) \tag{9}\] \[\equiv\mathcal{V}(\mathbf{r})+\mathcal{V}_{\mathrm{S}}(\mathbf{ r}), \tag{10}\] where \(\mathcal{V}(\mathbf{r})\) is the external electrostatic potential present in the full system, equal to zero for the systems studied here, \(\rho^{q}(\mathbf{r})\) is the ensemble averaged singlet charge density of the system, given by \[\rho^{q}(\mathbf{r})=\left\langle\rho^{q}(\mathbf{r};\mathbf{R})\right\rangle, \tag{11}\] and \(\rho^{q}(\mathbf{r};\mathbf{R})\) is the charge density operator evaluated in configuration \(\mathbf{R}\). The charge density operator is defined in the next section. In general, the external field can be split into short and long range parts, \(\mathcal{V}(\mathbf{r})=\mathcal{V}_{0}(\mathbf{r})+\mathcal{V}_{1}(\mathbf{ r})\), such that the LMF can be rewritten in a way that isolates the long range interactions as \(\mathcal{V}_{\mathrm{R}}(\mathbf{r})=\mathcal{V}_{0}(\mathbf{r})+\mathcal{V}_ {\mathrm{R1}}(\mathbf{r})\). Equation 10 is self-consistent and can be solved through brute-force simulations or using methods based on linear response theory [42].
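Both RPC and LMF rest on the same erf/erfc split of \(1/r\). A minimal sketch follows, assuming unit charges and an illustrative ring polymer geometry, of the split in Eqs. 6-8 and of the centroid contraction of Eq. 5 applied to the long range component \(v_{1}\).

```python
# Sketch of the erf/erfc split of 1/r (Eqs. 6-8) and the centroid contraction
# (Eq. 5) applied to its long range part v1. Array shapes, unit charges, and
# the random geometry are assumptions for illustration.
import numpy as np
from scipy.special import erf, erfc

sigma = 4.5  # LMF smoothing length in Angstroms, as in the text

def v0(r):  # short range component, rapidly decaying beyond ~sigma
    return erfc(r / sigma) / r

def v1(r):  # long range component, slowly varying on the scale of sigma
    return erf(r / sigma) / r

r = np.linspace(0.5, 15.0, 50)
assert np.allclose(v0(r) + v1(r), 1.0 / r)   # the split is exact

def contracted_v1_energy(positions):
    """Eq. 5 for unit charges: sum_alpha v1(|r_i^a - r_j^a|) ~ P * v1(|rbar_i - rbar_j|)."""
    P, N, _ = positions.shape
    centroids = positions.mean(axis=0)       # (N, 3) ring polymer centroids
    energy = 0.0
    for i in range(N - 1):
        for j in range(i + 1, N):
            energy += P * v1(np.linalg.norm(centroids[i] - centroids[j]))
    return energy

rng = np.random.default_rng(1)
centers = rng.uniform(0.0, 12.0, size=(1, 10, 3))          # 10 sites in a 12 A box
beads = centers + rng.normal(scale=0.2, size=(32, 10, 3))  # P = 32 beads per site
print(contracted_v1_energy(beads))
```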
Simulating the GT model in the presence of \(\mathcal{V}_{\mathrm{R}}(\mathbf{r})\) yields structures in agreement with the full system [43; 44; 45], and thermodynamics can be obtained with previously-derived corrections [35; 37; 46; 47; 48]. ### Solving the Local Molecular Field Equation for Quantum Systems The LMF potential can be obtained by writing the ring polymer expression for the charge density operator of the system in configuration \(\mathbf{R}\) as [28; 49] \[\rho^{q}(\mathbf{r};\mathbf{R}) =\frac{1}{P}\sum_{\alpha=1}^{P}\sum_{i=1}^{N}q_{i}^{(\alpha)} \delta\left(\mathbf{r}-\mathbf{r}_{i}^{(\alpha)}(\mathbf{R})\right) \tag{12}\] \[=\frac{1}{P}\sum_{\alpha=1}^{P}\rho^{q\alpha}(\mathbf{r};\mathbf{ R}). \tag{13}\] Using this expression for the charge density, the LMF potential is given by \[\mathcal{V}_{\mathrm{R}}(\mathbf{r}) =\mathcal{V}(\mathbf{r})+\frac{1}{P}\sum_{\alpha=1}^{P}\int d \mathbf{r}^{\prime}\rho^{q\alpha}(\mathbf{r}^{\prime})v_{1}(|\mathbf{r}- \mathbf{r}^{\prime}|) \tag{14}\] \[\equiv\mathcal{V}(\mathbf{r})+\frac{1}{P}\sum_{\alpha=1}^{P} \mathcal{V}_{\mathrm{S}}^{(\alpha)}(\mathbf{r}). \tag{15}\] I will refer to this as the path integral local molecular field (PI-LMF). The PI-LMF Equation 15 must be solved self-consistently. A self-consistent solution for \(\mathcal{V}_{\mathrm{R}}(\mathbf{r})\) can be found by iterating with molecular simulations, but this is expensive, especially for quantum systems. This can be circumvented by iterating to self-consistency using linear response theory (LRT) instead of simulations to predict the density induced by each field, as described by Hu and Weeks for classical fluids [42]. I now describe how to extend this LRT approach for solving the LMF equation to path integral models. In a system of quantum particles described within the path integral formalism, the Hamiltonian is replaced by the corresponding approximation involving ring polymers composed of \(P\) beads each. Ignoring the momenta -- we are only concerned with configurational averages here -- the path integral Hamiltonian can be written as \[\mathcal{H}_{P}(\mathbf{R}) =\frac{1}{P}\sum_{\alpha=1}^{P}\left[U_{0}^{(\alpha)}(\mathbf{R})+ \Phi_{0}^{(\alpha)}(\mathbf{R})+\Phi_{\mathrm{R1}}^{(\alpha)}(\mathbf{R})\right] \tag{16}\] \[=\mathcal{H}_{P,0}(\mathbf{R})+\frac{1}{P}\sum_{\alpha=1}^{P} \Phi_{\mathrm{R1}}^{(\alpha)}(\mathbf{R}), \tag{17}\] where the bond potentials between neighboring beads are included in \(U_{0}^{(\alpha)}(\mathbf{R})\). The Hamiltonian \(\mathcal{H}_{P,0}(\mathbf{R})=\frac{1}{P}\sum_{\alpha=1}^{P}\left[U_{0}^{( \alpha)}(\mathbf{R})+\Phi_{0}^{(\alpha)}(\mathbf{R})\right]\) represents the purely short ranged (reference) system, \(U_{0}^{(\alpha)}(\mathbf{R})\) is the total potential energy of the short range pair interactions for bead \(\alpha\), and \(\Phi_{0}^{(\alpha)}(\mathbf{R})\) is the corresponding total short range one-body potential energy. The total potential energy of the long range interactions for each bead is contained within \(\Phi_{\mathrm{R1}}^{(\alpha)}(\mathbf{R})=\int d\mathbf{r}\rho^{q\alpha}( \mathbf{r};\mathbf{R})\mathcal{V}_{\mathrm{R1}}(\mathbf{r})\).
Using this separation of the Hamiltonian into short and long range components, the average charge density can be written as an ensemble average in the short range system according to \[\rho^{q}(\mathbf{r}) =\left\langle\frac{1}{P}\sum_{\alpha=1}^{P}\rho^{q(\alpha)}(\mathbf{r };\mathbf{R})\right\rangle\] \[=\frac{\left\langle\frac{1}{P}\sum_{\alpha=1}^{P}\rho^{q(\alpha)}( \mathbf{r};\mathbf{R})e^{-\frac{\beta}{P}\sum_{\gamma=1}^{P}\Phi_{\mathrm{R1}}^ {(\gamma)}(\mathbf{R})}\right\rangle_{0}}{\left\langle e^{-\frac{\beta}{P} \sum_{\alpha=1}^{P}\Phi_{\mathrm{R1}}^{(\alpha)}(\mathbf{R})}\right\rangle_{0}}. \tag{18}\] Now, noting that the instantaneous bead-averaged field energy is \[\Phi_{\mathrm{R1}}(\mathbf{R})=\frac{1}{P}\sum_{\alpha=1}^{P}\Phi_{\mathrm{R1} }^{(\alpha)}(\mathbf{R}), \tag{19}\] we can rewrite the charge density as an average over configurations in the short range system, \[\rho^{q}(\mathbf{r})=\frac{\left\langle\rho^{q}(\mathbf{r};\mathbf{R})e^{- \beta\Phi_{\mathrm{R1}}(\mathbf{R})}\right\rangle_{0}}{\left\langle e^{-\beta \Phi_{\mathrm{R1}}(\mathbf{R})}\right\rangle_{0}}. \tag{20}\] We can then linearize this expression for the charge density of quantum particles to obtain the linear response approximation \[\rho^{q}(\mathbf{r})\approx\left\langle\rho^{q}(\mathbf{r};\mathbf{R})\right\rangle _{0}-\beta\left\langle\delta\rho^{q}(\mathbf{r};\mathbf{R})\delta\Phi_{ \mathrm{R1}}(\mathbf{R})\right\rangle_{0}, \tag{21}\] where \(\delta\rho^{q}(\mathbf{r};\mathbf{R})=\rho^{q}(\mathbf{r};\mathbf{R})-\left \langle\rho^{q}(\mathbf{r};\mathbf{R})\right\rangle_{0}\) and \(\delta\Phi_{\mathrm{R1}}(\mathbf{R})=\Phi_{\mathrm{R1}}(\mathbf{R})-\left \langle\Phi_{\mathrm{R1}}(\mathbf{R})\right\rangle_{0}\). Equation 21 is analogous to the classical result of Hu and Weeks [42], except the classical operators are replaced by their bead-averaged counterparts [28; 49]. Equations 15 and 21 are the main results of this section and are used to obtain a self-consistent solution to the LMF equation through iteration. The benefits of using Eq. 21 instead of the more traditional form of the linear response approximation are magnified in path integral treatments of quantum systems. In this case, the more traditional expression \[\rho^{q}(\mathbf{r})\approx\left\langle\rho^{q}(\mathbf{r};\mathbf{R})\right \rangle_{0}-\frac{\beta}{P^{2}}\sum_{\alpha=1}^{P}\sum_{\gamma=1}^{P}\int d \mathbf{r}^{\prime}\mathcal{V}_{\mathrm{R1}}(\mathbf{r}^{\prime})\chi_{\alpha\gamma }^{qq}(\mathbf{r},\mathbf{r}^{\prime}) \tag{22}\] involves pair correlations between all beads in the system, including those on different slices of imaginary time (\(\alpha\) and \(\gamma\)), through the quantum charge-charge linear response function [50; 51; 40] \[\chi_{\alpha\gamma}^{qq}(\mathbf{r},\mathbf{r}^{\prime})=\left\langle\delta \rho^{q(\alpha)}(\mathbf{r};\mathbf{R})\delta\rho^{q(\gamma)}(\mathbf{r}^{ \prime};\mathbf{R})\right\rangle. \tag{23}\] In addition to the difficulties of evaluating a six-dimensional correlation function in a nonuniform system, the need to evaluate correlations between different points in imaginary time further increases the expense of using Eq. 22. Because of these difficulties, the much more efficient Eq. 21 is preferred to solve the LMF equation for path integral models.
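A schematic sketch of this self-consistent iteration (Eqs. 15 and 21) is given below; the grid, the response callable, the convolution matrix, and the mixing parameter are placeholders, not production code.

```python
# Schematic PI-LMF loop: update the long range field from the bead-averaged
# charge density predicted by linear response, and mix until converged.
import numpy as np

def solve_pi_lmf(rho_q0, response, v1_kernel, mix=0.5, tol=1e-6, max_iter=200):
    """rho_q0: <rho^q>_0 from a short range (GT) simulation on a grid;
    response(V): linear-response estimate of the induced charge density for a
    trial field V (the second term of Eq. 21); v1_kernel: matrix representing
    convolution with v1, including the grid volume element."""
    V_R1 = np.zeros_like(rho_q0)
    for _ in range(max_iter):
        rho_q = rho_q0 + response(V_R1)          # Eq. 21
        V_new = v1_kernel @ rho_q                # Eq. 15 with V(r) = 0
        if np.max(np.abs(V_new - V_R1)) < tol:
            return V_new, rho_q
        V_R1 = (1.0 - mix) * V_R1 + mix * V_new  # simple linear mixing
    raise RuntimeError("PI-LMF iteration did not converge")
```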
### Combining Local Molecular Field Theory and Ring Polymer Contraction While the solution to the LMF equation in the previous section can be obtained from simulation results and linear response theory, the slowly-varying nature of the long range potentials suggests that simpler approximations can be exploited to more efficiently solve the LMF equation. One approach is to combine LMF theory with RPC. A fundamental concept in both LMF theory and RPC is the separation of interaction potentials into short and long range components based on physical principles. In RPC, electrostatic interactions are separated so that the long range component is slowly-varying over the size of the ring polymer [29; 30]. In LMF theory, electrostatic interactions are separated so that the long range component is uniformly slowly-varying over typical nearest-neighbor distances (or a correlation length) in the liquid [34; 35]. In liquids like water at ambient conditions, the typical separation length scales are similar, and I will indicate this by the LMF smoothing length \(\sigma\). I follow these principles and use the typical LMF smoothing length of \(\sigma=4.5\) A to separate the potential into short and long range components, \(v_{0}(r)\) and \(v_{1}(r)\), respectively, as described in the previous section. In RPC, the electrostatic (pair) potential \(v_{1}(r)\) is evaluated between centroid positions. By combining RPC and LMF, long range pair interactions are completely removed, \(V_{\mathrm{L}}=0\). The averaged effects of long range interactions are instead accounted for via the effective one-body electrostatic (LMF) potential \(\mathcal{V}_{\mathrm{R}}(\mathbf{r})\), and RPC can be used to evaluate the long range part of the LMF potential at centroid positions only; \[\sum_{i=1}^{N-1}\sum_{j>i}^{N}PV_{\mathrm{L}}(|\bar{\mathbf{r}}_{i}-\bar{ \mathbf{r}}_{j}|)\rightarrow\sum_{i=1}^{N}P\mathcal{V}_{\mathrm{R1}}(\bar{ \mathbf{r}}_{i}). \tag{24}\] This strategy results in the LMF-RPC scheme for evaluating long range interactions. To determine the effective field \(\mathcal{V}_{\mathrm{R}}(\mathbf{r})\) within the LMF-RPC scheme using Eq. 21, the long range potential energy is evaluated only at the location of the centroid. Defining the centroid charge density operator, \[\bar{\rho}^{q}(\mathbf{r};\mathbf{R})=\sum_{i=1}^{N}q_{i}\delta\left(\mathbf{r }-\bar{\mathbf{r}}_{i}(\mathbf{R})\right), \tag{25}\] where \(\bar{\mathbf{r}}_{i}\) is the position of the centroid of particle \(i\), the total long range potential energy within the LMF-RPC approximation is \[\Phi_{\mathrm{R1}}^{\mathrm{RPC}}(\mathbf{R})=\int d\mathbf{r}\bar{\rho}^{q}( \mathbf{r};\mathbf{R})\mathcal{V}_{\mathrm{R1}}(\mathbf{r}). \tag{26}\] This corresponds to evaluating the field \(\mathcal{V}_{\mathrm{R1}}(\mathbf{r})\), determined using all beads, at the location of the centroids only. As a result, the linear response approximation for the LMF-RPC charge density is \[\rho^{q}(\mathbf{r})\approx\left\langle\rho^{q}(\mathbf{r};\mathbf{R})\right\rangle _{0}-\beta\left\langle\delta\rho^{q}(\mathbf{r};\mathbf{R})\delta\Phi_{\mathrm{ R1}}^{\mathrm{RPC}}(\mathbf{R})\right\rangle_{0}, \tag{27}\] with an analogous expression for the centroid charge density. The converged charge density and \(\mathcal{V}_{\mathrm{R}}(\mathbf{r})\) are obtained within the LMF-RPC approximation by iterating Eqs. 15 and 27 to self consistency. 
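As an illustration of how the two separations dovetail, the sketch below assumes the standard LMF Gaussian-smoothed Coulomb split, \(v_{1}(r)=\operatorname{erf}(r/\sigma)/r\) with \(v_{0}(r)=\operatorname{erfc}(r/\sigma)/r\) [34; 35], and evaluates the one-body long range energy at ring polymer centroids only, per Eqs. 24 and 26. The callable `vr1_of_r`, standing in for the converged field (e.g., a grid interpolant), and all function names are hypothetical.

```python
import numpy as np
from scipy.special import erf, erfc

SIGMA = 4.5  # LMF smoothing length (Angstrom)

def v1(r, sigma=SIGMA):
    """Slowly-varying long range part of the Coulomb kernel 1/r
    (Coulomb prefactor omitted; assumes r > 0)."""
    return erf(r / sigma) / r

def v0(r, sigma=SIGMA):
    """Short range remainder 1/r - v1(r); decays over ~sigma."""
    return erfc(r / sigma) / r

def centroids(beads):
    """Ring polymer centroids; beads has shape (P, N, 3)."""
    return beads.mean(axis=0)

def phi_r1_rpc(beads, charges, vr1_of_r):
    """Phi_R1^RPC of Eq. 26: the long range one-body energy evaluated at
    the centroids only (within the full Hamiltonian, Eq. 24's factor of P
    cancels against the 1/P bead average of Eq. 16)."""
    rbar = centroids(beads)                   # (N, 3)
    return np.sum(charges * vr1_of_r(rbar))   # sum_i q_i V_R1(rbar_i)
```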
### Centroid Approximation

The LMF-RPC approach involves evaluating the charge density and centroid charge density, as well as the full \(\mathcal{V}_{\mathrm{R}}(\mathbf{r})\). Such a complicated approach might not be necessary. Because \(v_{1}(r)\) essentially smears the charge distribution over the length scale \(\sigma\), one might anticipate that the LMF potential is well approximated entirely in terms of centroids, \(\mathcal{V}_{\mathrm{R}}(\mathbf{r})\approx\bar{\mathcal{V}}_{\mathrm{R}}( \mathbf{r})\), where \[\bar{\mathcal{V}}_{\mathrm{R}}(\mathbf{r}) =\mathcal{V}(\mathbf{r})+\int d\mathbf{r}^{\prime}\bar{\rho}^{q} (\mathbf{r}^{\prime})v_{1}(|\mathbf{r}-\mathbf{r}^{\prime}|) \tag{28}\] \[\equiv\mathcal{V}(\mathbf{r})+\bar{\mathcal{V}}_{\mathrm{S}}( \mathbf{r}). \tag{29}\] The centroid approximation is then inserted into Eq. 21 to iterate the LMF equation to self-consistency in conjunction with the linear response approximation for the centroid charge density \[\bar{\rho}^{q}(\mathbf{r})\approx\left\langle\bar{\rho}^{q}(\mathbf{r};\mathbf{R}) \right\rangle_{0}-\beta\left\langle\delta\bar{\rho}^{q}(\mathbf{r};\mathbf{R}) \delta\bar{\Phi}_{\mathrm{R1}}(\mathbf{R})\right\rangle_{0}, \tag{30}\] where \(\bar{\Phi}_{\mathrm{R1}}(\mathbf{R})=\int d\mathbf{r}\bar{\rho}^{q}(\mathbf{r };\mathbf{R})\bar{\mathcal{V}}_{\mathrm{R1}}(\mathbf{r})\) is the instantaneous energy from the centroid approximation to the long range field evaluated at the centroid positions.

### Feynman-Kleinert Approximation

The LMF-RPC and centroid approaches reduce the number of sites needed to evaluate \(\mathcal{V}_{\mathrm{R}}(\mathbf{r})\), but both still require a path integral simulation in the purely short range GT system to evaluate ensemble averages. Instead, we could first model a classical (\(P=1\)) GT system and use the Feynman-Kleinert (FK) procedure to estimate the quantum LMF from its classical counterpart [52]. This further approximation, here called the FK approximation, in essence corresponds to approximating a quantum observable by a Gaussian smoothing of its classical counterpart [52]. This can be used to determine the LMF for the quantum system. First, I determine the LMF potential for the classical system, \(\mathcal{V}_{\mathrm{R}}^{\mathrm{cl}}(\mathbf{r})\), by self-consistent iteration using the classical linear response approximation [42]. The linear response approximation is then used to predict the oxygen and hydrogen site densities in the LMF system, \(\rho_{\mathrm{O}}(\mathbf{r})\) and \(\rho_{\mathrm{H}}(\mathbf{r})\), respectively. I then smooth these densities over the lengths \(l_{\mathrm{O}}\) and \(l_{\mathrm{H}}\) to convert the classical charge density into an approximation of the quantum charge density, \[\rho^{ql}(\mathbf{r}) =\int d\mathbf{r}^{\prime}\big{[}q_{\mathrm{O}}\rho_{\mathrm{O}}( \mathbf{r}^{\prime})\rho_{G}(|\mathbf{r}-\mathbf{r}^{\prime}|\,;l_{\mathrm{O}})+q_{\mathrm{H}}\rho_{\mathrm{H}}(\mathbf{r}^{\prime})\rho_{G}(| \mathbf{r}-\mathbf{r}^{\prime}|\,;l_{\mathrm{H}})\big{]}, \tag{31}\] where \[\rho_{G}(r;l)=\frac{1}{l^{3}\pi^{3/2}}e^{-r^{2}/l^{2}} \tag{32}\] is a spherical Gaussian of width \(l\). Physically, \(l\) corresponds to the average size of a ring polymer, quantified by its radius of gyration, for example. Because of the different masses of oxygen and hydrogen, and consequently different spreads, we need different smoothing lengths for each, \(l_{\mathrm{O}}\) and \(l_{\mathrm{H}}\), respectively, and we need to separate the charge density into its components from oxygen and hydrogen sites. Here, I approximate the size of the ring polymers by their free particle values, \(l_{\mathrm{O}}\approx\lambda_{\mathrm{free,O}}\approx 0.05\) A and \(l_{\mathrm{H}}\approx\lambda_{\mathrm{free,H}}\approx 0.2\) A. This crude approximation is reasonable for water at ambient conditions, where the average radius of gyration of the ring polymers is close to their free particle values [30]. After the densities are smoothed to account for nuclear quantum effects within the FK approximation, I then perform a second smoothing over the length scale \(\sigma\) by convolving the quantum charge density with \(v_{1}\). The resulting FK approximation to the LMF potential is \(\mathcal{V}_{\mathrm{R}}(\mathbf{r})\approx\mathcal{V}_{\mathrm{R}}^{\mathrm{ FK}}(\mathbf{r})\), where \[\mathcal{V}_{\mathrm{R}}^{\mathrm{FK}}(\mathbf{r}) =\mathcal{V}(\mathbf{r})+\int d\mathbf{r}^{\prime}\rho^{ql}( \mathbf{r}^{\prime})v_{1}(|\mathbf{r}-\mathbf{r}^{\prime}|) \tag{33}\] \[\equiv\mathcal{V}(\mathbf{r})+\mathcal{V}_{\mathrm{S}}^{\mathrm{FK }}(\mathbf{r}). \tag{34}\] In water at room temperature, the long range electrostatic interactions vary much more slowly than the ring polymers for the nuclei, \(\sigma\gg l_{\mathrm{H}}>l_{\mathrm{O}}\). Therefore, one may anticipate that any deficiencies in the FK approximation -- deviations of the ring polymer from a spherical Gaussian -- will be washed out by smoothing over \(\sigma\), and the FK approximation to the LMF potential will be reasonably accurate at these conditions.
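In a planar geometry, the convolution of Eq. 31 reduces to one-dimensional Gaussian smoothing of the classical site density profiles. A minimal sketch follows, treating the free particle lengths above as inputs; the SPC-like partial charges are illustrative placeholders rather than the force field parameters used here.

```python
import numpy as np

Q_O, Q_H = -0.84, 0.42   # illustrative SPC-like partial charges (e)
L_O, L_H = 0.05, 0.20    # free particle ring polymer sizes (Angstrom)

def gauss1d(z, l):
    """Planar average of the spherical Gaussian rho_G(r; l) of Eq. 32:
    integrating over x and y leaves a normalized 1D Gaussian of width l."""
    return np.exp(-(z / l) ** 2) / (l * np.sqrt(np.pi))

def fk_charge_density(z, rho_o, rho_h):
    """FK estimate of the quantum charge density (Eq. 31): smear each
    classical site density profile over its ring polymer size."""
    dz = z[1] - z[0]
    zk = z - z[(len(z) - 1) // 2]  # kernel grid centered at zero
    rho = np.zeros_like(z)
    for q, rho_site, l in ((Q_O, rho_o, L_O), (Q_H, rho_h, L_H)):
        rho += q * dz * np.convolve(rho_site, gauss1d(zk, l), mode="same")
    return rho  # a further convolution with v1 then yields V_S^FK (Eq. 33)
```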
## III Results and Discussion

I demonstrate the utility of the LMF-RPC scheme using what has become the canonical system for examining LMF theory-based methods -- water confined between idealized smooth hydrophobic walls. Near the walls, the effects of long range interactions do not cancel, as they do in bulk, but instead create forces near the water-wall interface. The averaged effects of long range electrostatics provide a torque on interfacial water molecules, resulting in dipolar screening and the proper orientational ordering of water molecules near the wall [43; 44; 45]. Neglecting long range electrostatics in this system results in over-orientation of interfacial water molecules and a non-zero electric field in the bulk due to the absence of dielectric screening in the purely short range system. Here, I show that the same general physics arises in path integral representations of water confined between hydrophobic walls and that LMF theory adapted to path integral methods can account for the averaged effects of long range electrostatics.

### Ring Polymer Local Molecular Field Theory can Account for Long Range Electrostatics

The role of long range electrostatics in determining the structure of confined water can be observed by comparing simulation results obtained with the Full and truncated models, Fig. 1. The charge density of water is similar in the Full, GT, and PI-LMF systems. However, previous work has shown that differences relevant to long range electrostatics are often hidden under large, atomic-scale fluctuations in the charge density [34; 43; 53]. These differences influence the orientational preferences of interfacial water.
In the GT systems, O-H bonds point toward the wall more than in the full systems, consistent with expectations from classical simulations. The orientation of interfacial water molecules alters the electrostatic potential, given by \[\Phi(z)=-\int_{-\infty}^{z}dz^{\prime}\int_{-\infty}^{z^{\prime}}dz^{\prime \prime}\rho^{q}(z^{\prime\prime}). \tag{35}\] The resulting electrostatic potential determined from GT configurations does not plateau in the bulk region. Including long range electrostatics, through Ewald summation (Full) or through the PI-LMF potential, corrects this behavior, resulting in less orientational ordering of water at the interface and the expected plateau of the electrostatic potential in the bulk region.

The long range part of the LMF potential satisfies a Poisson equation, \[\nabla^{2}\mathcal{V}_{\mathrm{R1}}(\mathbf{r})=-\frac{4\pi}{\varepsilon} \rho^{q\sigma}(\mathbf{r}), \tag{36}\] involving the Gaussian-smoothed charge density [34] \[\rho^{q\sigma}(\mathbf{r})=\int d\mathbf{r}^{\prime}\rho^{q}(\mathbf{r}^{ \prime})\rho_{G}(\left|\mathbf{r}-\mathbf{r}^{\prime}\right|;\sigma), \tag{37}\] which is shown in Fig. 1c for the Full, GT, and LMF systems. The Gaussian-smoothed charge density therefore represents the portion of the charge density that is relevant for the long range response of water, and one can think of \(\rho^{q\sigma}(\mathbf{r})\) as a macroscopic charge density [54; 55; 56]. The Full \(\rho^{q\sigma}(z)\) displays a (macroscopic) dipole layer at the interface. In contrast, GT water overpolarizes and produces a large positive peak in \(\rho^{q\sigma}(z)\) at the interface. Moreover, \(\rho^{q\sigma}(z)\) does not go to zero in the bulk, again due to overpolarization in the GT system. The PI-LMF potential corrects this overpolarization and results in a \(\rho^{q\sigma}(z)\) consistent with the dipole layer produced in the full system. This indicates that the PI-LMF system reproduces the long range behavior of confined quantum water.

Figure 1: (a) Charge density, (b) Electrostatic potential, \(\Phi(z)\), and (c) Gaussian smoothed charge density for the full, Gaussian-truncated (GT), and PI-LMF mimic systems for \(P=32\) beads. Results are shown for the left wall only.

In addition to the PI-LMF solution to the LMF equation, I described three approximate solutions to the LMF equation: the LMF-RPC approximation, the centroid approximation, and the Feynman-Kleinert approximation (FK-LMF). The results obtained using these approximations are compared to the PI-LMF results in Fig. 2. The charge densities agree for all systems but the FK approximation, which produces slightly smaller first peaks. The discrepancy in the FK results stems from the inability of the linear response approximation to sufficiently shift the first peaks in the atomic densities; direct simulation of GT water in the presence of \(\mathcal{V}_{\mathrm{R}}^{\mathrm{FK}}(z)\) produces a charge density in agreement with the others. Despite this difference in the results of the FK approximation, the electrostatic potentials and smoothed charge densities all agree, indicating that these approximations for the long range interactions are reasonable for water at ambient conditions. This good agreement among the various approximations is a consequence of washing out molecular-scale details in the coarse-graining inherent within the LMF approach.

Figure 2: (a) Charge density, (b) Electrostatic potential, \(\Phi(z)\), (c) Gaussian smoothed charge density, and (d) gradient of the LMF potential (negative of force) for the various methods of solving the LMF equation: the “exact” PI-LMF, LMF-RPC, the centroid approximation, and the Feynman-Kleinert approximation (FK-LMF), all for \(P=32\) beads. Results are shown for the left wall only. Vertical dashed line shows the location of the wall, \(z=0\).

However, one might expect that these approximations could break down when the thermal radius of the quantum particles is comparable to or greater than the smoothing length, \(\sigma\), as is the case for light nuclei at low temperatures [57; 27], or for light particles like electron or hole quasiparticles [58; 59]. In cases like this, approximating a quantum particle by its centroid could prove inaccurate, especially if the ring polymers are aspherical.
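For reference, Eqs. 35 and 37 amount to simple one-dimensional operations on a binned charge density profile. A minimal sketch is below, assuming a uniform \(z\) grid; prefactors are omitted exactly as in Eq. 35 as written, so the units are illustrative.

```python
import numpy as np

def electrostatic_potential(rho_q, dz):
    """Phi(z) of Eq. 35: two nested cumulative integrals of the planar
    charge density profile (physical prefactors omitted as in the text)."""
    inner = np.cumsum(rho_q) * dz     # integral of rho^q up to z'
    return -np.cumsum(inner) * dz     # outer integral, with overall minus

def smoothed_charge_density(z, rho_q, sigma=4.5):
    """rho^{q sigma}(z) of Eq. 37, using the planar-averaged (1D) form
    of the smoothing Gaussian of width sigma."""
    dz = z[1] - z[0]
    zk = z - z[(len(z) - 1) // 2]
    kernel = np.exp(-(zk / sigma) ** 2) / (sigma * np.sqrt(np.pi))
    return dz * np.convolve(rho_q, kernel, mode="same")
```

A plateau of `electrostatic_potential` in the bulk region, together with a \(\rho^{q\sigma}(z)\) that decays to zero there, are the signatures of proper dielectric screening discussed above.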
### Quantum Water Works Harder for Dipolar Screening

The impact of NQEs on long range electrostatics can be assessed by examining the extent to which the quantum and classical GT systems deviate from our expectations for the full system. The charge densities for the quantum and classical GT systems, shown in Fig. 3a, are qualitatively similar but display small differences near the wall. In particular, the magnitudes of the first two peaks are slightly larger for the quantum GT model. These small differences between the charge densities lead to large differences when integrated to obtain the polarization potential, as shown in Fig. 3b. The potential in the bulk region of the quantum system is significantly larger than that of the classical one. However, the differences in the electric fields (derivative of the potential) are localized near the interface, and the fields are similar in the bulk for both quantum and classical systems.

The overpolarization of interfacial water results from the lack of long range forces that reorient molecules near the wall. The LMF potential provides these long range forces and corrects interfacial water structure. Because quantum GT water overpolarizes more than classical GT water, the forces needed to reorient quantum GT water should be larger. Indeed, the corresponding LMF potentials and forces are larger in quantum GT water due to NQEs, Fig. 3c,d. The LMF, \(\mathcal{V}_{\text{R}}(z)\), exhibits a larger change across the interface for the quantum systems at all levels of approximation. Moreover, the LMF force, \(-\nabla\mathcal{V}_{\text{R}}(\mathbf{r})\), is larger in magnitude for the quantum systems. This suggests that the forces required to achieve proper dielectric screening are larger in quantum systems than their classical counterparts, by several \(k_{\text{B}}T\) for water at room temperature, which may be anticipated from the larger zero point energy of the quantized nuclei.

Figure 3: (a) The charge density and (b) polarization potential for the classical (\(P=1\)) and quantum (\(P=32\)) GT systems. The corresponding (c) LMF potential and (d) its gradient (negative of force) for the same two systems. Results are shown for the left wall only.

### Computational Efficiency

Most of the results above were obtained using linear response theory and simulations of a short range system. However, simulations can be performed in the presence of the LMF field to also obtain accurate predictions of the structure of nonuniform fluids.
Compared to typical lattice summation techniques for evaluating long range electrostatics, the LMF-RPC scheme reduces the cost significantly in two ways. First, RPC reduces the number of sites at which long range interactions need to be evaluated -- here, from 32 to 1. Second, LMF theory replaces \(N^{2}\) two-body interactions with \(N\) one-body interactions. This reduction in scaling is beneficial when simulating large numbers of molecules characteristic of biological and materials systems. To illustrate the increased efficiency of the LMF-RPC approach in comparison to Ewald summation-based approaches for evaluating long range electrostatic interactions in path integral models, I evaluated the time required to perform a PIMD time step as a function of the number of water molecules in the system, shown in Fig. 4. The number of water molecules was varied by replicating the simulation box in the lateral directions (\(x\) and \(y\)). The computational time was evaluated for the LMF-RPC, particle-particle-particle mesh (PPPM) Ewald, and GT systems with \(P=32\) using a single 2.3 GHz Intel Xeon W core of a 2017 iMac Pro. For the small system size used in the majority of the text (1024 molecules), the simulation time is similar for all approaches. However, as the size of the system grows, the increased efficiency of the LMF-based approaches becomes apparent. Moreover, the timings for the GT system are nearly identical to the LMF systems, indicating that the evaluation of the (one-body) LMF potential requires minimal overhead and the calculation is dominated by the evaluation of short range pairwise interactions. Of course, this means that there is negligible speedup gained by using RPC with LMF theory for the system sizes studied here, but differences may appear for systems with large numbers of beads. In contrast to the LMF results, the PPPM timings are slowed by the evaluation of the long range electrostatic interactions. This suggests that LMF-based approaches can drastically reduce the computational cost of PIMD calculations in large-scale molecular systems.

Figure 4: Computation time per MD simulation time step as a function of the number of water molecules in the system. Lines are linear fits to the data. Error bars are smaller than the symbol size.

## IV Conclusions

In this work, I have examined nuclear quantum effects on long range electrostatic properties of confined water. To do so, I demonstrated that LMF theory can be used to efficiently and accurately account for long range electrostatics in path integral simulations of nonuniform liquids. Moreover, RPC-based approximations were introduced that leverage the complementary ideas that underlie the separation of short and long range interactions in both LMF theory and RPC -- long range interactions are slowly varying over relevant molecular length scales. I expect that the LMF-RPC scheme will be useful for modeling NQEs in large systems with many light particles at low temperatures (many beads). The LMF-RPC scheme can be readily combined with developments in LMF theory to evaluate NQEs on free energy differences [35; 48]. The general ideas presented here may also be valuable for modeling NQEs with the symmetry preserving mean field (SPMF) theory, which replaces pairwise long range electrostatics with a symmetry-dependent effective field in each configuration [60; 36]. The LMF-RPC approach may be particularly powerful when combined with the short solvent model of Gao _et al._ [37; 38] for molecular assembly.
In the short solvent model, long range solvent-solvent and solute-solvent interactions are truncated everywhere, and the averaged effects of all long range interactions are accounted for with effective solute-solute interactions. Therefore, the only possible long range interactions are between solutes, greatly reducing the number of charged sites and the associated cost of evaluating long range interactions. Combining the short solvent model with RPC will result in a model where long range interactions only need to be evaluated for the solute centroids. All other interactions are short range. The resulting SSM-RPC scheme could be of great use for modeling NQEs in self-assembly processes.

The results reported here use empirical force fields to represent intra- and intermolecular interactions, which neglect coupling of electronic polarization to nuclear quantum fluctuations. These effects can be taken into account with ab initio simulations. Previous work has shown that RPC can significantly speed up ab initio simulations by using a cheap approach to evaluate interactions on all beads, e.g. density functional tight binding (DFTB) methods, while higher level ab initio methods are evaluated only on the centroid and the resulting bead-bead interactions are obtained from the differences between these two levels of theory [3; 61]. Such an approach does not readily lend itself to treatment with LMF theory. However, if LMF theory or similar approaches can be extended to ab initio models, this would facilitate the use of the LMF-RPC scheme in ab initio path integral simulations. An alternative to costly ab initio simulations is to use machine learning approaches to develop neural network potentials that can produce ab initio accuracy with much smaller cost [62; 63; 64]. Traditional neural network potentials lack a good description of long range interactions [65], but recent developments include some description of long range electrostatics [65; 66; 67; 68; 69; 70]. Of particular interest are neural network potentials that are informed by LMF ideas [68; 69; 70], like the self-consistent field neural network (SCFNN) [69; 70]. These networks focus on training short range GT interactions separately from long range interactions. I anticipate that many of these neural network potentials could be combined with the LMF-RPC scheme to treat the averaged effects of long range interactions in path integral and ring polymer MD simulations with ab initio accuracy.

## V Simulation details

Path integral molecular dynamics (PIMD) simulations were performed using the i-PI package [71] interfaced with the LAMMPS software package [72], modified to include the truncated potential \(v_{0}(r)\) and the LMF potential \(\mathcal{V}_{\text{R}}(z)\). Equations of motion were integrated in the normal mode representation using the Cayley integration scheme with a timestep of 0.5 fs [73]. All simulations were performed in the canonical (NVT) ensemble at a constant temperature of 298 K maintained using a stochastic velocity rescaling thermostat [74], with 1024 water molecules in a \(27.72\times 27.72\times 150.00\) A\({}^{3}\) simulation cell. All simulations employed the flexible quantum SPC water model of Voth and coworkers [75]. Lennard-Jones and short range Coulomb interactions were truncated at a distance of 9.8 A.
Idealized hydrophobic walls were each represented with a 9-3 Lennard-Jones potential, \[U_{\text{w}}(z)=\varepsilon_{\text{w}}\left[\frac{2}{15}\left(\frac{\sigma_{ \text{w}}}{z}\right)^{9}-\left(\frac{\sigma_{\text{w}}}{z}\right)^{3}\right], \tag{38}\] with \(\sigma_{\text{w}}=3.461\) A and \(\varepsilon_{\text{w}}=0.43875491\) kcal/mol. Wall-water interactions are cut off at a distance of 30 A. Walls are positioned at \(z=0\) and \(z=43.06\) A, in accord with previous work [43]. All ring polymers used \(P=32\) beads for each particle in the system, which has been shown to be sufficiently converged [30]. The modified LAMMPS source code is available at github.com/remsing-group/lmf-rpc/.

###### Acknowledgements.

This work is supported by the National Aeronautics and Space Administration under grant number 80NSSC20K0609, issued through the NASA Exobiology Program. I acknowledge the Office of Advanced Research Computing (OARC) at Rutgers, The State University of New Jersey for providing access to the Amarel cluster and associated research computing resources that have contributed to some of the results reported here. I thank Atul Thakur for helpful comments on the manuscript and D. Rodman for inspiration on the figures.
2309.14400
DECORAIT -- DECentralized Opt-in/out Registry for AI Training
We present DECORAIT; a decentralized registry through which content creators may assert their right to opt in or out of AI training as well as receive reward for their contributions. Generative AI (GenAI) enables images to be synthesized using AI models trained on vast amounts of data scraped from public sources. Model and content creators who may wish to share their work openly without sanctioning its use for training are thus presented with a data governance challenge. Further, establishing the provenance of GenAI training data is important to creatives to ensure fair recognition and reward for such use. We report a prototype of DECORAIT, which explores hierarchical clustering and a combination of on/off-chain storage to create a scalable decentralized registry to trace the provenance of GenAI training data in order to determine training consent and reward creatives who contribute that data. DECORAIT combines distributed ledger technology (DLT) with visual fingerprinting, leveraging the emerging C2PA (Coalition for Content Provenance and Authenticity) standard to create a secure, open registry through which creatives may express consent and data ownership for GenAI.
Kar Balan, Alex Black, Simon Jenni, Andrew Gilbert, Andy Parsons, John Collomosse
2023-09-25T16:19:35Z
http://arxiv.org/abs/2309.14400v1
# DECORAIT - DECentralized Opt-in/out Registry for AI Training

###### Abstract.

We present DECORAIT; a decentralized registry through which content creators may assert their right to opt in or out of AI training as well as receive reward for their contributions. Generative AI (GenAI) enables images to be synthesized using AI models trained on vast amounts of data scraped from public sources. Model and content creators who may wish to share their work openly without sanctioning its use for training are thus presented with a data governance challenge. Further, establishing the provenance of GenAI training data is important to creatives to ensure fair recognition and reward for such use. We report a prototype of DECORAIT, which explores hierarchical clustering and a combination of on/off-chain storage to create a scalable decentralized registry to trace the provenance of GenAI training data in order to determine training consent and reward creatives who contribute that data. DECORAIT combines distributed ledger technology (DLT) with visual fingerprinting, leveraging the emerging C2PA (Coalition for Content Provenance and Authenticity) standard to create a secure, open registry through which creatives may express consent and data ownership for GenAI.

Content provenance, Distributed ledger technology (DLT/Blockchain), Generative AI, Data governance.

... training data, that enable creatives to register ownership and means for payment for GenAI use as well as their consent for that use. To this end, we propose DECORAIT, a decentralized registry for GenAI training that enables creators to express consent, or otherwise, for their images to be used in AI training, as well as enabling them to receive reward when such use occurs. Our work follows emerging community trends toward centralized, commercial opt-out services. For example, _spawning.ai_ maintains lists of opted-out URL patterns (from individual links to entire domains). GenAI models can match against these lists to exclude content from training. However, a URL list may not capture all instances of a creator's content online. Moreover, scaling up multiple individually managed databases to track opt-out raises data consistency and interoperability challenges. The protocol of the future creative economy also ought to ensure the contributing creatives to GenAI can be recognized and rewarded for their creative assets when their particular content or style is identified to have contributed to specific synthetic media. DECORAIT addresses these issues through three contributions:

1. We propose a **fingerprint-based content similarity score**, followed by a **credit apportionment scheme** to match images and reward creatives for their training content most correlated with generated synthetic media.

2. A **sharded decentralized search index** using distributed ledger technology (DLT), in which a content fingerprint distilled from the image provides a key to register and robustly query opt-in/out information. We propose a hierarchical approach to scale vector search of this index and a hybrid on/off-chain approach to query processing.

3. We leverage the emerging **Coalition for Content Provenance and Authenticity (C2PA)** standard to express consent and payment preferences via cryptographically signed asset 'manifests'. These manifests are stored within a distributed file system (IPFS) and referenced by hashed URL link via the DECORAIT DLT search index.
Without loss of generality, we demonstrate DECORAIT within the pipeline of Dreambooth (2018), which enables specialization of diffusion models to generate novel renditions of a specific subject provided via exemplar training images. Dreambooth provides a suitable use case as it enables GenAI model users to ensure that the assets they intend to leverage for model personalization have been opted in for AI training. Additionally, the proposed system enables the fair recognition and reward of those contributing creatives. We could imagine a future for stock photography in which contributors receive payments not only through direct licensing (as now), but automatically via DECORAIT's ability to provide downstream recognition and persistent crediting of the contributing creators to GenAI. Fair monetary reward is encouraged via our apportioning algorithm, coupled with the transparency and auditability of crypto-currency payments processed using DLT.

## 2. Related Work

**Distributed Ledger Technology (DLT)**, colloquially 'blockchain', ensures the immutability of data distributed across many parties without requiring those parties to trust one another or any central authority (Kalal et al., 2017). While the original and dominant use is cryptocurrency tokens (_e.g._ Bitcoin (Kal et al., 2017)), emerging use cases include digital preservation (Kal et al., 2017), supply chain and media provenance (Kal et al., 2017; Kavukcu et al., 2018). DLT has been used to track ownership of media via the ERC-721 Non-Fungible Token (NFT) standard (Ball et al., 2017), although NFT lacks a rights or permissions framework (Kal et al., 2017). Recently, Ekila explored tokenized rights in NFT (Ball et al., 2017). DLT was analyzed for media integrity in ARCHANGEL (Kavukcu et al., 2018); digital records were hashed and used to tamper-proof archival records. Perceptual hashing extended ARCHANGEL from documents to images (Ball et al., 2017) and videos (Ball et al., 2017). Our work uses perceptual hashes for search, as a key to resolving image fingerprints to data on training consent. Recent advances in proof of stake and Layer 2 solutions scale DLT for improved throughput and reduced climate impact, yet scalable storage remains challenging. Peer-to-peer (p2p) distributed file-sharing technologies such as the Interplanetary File System (IPFS (Ball et al., 2017)) are used to address this.

**C2PA** is an emerging metadata standard for embedding content provenance information ('manifests') in media files ('assets') (Kal et al., 2017). Manifests are signed via a public-key pair and describe facts about asset provenance, such as who made it, how, and using which ingredient assets. These facts are called 'assertions'. C2PA initially focused on trusted media (Kavukcu et al., 2018) and journalism (Ball et al., 2017) use cases. Recently, C2PA (v1.3) described a training-mining assertion in which creators may set flags to opt in or out of GenAI training, which we leverage in our work. Unfortunately, C2PA metadata is stripped by non-compliant platforms (_e.g._ social media) or attackers. Therefore, we use perceptual hashing to match content to manifests.

**Content Fingerprinting** identifies content robustly in the presence of degradation or rendition (format, quality, or resolution change) and minor manipulation. Perceptual hashing (Ball et al., 2017; Ball et al., 2017; Ball et al., 2017) and watermarking (Ball et al., 2017; Ball et al., 2017) have been used to match content.
Fingerprinting has also been used to detect and attribute images to the GenAI models that made them (Kavukcu et al., 2018).

**Diffusion models** lie at the foundation of most recent advances in GenAI (Kavukcu et al., 2018; Kavukcu et al., 2018; Kavukcu et al., 2018; Kavukcu et al., 2018). Such models are commonly trained on millions or even billions of images to gain the ability to synthesize diverse and high-quality images consistently. Diffusion models have shown substantially superior performance in comparison to GANs (Dosov et al., 2018). However, they have also been shown to memorize content and style from training data to a higher degree than GANs (Kavukcu et al., 2018), a phenomenon attributed to the presence of duplicated image data (Kavukcu et al., 2018; Kavukcu et al., 2018) within the training data. (Kavukcu et al., 2018) showed that content and style memorization is an even greater concern specifically in text-conditioned diffusion models, due to duplicated captions within the training data, with data replication not commonly occurring in unconditional diffusion models. This further accentuates the need to involve creatives and obtain consent to use their creative content in the GenAI training pipeline. The present work lays the groundwork for such a system, querying and registering the creatives' opt-in or out decision on GenAI training, as well as offering a pipeline to reward creatives for using those assets in GenAI.

**Model personalization** methods are techniques which enable diffusion models to be customized to synthesize novel renditions of a specific subject in different contexts. Recently, both training-free adaptation (Kavukcu et al., 2018) and fine-tuning (Kavukcu et al., 2018; Kavukcu et al., 2018) have been explored to perform customization to object instances. In this work, we utilize the Dreambooth (Kavukcu et al., 2018) technique for model personalization, which fine-tunes a pre-trained text-to-image diffusion model - the base model - using a small set of 'concept' images depicting a specific subject. The subject is thus embedded in the output domain of the model, which learns to bind it to a unique identifier (token) that can then be used as part of the prompt to synthesize the subject in new and diverse contexts. We use Dreambooth to demonstrate the DECORAIT system, aiding in the training pipeline by identifying opted-in assets from a stock photography website available to train a personalized instance of a diffusion model.

## 3. Tracing and Describing Image Provenance

We begin by describing how images are matched to trace visual provenance. We use this approach to 'fingerprint' training images in order to robustly match to entries in the DECORAIT registry, thereby accessing data on consent status and creator wallet addresses which are encoded via the C2PA open standard (subsec. 3.2). A second pair-wise model enables both verification of such matches and correlation between synthetic and training data for credit apportionment.

### Fingerprinting and match verification

In order to reliably match training images at scale, we employ two modules: first, a contrastively trained model to extract compact embeddings for measuring image similarity, and second, a model which classifies whether the closest matching images in that embedding space are true matches. The latter is motivated by the difficulty of thresholding image similarity distances at scale whilst retaining practical accuracy levels.
The classifier probability serves both as a match verification check and a score to drive credit apportionment.

#### 3.1.1. **Fingerprinting Model**

We adapt the fingerprinting technique described in (Bordes et al., 2017) to obtain compact embeddings of the images within the registry's corpus, allowing robust visual content attribution and search. The resulting fingerprint is a compact embedding (256-D) of a CNN, contrastively trained to be discriminative of image content whilst robust to image degradations and manipulations, to model content transformations common as images are shared online. The model is trained through a contrastive learning objective (Dosov et al., 2016). Let \(\phi_{i}=E(x_{i})\in\mathbb{R}^{256}\) be the feature vector obtained as the output of a ResNet-50 encoder for an image \(x_{i}\) and \(\hat{\phi}_{i}\) represent an embedding of a differently augmented version of \(x_{i}\). The training objective is given by \[\mathcal{L}_{C}=-\sum_{i\in\mathcal{B}}\log\left(\frac{d\left(\phi_{i},\hat{ \phi}_{i}\right)}{d\left(\phi_{i},\hat{\phi}_{i}\right)+\sum_{j\neq i\in \mathcal{B}}d\left(\phi_{i},\phi_{j}\right)}\right), \tag{1}\] where \(d(a,b)\coloneqq\exp\left(\frac{1}{\lambda}\frac{a^{T}b}{\|a\|_{2}\|b\|_{2}}\right)\) measures the similarity between the feature vectors \(a\) and \(b\), and \(\mathcal{B}\) is a large randomly sampled training mini-batch (Bordes et al., 2017). In terms of data augmentation, in addition to the typical techniques used in contrastive learning such as colour jittering and random cropping, we consider minor manipulations, benign modifications and degradations of image content due to noise, format change and recompression, resolution change (resize), and several other degradation manipulations studied in (Kang et al., 2018). This is because images may be reshared online and be subject to many such transformations and renditions, and we wish to match regardless of these.
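A compact PyTorch rendering of Eq. (1) is sketched below for clarity; the batch layout and the temperature value are illustrative assumptions, not the paper's training configuration.

```python
import torch
import torch.nn.functional as F

def fingerprint_contrastive_loss(phi, phi_hat, lam=0.07):
    """Eq. (1): phi and phi_hat are (B, 256) embeddings of two differently
    augmented views of the same B images; lam is the temperature
    (value illustrative)."""
    phi = F.normalize(phi, dim=1)
    phi_hat = F.normalize(phi_hat, dim=1)
    pos = torch.exp((phi * phi_hat).sum(dim=1) / lam)  # d(phi_i, phi_hat_i)
    sim = torch.exp(phi @ phi.T / lam)                 # d(phi_i, phi_j)
    neg = sim.sum(dim=1) - sim.diag()                  # sum over j != i
    return -torch.log(pos / (pos + neg)).sum()
```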
#### 3.1.2. **Verification and Apportionment Model**

Provided a short-list of the top-K candidate matches from the previous fingerprinting step, we verify image matches through an additional pair-wise comparison between the query image and each candidate match retrieved. The spatial feature maps derived from the fingerprint model are used to compare the images as follows. Let \(F_{q}\in\mathbb{R}^{H\times W\times D}\) be the feature map for a query image \(x_{q}\) and let \(\{F_{i}\}_{i=1}^{k}\) be the \(k\) corresponding retrieval feature maps. Each feature map is processed with a \(1\times 1\) convolution to reduce the dimensionality to \(\frac{D}{4}\) and then numerous pooled descriptors from a set of 2D feature map windows \(\mathcal{W}\subset[1,H]\times[1,W]\) are extracted, similar to R-MAC (Kang et al., 2018). Let \(f_{w}^{q}\in\mathbb{R}^{\frac{D}{4}}\) denote the GeM-pooled (Kang et al., 2018) and unit-normalized feature vector for a window \(w\in\mathcal{W}\) and feature map \(F_{q}\). In contrast to (Kang et al., 2018), the window-pooled feature vectors are not averaged, but collected as: \[\hat{F}_{q}=[f_{w_{1}}^{q},\ldots,f_{w_{|\mathcal{W}|}}^{q}]\in\mathbb{R}^{|\mathcal{W}| \times\frac{D}{4}}, \tag{2}\] where \(w_{i}\in\mathcal{W}\) and the number of windows is \(|\mathcal{W}|=55\) in practice. The feature correlation matrix is then computed as: \[C_{qi}=\hat{F}_{q}\hat{F}_{i}^{T}\in\mathbb{R}^{|\mathcal{W}|\times|\mathcal{W}|}. \tag{3}\] These feature correlations are then flattened and fed to a 3-layer MLP, which outputs a similarity score between query \(q\) and retrieval \(i\). To make the model symmetric w.r.t. its inputs, the match score between images \(x_{q}\) and \(x_{i}\) is defined as \[\text{apportion}(x_{q},x_{i})=\sigma(\text{MLP}(C_{qi})+\text{MLP}(C_{iq})), \tag{4}\] where \(\sigma\) represents a sigmoid activation. The model is illustrated in Fig. 2. To train the model, positive example pairs are built via a strong data augmentation protocol, similar to the data augmentation step in the fingerprinter model training. This protocol includes colour jittering, blurring, random resize cropping, and random rotations. A hard negative mining approach is used to generate challenging negatives. For the sampling of negatives, the global average-pooled feature maps of query and queued examples are compared via cosine similarity. Given pairs of true and false matches, the model is trained with a standard binary cross-entropy loss. During training, the backbone feature extractor from the fingerprinter model is frozen.

Figure 2. Match Verification Model. Two images are compared at multiple scales to robustly find (partial) matches. The model extracts multiple aggregated feature vectors from the two feature maps corresponding to numerous image patches of different sizes and positions. These features (collected in \(\hat{F}_{q}\) and \(\hat{F}_{i}\)) are then used to compute the feature correlation matrix \(C_{qi}\), which is fed to an MLP to compute a final score.

### Encoding consent and ownership

The Coalition for Content Provenance and Authenticity (C2PA) standard aims to aid internet users' trust decisions about digital assets they might come across on platforms such as social media or news websites. Recent work also employs C2PA as a tool to encode provenance information within synthetically generated media, including within its metadata details about the model used to create it, as well as its training data (Bianchi et al., 2017). A 'manifest' is a data packet that may be bound to digital assets at creation time or post-factum. This manifest embeds facts about the provenance of a digital asset within its metadata. These facts are referred to as 'assertions'. They may include information such as who created the asset, how it was made, what hardware and software solutions aided in its creation, and any edits it may have undergone since its creation. This data is cryptographically signed to prevent tampering. Signing C2PA manifests requires that the signer uses their private key and public certificates, following the Public Key Infrastructure (PKI). This assures that the consumer makes trust decisions about the asset based on the identity of the manifest signer. A certification authority (CA) conducts real-world verification to ensure signing credentials are only issued to trusted, non-malicious actors (Krishnan et al., 2017). Additionally, C2PA manifests may bear information about other "ingredient" assets used in the creation process. These ingredients may point at assets, each bearing its own C2PA manifest describing its provenance. As such, C2PA encodes a graph structure with the root at the current asset and branching out to its ingredient assets.
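To fix ideas, the structure below sketches, as a plain Python dictionary rather than the normative C2PA serialization (real manifests are CBOR/JUMBF payloads signed per the specification), the provenance facts DECORAIT relies on: consent flags of the kind introduced in C2PA v1.3 (discussed next), a creator wallet address, and 'ingredient' links forming the provenance graph. All labels and values here are illustrative.

```python
# Illustrative manifest structure only; not a conformant C2PA payload.
manifest = {
    "claim_generator": "decorait-prototype/0.1",       # hypothetical
    "assertions": [
        {   # training/data-mining consent flags (cf. C2PA v1.3)
            "label": "c2pa.training-mining",
            "data": {
                "data_mining": "notAllowed",
                "ai_inference": "allowed",
                "ai_generative_training": "notAllowed",
                "ai_training": "notAllowed",
            },
        },
        {   # creator's DLT wallet, used later for apportionment payments
            "label": "org.decorait.wallet",            # hypothetical label
            "data": {"address": "0x0000...creator"},
        },
    ],
    # Each ingredient points at an asset carrying its own manifest,
    # forming the provenance graph rooted at the current asset.
    "ingredients": [
        {"title": "concept-image-01.jpg", "manifest_uri": "ipfs://<CID>"},
    ],
    "signature": "<COSE signature over the claim, per PKI>",
}
```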
The C2PA standard describes this ingredient model in terms of creation of classical images (and other media assets) but we use it in DECORAIT to describe how Dreambooth models may be created from their training concept images, and how synthetic images are created from their Dreambooth model as an ingredient. Recently, C2PA (v1.3) introduced several training-mining assertions in which creators may set flags to opt in or out of GenAI training within manifests. These flags are _data_mining_, _ai_inference_, _ai_generative_training_ and _ai_training_. We leverage these flags to encode consent in DECORAIT. C2PA manifests also support the inclusion of DLT-based wallet addresses. For example, in Adobe Photoshop, any DLT wallet address linked to a user's Adobe identity may be recorded in the C2PA metadata of an exported image. In the following sections, we show how this wallet information, embedded immutably within assets at creation-time, may be leveraged to reward creatives when their images are used to train GenAI.

## 4. DECORAIT System Architecture

DECORAIT is a decentralized search index, performing _key-value_ lookups using a robust image fingerprint (subsec. 3.1.1) as the _key_. The _value_ is a URI, resolvable to a C2PA manifest indicating permission to train. A scalable solution demands: 1) persistent distributed storage of manifests; 2) a distributed and immutable lookup operated via an open model without recourse to a centralized trust. Fig. 3 provides an overview of DECORAIT, which addresses these decoupled requirements by: 1) storing manifests on IPFS, where URIs are formed using a CID - a bit-wise (SHA-256 (Dong et al., 2016)) content hash; 2) using a hybrid on/off-chain solution to create a sharded search index (subsec. 4.1). In Sec. 5, we explore empirical trade-offs in defining the boundary between on and off-chain computation for the search and the optimal level of sharding.

### Decentralized Fingerprint Index

All images within DECORAIT undergo visual fingerprinting using the approach outlined in Sec. 3.1.1 to enable large-scale retrieval of visually similar assets upon querying the registry. We adopt a hierarchical approach to shard the search index, applying k-means clustering to fingerprints computed from a representative (1M) image sample. The resulting k-centroids subdivide the fingerprint hash space into \(k\) shards. Recursive sharding is possible, but experiments focus on a single level. We shard the index using \(k\) + 1 DLT smart contracts deployed on a local Ethereum test-net; one contract per each of the \(k\) clusters, plus a single entry point - the 'hero' contract - to orchestrate the sharding. The hero contract performs the k-NN assignment of fingerprints to the k-centroids, delegating operations (_e.g._ ingest, query) to be handled by the smart contract of the closest cluster (and so, shard). The contracts are implemented in Solidity, which does not support floating point math. We convert the 256-D floating point fingerprinting embeddings into integers as fixed-point (\(10^{15}\) precision), a workaround for applying ML operations on DLT (Krishnan et al., 2017).
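The sharding logic just described can be summarized off-chain as follows. This is a minimal sketch using plain k-means and squared-Euclidean nearest-centroid assignment, with the \(10^{15}\) fixed-point scale taken from the text; all function names are hypothetical.

```python
import numpy as np

FXP = 10**15  # fixed-point scale; Solidity lacks floating point math

def to_fixed_point(fingerprint):
    """256-D float fingerprint -> integer array for on-chain use."""
    return (np.asarray(fingerprint) * FXP).astype(np.int64)

def fit_shards(sample, k=25, iters=20, seed=0):
    """Plain k-means over a representative fingerprint sample; the k
    centroids define the shards (one cluster contract per centroid)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(sample, dtype=np.float64)
    C = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d2 = (X**2).sum(1)[:, None] - 2.0 * X @ C.T + (C**2).sum(1)[None]
        assign = d2.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                C[j] = X[assign == j].mean(axis=0)
    return C

def shard_of(fingerprint, centroids):
    """1-NN assignment of an ingest/query fingerprint to its shard,
    mirroring what the hero contract computes on-chain in fixed point."""
    d2 = ((centroids - np.asarray(fingerprint)) ** 2).sum(axis=1)
    return int(d2.argmin())
```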
### Hybrid on/off-chain variants

We explore several design choices for implementing our system, evaluating three main variants (Table 1). In particular, we explore options for persisting the key-value store (here used to map image fingerprints to manifest URIs) and on/off-chain options for implementing the shard assignment and retrieval processes.

#### 4.2.1. **Image Fingerprints and Data Storage**

DLT storage patterns commonly persist data in two main ways: 1) _in-contract_, _i.e._ within the state of a smart contract (as with NFTs), or 2) on the _event log_, a ledger of signals/exceptions emitted from smart contract code (as with cryptocurrency transactions). In our experiments, we use mnemonics prefixed E- to indicate variants using the event log and C- to indicate variants using in-contract storage. Shards are described by k-centroid data from clustering in fixed point (256 integers) form. Fingerprints are similarly represented. These 256-D data are stored as strings in the event log but may be stored in integer arrays for in-contract storage. There is cost efficiency in storing strings over integer arrays. However, there is a time cost in converting the strings for fixed point operation during ingest and query. The transaction cost implications are quantified in subsec. 5.4.

#### 4.2.2. **Shard assignment and retrieval.**

To store (ingest) or retrieve (query) a key-value pair, it is necessary to match the fingerprint to its shard via a k-NN assignment operation against the k-centroids obtained during initial clustering. In the case of queries, the retrieval is performed by matching against each key (fingerprint) stored within the key-value store of that shard. In Table 1, we use 'O' to indicate on-chain and 'F' to indicate off-chain computation for each matching operation and compare the efficiency of these variants in Sec. 5.

\begin{table} \begin{tabular}{c c c c} \hline Action & C-OOO & E-OOF & E-FOF \\ \hline Key-value storage & in contract & event log & event log \\ Shard centroid prediction on query & on-chain & on-chain & off-chain \\ Shard prediction on ingest & on-chain & on-chain & on-chain \\ Retrieval within shard & on-chain & off-chain & off-chain \\ \hline \end{tabular} \end{table} Table 1. Configuration of the three implementation variants.

Figure 3. Overview of DECORAIT. 1) Ingest (blue): An image is fingerprinted by the client, and the hash is passed to the Hero contract, which determines on-chain which of the sharded (cluster) contracts will handle the ingest. The cluster contract emits an event recording the fingerprint (key) and IPFS URI (value) of the C2PA manifest, which the client stores on IPFS. The relevant off-chain sharded index listens for on-chain updates from its respective contract. 2) Query (pink): An image is fingerprinted by the client, and k-centroid data is used to determine which index shard to query with the fingerprint (key) to obtain the C2PA manifest URI (value). The client decides on whether GenAI training is permitted using the manifest. The diagram reflects the recommended variant (E-FOF) of DECORAIT (_cf_. Table 1).

#### 4.2.3. **Smart contract interaction.**

The hero contract receives all operations and transactions in all variants. When ingesting a fingerprint to the registry, the hero contract reads it and calls the respective shard contract, which stores the key-value pair within its own contract (C-) or the event log (E-). In all cases, the smart contract performs the sharding via on-chain operations, which safeguards the integrity of the shards against inaccurate or malicious additions that could otherwise "infect" the clusters. When querying a fingerprint, the on-chain variant (C-OOO) proceeds similarly - determining the shard and delegating the retrieval process to the relevant smart contract. The retrieval is performed on-chain in this case.
In variants E-OOF and E-FOF, the query processing is partly delegated to off-chain processes. A web service is provided for each shard, which listens to the event log emitted by the smart contract of its respective shard. When submitting a query, the hero smart contract determines the appropriate shard index to call. This may be done on-chain (per the ingestion flow) or off-chain using k-centroid data from the hero contract. The relevant shard's web service performs the retrieval in both cases. Using off-chain processing mitigates computational costs as the index scales, as we now show. Fig. 3 shows the interaction of the web service and smart contracts. ### DECORAIT in the GenAI Workflow We now describe how DECORAIT integrates with the GenAI training process to determine consent, and how subsequently generated synthetic images may be traced to pay a reward to the creators who contributed that training data. #### 4.3.1. **Training Consent.** To ensure the creatives who authored the images have consented to their assets being used for training, each image is queried against the DECORAIT registry. As described in subsec. 4.1, the fingerprint embedding is used to identify the closest visual matches within the decentralized search index. These are verified using the apportionment model (subsec. 3.1.2) to obtain the closest match and so a decision on training consent for each of the images. As described in subsec. 3.2, this information is embedded within the C2PA manifest accompanying each image on the DECORAIT system. We envision a future in which stock photography sites might parse and show this consent information by default, enabling users to select only the opted-in images when sourcing data for GenAI training. #### 4.3.2. **Encoding Synthetic Image Provenance.** We further leverage the C2PA standard to encode the provenance of the newly generated synthetic image, cryptographically tying it to the "ingredient" set of concept images and GenAI model. Thus, using the C2PA manifest of the generated image it is possible to trace both the model that generated it (the personalized model, and in turn, its base model), as well as the data used to personalize it. Specifically, the C2PA "ingredient" assertion is used to indicate the image dataset as ingredients to the fine-tuned model, as well as the base model. The personalized, fine-tuned model is then listed as an ingredient within the manifests of any synthetic images it generates. Thus, the synthetic image is tied to its ingredient assets listed above. This offers a complete creation provenance chain, immutably signed at creation time. Although in the case of fine-tuning models the images from the dataset are individually included in the manifest, C2PA allows for manifests to be defined over archives of image collections for larger datasets. Figure 4 visualizes this relationship. #### 4.3.3. **Apportionment and Payment.** Given a synthetic image, DECORAIT enables credit to be assigned across training data images in order to recognize and reward contribution. The set of training image ingredients is first identified by traversing the image's provenance graph, rooted in the manifest of the synthetic image. Similar to the training stage, DECORAIT again uses visual fingerprinting to perform matching within the decentralized search index to look up the C2PA manifest of each training image -- including the DLT wallet address of each image's creator.
Credit is then assigned to each image proportional to a pair-wise score predicted by the apportionment model of subsec. 3.1.2: given a synthetically generated image \(X_{q}\in\mathbb{R}^{H\times W\times 3}\), the visual similarity of each training image \(X_{i}\) in the identified concept set is scored via eq. 4, yielding a weighting \(w_{i}\): \[w_{i}=\max\big{(}\operatorname{apportion}(X_{q},X_{i})-\lambda,0\big{)}, \tag{5}\] where \(\lambda=0.7\) is an empirically set threshold for the visual similarity. We then assign credit per image by normalizing these weights over all top-image matches in the concept set. Once the credit apportionment has been determined, payments may be processed securely and transparently over DLT. The authors' wallet addresses are extracted from the C2PA manifest associated with each image (Figure 4, left). In Sec. 5.5 we demonstrate how the DECORAIT system can be applied to a Dreambooth training pipeline, querying the registry for training consent, computing the similarity score, and apportioning credit amongst the set of concept images for any given generation. ## 5. Evaluation We evaluate the relative performance and scalability of the three variants of the DECORAIT decentralized search index: C-OOO, E-OOF, E-FOF (_cf_. Table 1), concluding on the most performant variant. We then demonstrate the proposed variant of the DECORAIT system as applied to the use case of a Dreambooth training pipeline and demonstrate querying the registry, resolving to a decision on training consent of the images within the set of concept images, followed by processing payments using DLT based on our apportionment algorithm. ### Experimental Setup We evaluate using the LAION400M dataset (Lou et al., 2017), comprising image-text pairs crawled from publicly available web pages. LAION400M is extensively used to train GenAI models. For our experiments, we sample a training corpus of 1M images and sign these with C2PA manifests setting the ai_generative_training, data_mining, and ai_training flags to 'not allowed' to signify that the author has opted out of those images being used to train GenAI models. The evaluation uses up to 1000 query images randomly sampled from the corpus, to which random augmentations are applied. The data augmentation process follows (Beng et al., 2017). It aims to mimic the perturbations an image may suffer from repeated use, upload, download, and compression on the internet (_e.g._ noise and changes in resolution, quality, and format). In addition, we form a second query set of 100 unperturbed images. Lastly, we demonstrate the proposed DECORAIT system variant (E-FOF) within a Dreambooth model specialization pipeline. ### Evaluating Accuracy vs Sharding We evaluate the lookup's accuracy as a function of sharding (cluster count \(k\)) while maintaining a constant corpus size. The accuracy is agnostic to the on/off-chain implementation of storage and query lookup, but the performance (query speed) varies significantly. Results are reported in Table 2 for 1000 queries. There is a trend toward slightly reduced accuracy as sharding (\(k\)) increases, due to the risk of heavy perturbations mis-assigning image fingerprints to adjacent shards. When no perturbation is present, the system performs with 100% accuracy for all shard counts. Yet, increasing sharding will reduce retrieval time (see below). On this basis, we select \(k=25\) as an appropriate sharding trade-off for the remainder of our experiments.
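We note in passing that the credit rule of eq. (5), together with the normalization step described in subsec. 4.3.3, reduces to a few lines of code. In the minimal sketch below, `apportion` stands in for the fingerprint-plus-classifier scorer of subsec. 3.1.2, which is assumed rather than reproduced:

```python
def credit_weights(query_img, concept_imgs, apportion, lam=0.7):
    """Threshold pair-wise similarity scores (eq. 5) and normalize them so
    the credit assigned over the matched concept set sums to one."""
    raw = [max(apportion(query_img, x) - lam, 0.0) for x in concept_imgs]
    total = sum(raw)
    # If no image clears the threshold, no credit is apportioned.
    return [w / total for w in raw] if total > 0 else [0.0] * len(raw)
```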
### Evaluating Performance vs Sharding Retrieval speed varies significantly for each of our three variants and comprises two processes: closest centroid (shard) prediction and retrieval within the shard. We evaluate the speed of the nearest centroid prediction as a function of shard count (\(k\)). The number of distance computations (between the query embedding and each cluster centroid) scales linearly with \(k\) (Table 2), and this becomes prohibitive (several seconds) for high-frequency transactions (queries) at \(k=25\), though acceptable for bulk ingestion. This suggests that variant patterns x-Oxx are not scalable at query time. \begin{table} \begin{tabular}{c c c c} \hline \hline Clusters & Accuracy & Cluster Prediction & Retrieval Time \\ (k) & (\%) & Time (on-chain) (s) & (off-chain) (ms) \\ \hline 1 (Baseline) & 92.3 & - & 46.157 \\ 15 & 89.1 & 6.5 & 3.612 \\ 25 & 89.1 & 10.3 & 2.461 \\ 50 & 88.1 & 20.6 & 1.306 \\ 100 & 87.3 & 35.2 & 0.721 \\ 200 & 86.5 & 59.9 & 0.413 \\ 500 & 86 & 113 & 0.284 \\ 750 & 87.4 & 181.8 & 0.271 \\ 1000 & 86.4 & 272.9 & 0.249 \\ \hline \hline \end{tabular} \end{table} Table 2. Evaluating accuracy vs. shard count (\(k\)) for a 0.5M corpus size. The performance of on-chain shard prediction and within-shard retrieval is studied for E-OOF. Shard prediction time increases, but retrieval time decreases, as \(k\) increases. Figure 4. Provenance graph of synthetic media, as may be encoded via C2PA manifests. Left: starting from the generated image, the specialized Dreambooth model is listed as ingredient in its C2PA manifest. In turn, the Dreambooth model links to the specialization set of images and the base text-to-image Stable Diffusion model, which then may list as ingredient an archive of its entire training image corpus. Right: example JSON C2PA manifest accompanying a synthetic image, with highlighted ingredient and DLT wallet address assertions (the latter using the schema of a commercial image editor). Further, we evaluate the speed and accuracy of shard prediction and image retrieval as a function of our system's image corpus size for variants C-xxx and E-xxx. C-OOO stores the data and executes the lookup on-chain. In contrast, E-OOF/FOF emit the image data as events on the blockchain and perform image lookup, retrieval, and verification off-chain. Table 3 shows for C-OOO that the on-chain retrieval speed drops significantly as corpus size increases, suggesting C-OOO is unfit for GenAI contexts with large amounts of data. Table 4 shows that E-FOF maintains high retrieval accuracy as corpus size increases, with an average retrieval speed of just over 4 ms for a corpus size of 1M images. Tables 3 and 4 were measured for 500 queries. Further, we find that ingesting images for the system's initial setup takes an average of 683.2 ms per image in C-OOO, whereas E-FOF significantly improves speed, requiring an average of only 81.5 ms per image. We conclude that E-FOF exhibits scalability with corpus size and shard count, leading us to recommend variant E-FOF for the GenAI training opt-in/out task. ### Evaluating Cost Transaction cost is a consideration in scaling DLT systems. C-OOO is significantly more costly than E-OOF/FOF. Ingesting images costs, on average, 0.9M gas/image for C-OOO and, in comparison, only 0.2M gas/image for E-xOx variants. Similarly, when adding an image, a user would pay, on average, 19M gas/image for C-OOO but only 15M gas/image for E-xOx variants.
Projecting the fingerprint embedding space onto a lower-dimensional space using principal component analysis can further reduce these costs but does not alter the trend. The cost factor reinforces our design recommendation to use the DLT event log rather than in-contract storage for the key-value data. ### DECORAIT applied to Dreambooth Using the recommended E-FOF variant of the system, we demonstrate DECORAIT in a real-world scenario by specializing a Stable Diffusion model using the Dreambooth (Zhu et al., 2017) method to synthesize renditions of a specific subject in new contexts. Initially, a set of concept images is purchased from a popular stock photography website, which can be viewed on the left side of Fig. 5. Unfortunately, their delivery is not accompanied by a C2PA manifest; therefore, training consent cannot be immediately determined. The DECORAIT system is then queried to determine training consent across the set of concept images, by matching the images to their corresponding images within the registry. The assets within the registry are accompanied by C2PA manifests, which detail the author's choice of whether to allow GenAI training using that asset. The query to the DECORAIT registry reveals that, for several of our chosen images, the creative has opted out of GenAI training. Fig. 1 pictures the effect differing training data can have on the resulting model and the synthetic media it is able to generate, especially when a subset of the chosen concept images has been opted out of GenAI training. Once the model is trained using the opted-in images and following the Dreambooth method, we encode this provenance information within the manifest of both the resulting model and the generated synthetic image. An example provenance graph is pictured in Fig. 4, and we follow the same structure in this example. The "ingredient" feature of the C2PA standard is leveraged in order to reference the resulting personalized model as the ingredient asset of the generated synthetic image. Within the personalized model's manifest, we encode as ingredients both the set of concept images it was trained on, as well as the base text-to-image Stable Diffusion model which was fine-tuned in order to create the personalized model. The base model may include within its manifest an archive detailing its entire training corpus of ingredient images. Further, we apply the apportionment algorithm in order to reward the contributing creatives. The process starts from the C2PA manifest of the synthetic image, tracing the provenance graph in order to identify the personalized model which created it and ultimately its training images. Then, the apportionment accumulates prediction scores using the fingerprint and second-stage classifier model for each concept image the model was specialized on. The wallet addresses belonging to the creatives who authored the training images are identified by analyzing the C2PA manifests of those images. Lastly, payments are processed for each contributing creative, with currency sent directly to their wallet address through DLT, as pictured on the right side of Fig. 5. The transaction confirmation is also pictured.
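A condensed sketch of this apportionment-and-payment flow is given below. The manifest dictionary layout and the `send_payment` stub are illustrative assumptions rather than actual C2PA or DLT APIs:

```python
def trace_concept_images(synthetic_manifest: dict) -> list:
    """Walk the C2PA ingredient chain: synthetic image -> personalized model
    -> the concept images the model was specialized on."""
    model = next(i for i in synthetic_manifest["ingredients"] if i["kind"] == "model")
    return [i for i in model["ingredients"] if i["kind"] == "image"]

def send_payment(wallet: str, amount: float) -> None:
    """Stub for a DLT transfer; a real deployment would submit a signed transaction."""
    print(f"pay {amount:.6f} to {wallet}")

def reward_creators(synthetic_manifest: dict, weights: list, budget: float) -> None:
    """Split a payment budget across creators according to the apportioned weights."""
    for image, w in zip(trace_concept_images(synthetic_manifest), weights):
        # Wallet addresses are read from each training image's own C2PA manifest.
        send_payment(image["wallet_address"], w * budget)
```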
Thus, we have demonstrated an end-to-end pipeline which included ethically building a dataset of assets which have been opted in for GenAI training, successfully avoiding copyright infringement, personalizing a generative diffusion model, as well as analyzing the resulting synthetic media and running our proposed apportioning algorithm in order to recognize and reward the contributing creatives, enabling near-instant processing of royalty-like payments using DLT. Figure 5. DECORAIT and Dreambooth pipeline including registry querying and model personalization flow. The Dreambooth model is specialized using the 3 opted-in images of a car, and the proposed apportionment algorithm is applied across the image corpus. The red cross indicates images which have been opted out according to the DECORAIT registry. The resulting apportionment conducted on the generated synthetic image from the experiment described in Sec. 5.5 is shown. The DLT wallet addresses of the three authors of the images are identified using the accompanying C2PA manifests. Payment is then conducted automatically, securely, and transparently using DLT, and one transaction's confirmation is pictured. \begin{table} \begin{tabular}{c c c c} \hline Corpus & Accuracy & Accuracy & Cluster prediction \& KNN \\ & perturbed (\%) & unperturbed (\%) & search time (on-chain) (s) \\ \hline 500 & 91.2 & 100 & 10.58 \\ 1000 & 92.4 & 100 & 19.72 \\ 5000 & 90.6 & 100 & 142.13 \\ 12500 & 90.8 & 100 & 295.38 \\ \hline \end{tabular} \end{table} Table 3. Evaluating in-contract storage (C-OOO) for accuracy and speed as corpus size increases. Shard count \(k=25\). Accuracy is good, but speed is poor relative to event-log variants. \begin{table} \begin{tabular}{c c c c} \hline Corpus & Accuracy & Accuracy & Cluster prediction \& KNN \\ (x\(10^{3}\)) & perturbed (\%) & unperturbed (\%) & search time (off-chain) (ms) \\ \hline 100 & 91.6 & 100 & 0.58466 \\ 250 & 91 & 100 & 1.50747 \\ 500 & 91.2 & 100 & 2.53011 \\ 1000 & 91.2 & 100 & 4.27562 \\ \hline \end{tabular} \end{table} Table 4. Evaluating the recommended event-log storage variant (E-FOF) for accuracy and speed as corpus size increases, showing good scalability. Shard count \(k=25\). ## 6. Conclusion We presented an end-to-end system through which content creators may assert their right to opt in or out of GenAI training, as well as receive reward for their contributions. We investigated the feasibility of a decentralized opt-in/out registry for GenAI using DLT, reaching the recommendations that 1) event-log storage is appropriate; 2) on-chain shard prediction is appropriate for ingest but not for query. We propose variant E-FOF as the most scalable solution, achieving 100% accuracy on non-augmented queries and 91.2% accuracy in the presence of augmentations, with query speeds of approximately 4 ms for a corpus of 1M images. DECORAIT employs distributed ledger technology (DLT) as a trustless registry and source of truth. The bulk of the computationally expensive operations are conducted off-chain. We proposed a fingerprinting-based content similarity score for image attribution and credit apportionment over the attribution corpus in the case of synthetic media, with payments securely processed for the contributing creatives using DLT. The system leverages the C2PA standard to track content provenance, specify GenAI training consent, and store the creator's DLT wallet addresses. We demonstrated the DECORAIT system as part of a Dreambooth GenAI model personalization pipeline, demonstrating our proposed method for recovering synthetic media provenance and apportioning credit. DECORAIT thus enables contributing creatives to receive recognition and reward when their content is used in GenAI training. Future work could incorporate the DECORAIT registry within popular GenAI data loaders and ship the apportioning flow as a library in order to drive adoption. Most notably, future efforts should focus on investigating the socio-technical drivers and challenges our system may face when deployed in the wild. Further consideration is required for its development and implementation within a sustainable business model. Equally critical is the necessity for
establishing comprehensive policies within the legal and regulatory space addressing digital rights and data sourcing for training GenAI models. These questions are likely to remain open for some time; however, ensuring the consensual use of digital assets and fair reward to contributors within the GenAI training pipeline is both a timely and urgent matter. We believe the proposed DECORAIT system is a promising first step towards a decentralized, end-to-end solution to the problem. ###### Acknowledgements. DECORAIT was supported in part by DECaDE under EPSRC Grant EP/T022485/1.
2310.00516
Enhancing Efficiency and Privacy in Memory-Based Malware Classification through Feature Selection
Malware poses a significant security risk to individuals, organizations, and critical infrastructure by compromising systems and data. Leveraging memory dumps that offer snapshots of computer memory can aid the analysis and detection of malicious content, including malware. To improve the efficacy and address privacy concerns in malware classification systems, feature selection can play a critical role as it is capable of identifying the most relevant features, thus, minimizing the amount of data fed to classifiers. In this study, we employ three feature selection approaches to identify significant features from memory content and use them with a diverse set of classifiers to enhance the performance and privacy of the classification task. Comprehensive experiments are conducted across three levels of malware classification tasks: i) binary-level benign or malware classification, ii) malware type classification (including Trojan horse, ransomware, and spyware), and iii) malware family classification within each family (with varying numbers of classes). Results demonstrate that the feature selection strategy, incorporating mutual information and other methods, enhances classifier performance for all tasks. Notably, selecting only 25\% and 50\% of input features using Mutual Information and then employing the Random Forest classifier yields the best results. Our findings reinforce the importance of feature selection for malware classification and provide valuable insights for identifying appropriate approaches. By advancing the effectiveness and privacy of malware classification systems, this research contributes to safeguarding against security threats posed by malicious software.
Salim Sazzed, Sharif Ullah
2023-09-30T22:36:31Z
http://arxiv.org/abs/2310.00516v2
# Enhancing Efficiency and Privacy in Memory-Based Malware Classification through Feature Selection ###### Abstract Malware poses a significant security risk to individuals, organizations, and critical infrastructure by compromising systems and data. Leveraging memory dumps that offer snapshots of computer memory can aid the analysis and detection of malicious content, including malware. To improve the efficacy and address privacy concerns in malware classification systems, feature selection can play a critical role as it is capable of identifying the most relevant features, thus, minimizing the amount of data fed to classifiers. In this study, we employ three feature selection approaches to identify significant features from memory content and use them with a diverse set of classifiers to enhance the performance and privacy of the classification task. Comprehensive experiments are conducted across three levels of malware classification tasks: i) binary-level benign or malware classification, ii) malware type classification (including Trojan horse, ransomware, and spyware), and iii) malware family classification within each family (with varying numbers of classes). Results demonstrate that the feature selection strategy, incorporating mutual information and other methods, enhances classifier performance for all tasks. Notably, selecting only 25% and 50% of input features using Mutual Information and then employing the Random Forest classifier yields the best results. Our findings reinforce the importance of feature selection for malware classification and provide valuable insights for identifying appropriate approaches. By advancing the effectiveness and privacy of malware classification systems, this research contributes to safeguarding against security threats posed by malicious software. ## I Introduction Malware, an abbreviation for malicious software, encompasses software or code specifically designed to inflict harm or compromise computer systems, networks, or users. It takes various forms, including viruses, worms, Trojans, ransomware, spyware, and adware [1]. The proliferation of physical communication systems such as smartphones, tablets, Internet of Things (IoT) devices, and cloud computing has led to a surge in the development and deployment of malware [2]. For instance, global malware attacks reached 5.5 billion in 2022, indicating a two percent increase compared to the previous year 1. It should be noted that malware attacks are not only a threat to the privacy and security of individuals' or organizations' data, but they may also extend to critical infrastructures and become fatal to human lives. For example, the Triton malware attack (2017), a state-sponsored attack, targeted a petrochemical plant to take over the safety instrumented systems of the plant. The aim was to kill humans by triggering an explosion or releasing toxic gas [3]. Besides, the healthcare and public health sectors are also increasingly being affected by malware attacks; in 2022 alone, a total of 210 ransomware incidents were reported in these sectors [4]. Footnote 1: Source: [https://www.statista.com/statistics/873097/malware-attacks-per-year-worldwide/](https://www.statista.com/statistics/873097/malware-attacks-per-year-worldwide/) As mentioned before, malware can take different forms and exploit particular target sectors.
To determine the level of risk and severity associated with different malware, the types or families of malware need to be identified and prioritized for proactive and reactive cyber defense. Analyzing the traits of malware plays a crucial role in understanding, identifying, and effectively countering the threats they present. Specifically, categorizing different types of malware is of utmost importance as it helps in crafting appropriate responses, attributing attacks to their sources, and developing proactive security measures to safeguard systems, networks, and users from potentially harmful consequences. Malware analysis can be categorized into three types: static, dynamic, and memory-based [5]. In static analysis, malicious files are studied without being executed, and the required features are extracted accordingly. As there is no need for execution, static methods require much fewer computational resources along with providing a fast recognition scheme. However, recent malware files use obfuscation techniques, such as dead-code insertion, register reassignment, instruction substitution, and code manipulation, to avoid static analysis detection [6]. In contrast, behavior analysis executes and monitors malicious files in a controlled environment. Unlike static analysis, behavior analysis is not vulnerable to obfuscation techniques, but it consumes excessive time and resources (i.e., memory and CPU usage) [7]. Memory analysis has been proven to be a powerful analysis technique that can effectively study malware behaviors [8]. It uses memory images to analyze information about running programs, operating systems, and the general state of the computer. Examining memory can detect process/DLL hooking techniques used by malware to appear as a legitimate process. The analysis provides accurate information about malware behaviors by extracting memory-based features that can express malware activities and characteristics. Memory-based features can also overcome some of the behavior analysis limitations, such as the single view of execution and malware's disguised behaviors during the analysis [9]. Dissecting memory dumps is an effective approach for detecting and classifying malware [10], in addition to other approaches such as static and dynamic analysis [5]. Given the sophisticated evasion techniques employed by malware to circumvent traditional security measures, examining the memory of infected systems becomes essential for gaining crucial insights. Memory analysis unveils hidden processes, injected code, or rootkit-like behaviors that may escape detection by antivirus or intrusion detection systems. Exploring the memory space occupied by running processes makes it possible to determine their objectives, functionalities, and potentially malicious actions. This valuable information contributes to enhancing cybersecurity defenses by strengthening existing security measures. Malware variants continue to evolve by using advanced obfuscation and packing techniques. These concealing techniques make malware detection and classification significantly challenging. Manually scrutinizing memory dumps to comprehend compromised processes, network connections, and code artifacts associated with malware is time-consuming and requires significant effort [11]. Apart from accuracy, timely malware detection is crucial to minimize the potential damage and impact caused by the malware.
Swift identification of malware allows for prompt actions like isolating infected systems, removing the malware, and restoring compromised data or configurations. Due to this urgency, researchers have conducted numerous studies in recent years focusing on automatic malware identification, covering both detection of malware's existence and more detailed analysis, such as type classification. These studies make use of various machine learning techniques to achieve their objectives. By leveraging machine learning, researchers aim to expedite the detection process and enhance the efficiency of malware analysis, thus bolstering overall cybersecurity efforts. Feature selection can play an important role in classification and prediction [12]. However, it remains mostly unexplored in malware detection [11]. A carefully curated feature set can improve the performance of classifiers, as shown in earlier studies from diverse domains [13]. Although in most areas the primary purpose of feature selection is to reduce computational complexity and enhance accuracy, in cybersecurity domains another crucial aspect is privacy, which can also benefit from feature selection. As data privacy is a critical component in the cybersecurity domain, limiting the amount of data utilized for training machine learning models can greatly reduce the risk of security breaches. Therefore, here, we focus on identifying influential features from memory dumps to limit the data used in the classification task. Since improper feature selection (e.g., erroneous exclusion of essential features, introduction of bias and errors) can lead to performance degradation, we explore multiple feature selection approaches. As computational time is an important element in the cybersecurity domain, we focus solely on filter-based feature selection methods, which are computationally efficient and do not require exhaustive search. We consider three filter-based feature selection methods - i) Chi-square, ii) Analysis of variance (ANOVA), and iii) Mutual information (MI) - for limiting the number of features utilized for classification. To validate the advantage of feature selection, we conduct comprehensive experiments across three levels of malware classification tasks: i) binary-level benign or malware classification, ii) malware type classification (Trojan horse, ransomware, spyware), and iii) malware family classification within each malware type (5 classes per type), using selected features with multiple classifiers. Our results demonstrate that feature selection strategies enhance the performance of classifiers across all tasks. Notably, when using the random forest (RF) classifier, selecting only 25% or 50% of the input features achieves similar or better results than incorporating all the features (100%). The MI feature selection approach with the RF classifier shows the most consistent performance. ### _Contributions_ The primary contributions of this paper can be summarized as follows: * We investigate the effectiveness of diverse feature selection approaches for malware identification and classification tasks. * We demonstrate that the MI-based feature selection approach can effectively identify significant memory features (e.g., 25%, 50%) and can obtain similar or better performance than utilizing all the input features. ## II Related Work In recent years, there has been a significant increase in malware attacks, leading to a growing focus on malware analysis as a critical research area in the cybersecurity domain.
Malware leaves distinct traces in computer memories, making memory analysis a valuable tool to gain insights into malware behavior and patterns. As our study revolves around malware detection through memory content, this section primarily delves into works related to memory analysis. Dener et al. [14] applied a number of machine learning algorithms for malware and benign type classification (2-class) from memory data (the same dataset considered in this study). The authors obtained accuracy close to 100% for this malware identification task. The authors of the MalMemAnalysis-2022 dataset [1] utilized an ensemble approach for distinguishing benign and malware classes from memory data and obtained an F1 score of around 0.99. A number of studies incorporated image processing algorithms for malware identification and classification tasks. For example, Dai et al. [9] presented a method that involves extracting a memory dump file and converting it into a grayscale image. The author(s) resized the image to a fixed size, extracted features using the histogram of gradients technique, and then employed a classification algorithm to categorize the malware. Li et al. (2019) proposed a deep learning-based approach for malware analysis. This method involves taking a memory snapshot of the system and converting it into a grayscale image. They employed a convolutional neural network (CNN) to model the system and train the deep learning model to distinguish between malicious and benign memory snapshots. The authors claimed that this approach significantly reduces analysis runtime without compromising accuracy. Bozkir et al. [15] proposed a new memory dumping and computer vision based method to detect malware in memory, even when it does not exist on the hard drive. The proposed approach captures the memory dumps of suspicious processes, which are converted to RGB images. The authors then generate feature vectors utilizing GIST and HOG (Histogram of Gradients) descriptors as well as their combination, which are then classified by machine learning classifiers. Mosli et al. [16] conducted a study to detect malware by extracting registry entries, DLLs, and APIs from memory images, and compared malware detection performance across machine learning algorithms. Later, the authors performed behavior-based automated malware detection using forensic memory analysis and machine learning techniques [17]. Petrik et al. [18] performed malware detection with raw binary data from device memory dumps; their approach is independent of the operating system and architecture. Demme et al. [2] demonstrated the effectiveness of leveraging various performance counters, such as instructions per cycle (IPC), cache behavior, and memory behavior, to classify malware. Building upon this research, Tang et al. [19] utilized hardware performance counters (HPC) in conjunction with unsupervised methods to detect malware. Sharafaldin et al. [20] proposed BotViz, a hybrid method that incorporates hooks to enhance bot detection. Martin-Perez et al. [21] introduced two strategies (Guided De-Relocation and Linear Sweep De-Relocation) for pre-processing memory dumps, aiming to expedite and simplify the analysis process by relocating file objects. Only a limited number of works incorporate feature selection for malware classification. Abbasi et al. [11] proposed a particle swarm optimization (PSO) based meta-heuristic approach for feature selection.
However, their work focused only on ransomware detection. Besides, they employed a wrapper-based feature selection approach, which is computationally expensive. Moreover, they focused on malware behavior-based analysis, which is fundamentally different from our memory-based analysis. Tsafrir et al. [12] introduced three feature extraction methodologies for MP4 malware detection and incorporated them with machine learning (ML) algorithms. These methodologies included two file structure-based approaches and one knowledge-based approach. To assess their effectiveness, the researchers conducted a series of experiments using six ML algorithms on multiple datasets. Some other works focused on creating tools for memory analysis and forensics. For example, Okolica and Peterson [22] introduced CMAT, a self-contained tool that can extract forensic information from a memory dump. In their study, the authors emphasized the significance of a highly flexible memory analysis process that can be employed across different platforms and systems. Such flexibility can significantly reduce the time required to match the system with the corresponding profile. Block and Dewald [23] introduced a memory analysis plug-in, which was designed to simplify the analysis process. The plug-in provides detailed information about heap objects in memory and can aid memory analysis professionals in understanding operations occurring within the system memory. Lashkari et al. [10] developed VolMemLyzer, a Python-based script designed to facilitate feature extraction from memory dumps generated by another tool called Volatility. VolMemLyzer extracts thirty-six features from memory dumps, encompassing various categories, such as processes, dynamic link libraries, sockets, handles, callbacks, loaded modules, code injections, and connections. ## III Dataset The MalMemAnalysis-2022 dataset considered in this study, introduced by Carrier et al. [1], includes hidden malware families obtained through memory analysis. The dataset consists of 58,596 memory records, equally divided into benign and malicious (i.e., malware) classes, each having 29,298 instances. There are 56 features in the MalMemAnalysis-2022 dataset, with each feature representing specific types of memory information (more details regarding dataset creation and features are available in [1]). The 29,298 malware samples comprise the following three types of malware: 1. Trojan horse: A Trojan horse is a type of malicious software (malware) that disguises itself as a legitimate program or file to deceive users into installing or executing it on their computers. Once inside, it can perform harmful actions without the user's knowledge, such as stealing sensitive data, providing unauthorized access to cybercriminals, recording keystrokes, participating in botnets, and delivering ransomware. Trojans are often distributed through deceptive means like email attachments or infected downloads, posing significant risks to computer security. 2. Spyware: Spyware is a type of malware designed to covertly gather information from a user's computer or device without their knowledge or consent. Once installed, spyware operates in the background, tracking browsing habits, recording keystrokes, capturing login credentials, and monitoring other sensitive data. The collected information is then sent back to the attacker, who can use it for various malicious purposes, such as identity theft, financial fraud, or targeted advertising.
Spyware's focus is on information gathering and stealthy monitoring of the user's activities. It typically does not have destructive capabilities like a Trojan. 3. Ransomware: Ransomware is a type of malicious software (malware) designed to encrypt the files and data on a victim's computer or network, rendering them inaccessible. Once the files are encrypted, the ransomware displays a ransom note, demanding payment, usually in cryptocurrency, in exchange for providing the decryption key needed to unlock the files. It is typically distributed through malicious email attachments, infected software downloads, or by exploiting vulnerabilities in systems. Each type of malware can be further categorized into multiple malware families. Table I shows the distributions of malware types and families in the dataset. As mentioned earlier, three types of malware are present in the dataset, where each type further contains malware from five different families. Considering the 3-class malware types (i.e., Trojan horse, Spyware, and Ransomware), the dataset is almost class-balanced. The dataset representing each type of malware, based on the distributions of the corresponding malware families, is not entirely class-balanced, as the share of a malware family can range from 15% to 25% among the five families. ## IV Malware Categorization Here, we utilize memory dumps for malware detection/classification at three distinct levels (Fig. 1), where each level is associated with a particular task: * Level/Task 1: Classify the memory data into two groups: i) benign and ii) malware. * Level/Task 2: Classify the malware data into 3 types: i) Trojan Horse, ii) Spyware, and iii) Ransomware. * Level/Task 3: Classify each type of malware into the families it comprises. ### _Feature Selection_ In the feature selection phase, we utilize statistical approaches to identify highly class-correlated features. To address computational constraints, we prefer filter-based statistical methods over wrapper methods, as they avoid exhaustive searches and classifier integration. Besides, we emphasize minimizing the number of selected features during this step to enhance computational efficiency and minimize security concerns in subsequent stages. Thus, our goal is to identify a feature selection method that demonstrates robust performance while utilizing only a minimal number of features. We employ three filter-based feature selection approaches, identifying the most relevant features as described below. #### IV-A1 Chi-square We compute the Chi-square (\(\chi^{2}\)) value between each feature and the corresponding class. Utilizing the Chi-square statistics, features that do not correlate well with class labels are eliminated as they may not be significant for classification. The Chi-square (\(\chi^{2}\)) value is calculated as follows: \[\chi^{2}=\sum_{i}\frac{\left(O_{i}-E_{i}\right)^{2}}{E_{i}} \tag{1}\] where \(O_{i}\) and \(E_{i}\) represent the observed and expected values of feature \(i\), respectively. A feature with a high chi-square value is highly correlated with the class and is, hence, important for classification. #### IV-A2 Analysis of Variance (ANOVA) ANOVA is a statistical method that examines the means of two or more groups to find whether they are significantly different from each other. The ANOVA F-test determines the variance between and within groups, calculates the F-score, and uses it to identify informative features.
Fig. 1: Various levels of classification tasks considered in this study. \begin{table} \begin{tabular}{c|c|c} **Malware Type (\#Total)** & **Malware Family** & **\#Instances** \\ \hline \multirow{5}{*}{Trojan Horse (9487)} & Zeus & 1950 \\ & Emotet & 1967 \\ & Refroso & 2000 \\ & Scar & 2000 \\ & Reconyc & 1570 \\ \hline \multirow{5}{*}{Spyware (10020)} & 180Solutions & 2000 \\ & CoolWebSearch & 2000 \\ & Gator & 2200 \\ & Transponder & 2410 \\ & TIBS & 1410 \\ \hline \multirow{5}{*}{Ransomware (9791)} & Conti & 1988 \\ & MAZE & 1958 \\ & Pysa & 1717 \\ & Ako & 2000 \\ & Shade & 2128 \\ \end{tabular} \end{table} TABLE I: Malware statistics in the MalMemAnalysis-2022 dataset #### IV-A3 Mutual Information (MI) Mutual information is a measure of the statistical dependency between two random variables. In the context of feature selection, it quantifies the amount of information a feature (i.e., a memory-related attribute) provides about the target variable (the class of the sample). A higher score indicates a stronger relationship between the feature and the target variable, indicating that the feature is potentially more informative for the prediction task. The mutual information between a feature \(X\) and a class label \(Y\) is computed as follows: \[MI(X,Y)=H(X)+H(Y)-H(X,Y) \tag{2}\] where \(MI(X,Y)\) represents the mutual information between feature \(X\) and class label \(Y\), \(H(X)\) is the entropy of feature \(X\), \(H(Y)\) is the entropy of class label \(Y\), and \(H(X,Y)\) is the joint entropy of feature \(X\) and class label \(Y\). Entropy measures the uncertainty or disorder in a random variable. The calculation of entropy involves the probability distributions of the variables; the specific formula depends on the type of data and probability distribution considered. We use the default implementation of the mutual information algorithm of scikit-learn [24]. ### _Classification_ We employ a number of machine learning classifiers (described below) for the classification tasks with selected features. For all the classifiers, the default parameter settings of the scikit-learn library [24] are used. #### IV-B1 Random Forest (RF) Random Forest is an ensemble learning method that combines multiple decision trees. It is known for its robustness, ability to handle large datasets with high-dimensional features, and resistance to overfitting. Random Forest can handle both classification and regression tasks. #### IV-B2 Naive Bayes (NB) Naive Bayes is a probabilistic classifier based on Bayes' theorem with the assumption of independence between features. Despite its simplifying assumption, Naive Bayes classifiers perform well in various domains, especially with large datasets. #### IV-B3 K-Nearest Neighbors (K-NN) K-Nearest Neighbors (K-NN) is a non-parametric classification method that determines the class of a data point by taking a majority vote from its closest neighbors. This approach is known for its ease of implementation and excellent performance, especially when dealing with small to medium-sized datasets and clear clusters. #### IV-B4 AdaBoost AdaBoost is an ensemble learning algorithm used for classification and regression tasks. It combines multiple weak learners, like decision trees, to create a powerful model by iteratively giving more weight to misclassified instances and adjusting the sample weights. This process focuses on challenging data points and improves overall accuracy. AdaBoost is versatile but sensitive to noisy data.
#### IV-B5 Linear Discriminant Analysis (LDA) Linear Discriminant Analysis (LDA) can be used as a supervised dimensionality reduction technique for classification tasks. LDA aims to find a linear combination of features that maximizes class separability and can be used to project the data into a lower-dimensional space while preserving class information. #### IV-B6 Extra Trees Extra Trees, also known as Extremely Randomized Trees, is a powerful ensemble machine learning algorithm that constructs multiple decision trees using randomized subsets of features and thresholds. Unlike Random Forest, Extra Trees introduces an additional layer of randomness in the feature and threshold selection process, effectively mitigating the risk of overfitting. Through the aggregation of predictions from individual trees, Extra Trees can deliver robust and accurate results, making it well-suited for handling high-dimensional data, noisy features, and irrelevant variables. ## V Results and Discussion ### _Evaluation Settings_ To evaluate and compare the performance of various classifiers, we consider 5-fold cross-validation. Cross-validation is generally considered better than a pre-defined training-testing split for model evaluation because it provides more reliable and unbiased estimates of a model's performance. We adopt two performance evaluation metrics: i) macro F1 and ii) accuracy. Since some of our classification tasks deal with a class-imbalanced dataset, the macro F1 score, which weights all the classes equally, is a better estimator of a classifier's performance than accuracy. ### _Performance comparison of feature selection with original setting_ Table II compares the performance of various classifiers for malware and benign type classification employing multiple feature selection approaches. We assess their performances under three conditions: when using 25% and 50% of the features selected by various feature selection methods, as well as in the original configuration employing 100% of the features. As we can see, classifiers exhibit very similar performance with (e.g., 25% and 50%) and without (100%) feature selection, which corroborates the effectiveness of the feature selection approaches. This binary-level malware identification task is relatively easy, as all the classifiers are capable of yielding perfect or almost perfect F1 scores. One interesting thing we observe is that for the multinomial Naive Bayes (MNB) classifier, the performance improvement is dramatic when we reduce the number of features. When all the memory features are used, we obtain an F1 score of only 0.64. Leveraging feature selection approaches such as ANOVA and MI to identify the top 25% of features can dramatically improve the performance of this classifier. Table III shows the performance of various classifiers for the malware type classification. Similar to the malware identification task, we provide the performance of various classifiers in three different settings. As we can find from Table III, among all the classifiers, RF performs best; it obtains an F1 score of around 0.75-0.76. One important thing we notice is that all the tree-based classifiers perform better than the other methods. Regarding the features used, we find that incorporating 50% of the features improves performance for all the top classifiers, and this is true for all the feature selection approaches. When the best results are considered, we find the MI feature selection to be the most effective for selecting the top features.
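The configuration behind these numbers (MI-based selection followed by an RF classifier, evaluated with 5-fold cross-validated macro F1 as in Sec. V-A) can be sketched with scikit-learn. The file path and label column below are placeholders for the MalMemAnalysis-2022 data, not the dataset's actual schema:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectPercentile, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder path/column; each record carries the 56 memory features.
df = pd.read_csv("malmem2022.csv")
X, y = df.drop(columns=["Class"]), df["Class"]

# Keep the top 25% of features ranked by mutual information, then classify with RF.
# Placing the selector inside the pipeline re-fits it per fold, avoiding leakage.
pipe = make_pipeline(
    SelectPercentile(score_func=mutual_info_classif, percentile=25),
    RandomForestClassifier(random_state=0),
)
scores = cross_val_score(pipe, X, y, cv=5, scoring="f1_macro")
print(f"macro F1: {scores.mean():.3f}")
```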
Table IV presents the performance of various classifiers for the malware family classification. For the 5-class classification problem of Trojan malware, we find that RF with 50% of the features (28 features) selected by MI performs best, with an F1 score of 0.75, compared to an F1 score of 0.74 obtained using all features. For the Spyware 5-class categorization, the best F1 score of 0.66 is obtained by the RF classifier with 50% of the features selected by ANOVA; MI with RF shows a very similar result of 0.65. We find classifying the Ransomware families very challenging. For Ransomware, the best result, an F1 score of 0.57, is obtained by Chi-square (50% features) with RF. Selecting only 25% of the features through MI, RF performs similarly to using 100% of the features. Based on the results of all the tasks we investigate, it is evident that we can reduce the number of features used for ML classifiers without compromising classifier performance. In some instances, the feature selection even improves the performance of the classifiers. For all the classification tasks (Tables III and IV), we notice that by selecting 50% of the features, we can obtain better results than incorporating all the features. We can reduce the percentage of features even further to 25% without compromising efficacy. For example, when classifying the Trojan horse families, utilizing just 25% of the features resulted in an F1 score of 0.73. Even using all features, the improvement was minimal, reaching only 0.74, a mere 1% increase. For Spyware and Ransomware, we can reach the same level of performance using only 25% of the features selected by various feature selection methods. \begin{table} \begin{tabular}{c|c c|c c|c c|c c|c c|c c|c c} & \multicolumn{2}{c|}{**100\% features**} & \multicolumn{6}{c|}{**25\% features**} & \multicolumn{6}{c}{**50\% features**} \\ **Classifier** & & & \multicolumn{2}{c}{**Chi**} & \multicolumn{2}{c}{**ANOVA**} & \multicolumn{2}{c|}{**MI**} & \multicolumn{2}{c}{**Chi**} & \multicolumn{2}{c}{**ANOVA**} & \multicolumn{2}{c}{**MI**} \\ & F1 & Acc. & F1 & Acc. & F1 & Acc. & F1 & Acc. & F1 & Acc. & F1 & Acc. & F1 & Acc. \\ \hline MNB & 0.64 & 0.55 & 0.64 & 0.55 & 0.98 & 0.98 & 0.98 & 0.98 & 0.64 & 0.55 & 0.98 & 0.98 & 0.64 & 0.55 \\ LDA & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 & 0.99 \\ Adaboost & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ K-NN & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ Extra-Tree & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ Random Forest & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 \\ \end{tabular} \end{table} TABLE II: Performance of various classifiers for the malware identification task (i.e., malicious or benign) \begin{table} \begin{tabular}{c c|c c|c c|c c|c c|c c|c c|c c} & & \multicolumn{2}{c|}{**100\% features**} & \multicolumn{6}{c|}{**25\% features**} & \multicolumn{6}{c}{**50\% features**} \\ **Malware Family** & **Classifier** & & & \multicolumn{2}{c}{**Chi**} & \multicolumn{2}{c}{**ANOVA**} & \multicolumn{2}{c|}{**MI**} & \multicolumn{2}{c}{**Chi**} & \multicolumn{2}{c}{**ANOVA**} & \multicolumn{2}{c}{**MI**} \\ & & F1 & Acc. & F1 & Acc. & F1 & Acc. & F1 & Acc. & F1 & Acc. & F1 & Acc. & F1 & Acc. \\ \hline \multirow{4}{*}{Trojan} & Adaboost & 0.54 & 0.55 & 0.48 & 0.49 & 0.44 & 0.44 & 0.53 & 0.54 & 0.52 & 0.53 & 0.51 & 0.51 & 0.55 & 0.55 \\ & K-NN & 0.573 & 0.58 & 0.54 & 0.54 & 0.42 & 0.42 & 0.63 & 0.63 & 0.57 & 0.57 & 0.51 & 0.51 & 0.58 & 0.59 \\ & Extra-Tree & 0.72 & 0.72 & 0.65 & 0.66 & 0.50 & 0.50 & 0.73 & 0.73 & 0.72 & 0.72 & 0.65 & 0.65 & 0.73 & 0.73 \\ & Random Forest & 0.74 & 0.74 & 0.69 & 0.69 & 0.54 & 0.54 & **0.73** & 0.73 & 0.74 & 0.74 & 0.69 & 0.70 & **0.75** & 0.75 \\ \hline \multirow{4}{*}{Spyware} & K-NN & 0.51 & 0.50 & 0.52 & 0.50 & 0.50 & 0.49 & 0.52 & 0.51 & 0.51 & 0.55 & 0.51 & 0.50 & 0.52 & 0.51 \\ & Adaboost & 0.44 & 0.43 & 0.39 & 0.40 & 0.38 & 0.38 & 0.43 & 0.42 & 0.41 & 0.41 & 0.45 & 0.44 & 0.42 & 0.42 \\ & Extra-Tree & 0.62 & 0.61 & 0.65 & 0.64 & 0.57 & 0.56 & 0.63 & 0.62 & 0.63 & 0.62 & 0.64 & 0.62 & 0.64 & 0.63 \\ & Random Forest & **0.65** & 0.64 & 0.65 & 0.64 & 0.57 & 0.56 & 0.64 & 0.62 & 0.65 & 0.64 & **0.66** & 0.65 & 0.65 & 0.63 \\ \hline \multirow{3}{*}{Ransomware} & K-NN & 0.45 & 0.43 & 0.43 & 0.41 & 0.41 & 0.50 & 0.51 & 0.45 & 0.45 & 0.45 & 0.45 & 0.44 & 0.44 & 0.44 \\ & Adaboost & 0.40 & 0.40 & 0.37 & 0.37 & 0.34 & 0.33 & 0.395 & 0.39 & 0.38 & 0.38 & 0.39 & 0.38 & 0.40 & 0.40 \\ & Extra-Tree & 0.55 & 0.55 & 0.50 & 0.50 & 0.44 & 0.43 & 0.55 & 0.50 & 0.50 & 0.50 & 0.53 & 0.53 & 0.55 & 0.55 \\ \end{tabular} \end{table} TABLE IV: Performance of top classifiers with various feature selection approaches for the 5-class malware family classification tasks (bold values represent the best F1 scores for each feature setting). Among all feature selection algorithms, we observe that MI delivers the most consistent performance. For all the tasks, in the 25% and 50% feature settings, MI yields performance similar to that of 100% of the input features. Although in a few cases we find that Chi-square and ANOVA perform a bit better than MI, their performances are not consistent; in contrast, MI's performance is consistently similar to that of using 100% of the features. We further investigate whether the reduced feature sets impact the prediction patterns of the classifiers. For the highly challenging classification task 3 (malware family classification), we show the confusion matrices of the RF classifier with 100%, 25%, and 50% of the features (Fig. 2). As we can see, in all three malware family classification tasks, different feature sizes show similar patterns in misclassification. The accuracy of different classes is similar in all three settings. Besides, the misclassifications also show similar patterns in all cases.
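Under the same assumptions as the earlier pipeline sketch (with `X` and `y` holding the memory features and family labels of one malware type, e.g., the five Trojan horse families), the per-setting confusion matrices of Fig. 2 can be produced from cross-validated predictions:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectPercentile, mutual_info_classif
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

def mi_rf(percentile: int):
    """RF preceded by MI selection keeping `percentile`% of the features."""
    rf = RandomForestClassifier(random_state=0)
    if percentile == 100:
        return rf
    return make_pipeline(
        SelectPercentile(score_func=mutual_info_classif, percentile=percentile), rf)

# One confusion matrix per feature setting, as in Fig. 2.
for pct in (100, 25, 50):
    preds = cross_val_predict(mi_rf(pct), X, y, cv=5)
    print(pct, confusion_matrix(y, preds), sep="\n")
```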
Regarding privacy, while feature selection may not directly address data privacy concerns, it can have an indirect impact on privacy in certain scenarios. By reducing the number of features, feature selection can potentially limit the exposure of sensitive or identifiable information during model training or analysis. This reduction in the feature space may help minimize the risk of inadvertently revealing private information. ### _Comparison with other studies_ Note that, to the best of our knowledge, only two studies [1, 14] have utilized the MalMemAnalysis-2022 dataset for malware analysis, as the dataset is relatively recent. However, both studies focused solely on the malware identification task (Task 1 of this study). The study by [1] achieved the best results of approximately 0.99 by employing an ensemble of NB, RF, and DT classifiers. This number is slightly lower than that of our feature selection strategy with the RF classifier. Unfortunately, the authors did not mention their criteria for selecting the training and testing datasets, making a direct comparison with our method difficult. On the other hand, the other study by [14] achieved F1 scores ranging from 0.99 to 1.0 using various classifiers, which is similar to our results. They used a 70%/30% training/testing split; however, they did not provide details on how they selected the training and testing data, hindering a precise comparison. Fig. 2: Comparison of confusion matrices for MI with the RF classifier considering various fractions of features (100% (column 1), 25% (column 2), and 50% (column 3)) for malware family classification (A. Trojan horse, B. Spyware, C. Ransomware). Despite the missing information, all the works, including ours, demonstrate F1 scores and accuracy close to perfection, suggesting that classifying benign and malware samples in this dataset is relatively trivial. ## VI Summary and Future Work With the growth of Advanced Persistent Threats (APTs), malware has become increasingly sophisticated in recent years. As malware developers continually refine their techniques and develop novel strategies to evade security controls, the need for advanced malware analysis frameworks becomes crucial. Thus, this research aims to enhance the performance of malware detection and classification tasks by incorporating feature selection approaches. We assess the performance and generalization capabilities of the various feature selection approaches for five classification tasks (one malware detection task, one malware type classification task, and three malware family classifications) through a set of experiments. The evaluation on the MalMemAnalysis-2022 dataset shows the effectiveness of the feature selection approaches (i.e., Chi-square, ANOVA, and MI) for various scenarios. Among the three feature selection approaches, MI exhibits the best consistency and efficacy, obtaining similar or better performance than the original 100% of input features by leveraging only 25%-50% of the features with the RF classifier. The reduced feature set obtained by MI can improve the efficiency of ML classifiers, as well as enhance privacy through data minimization, as fewer data are fed to ML classifiers for the prediction task. By selecting robust and relevant features, models can also become more resilient against adversarial attacks. Removing noisy or misleading features can improve the model's ability to generalize and reduce its susceptibility to manipulation.
However, it is important to note that relying solely on feature selection is not enough to ensure comprehensive data privacy. Adopting a holistic approach that incorporates both feature selection and privacy-preserving practices (e.g., data anonymization, encryption, access controls) is essential to effectively mitigate privacy risks. Future work will focus on expanding and refining the findings of this study. We will investigate the performance of feature selection approaches on a broader range of memory datasets beyond MalMemAnalysis-2022. In addition, we will explore other privacy-preserving practices, such as differential privacy techniques or federated learning, to ensure a more robust and comprehensive safeguard against privacy risks. Investigating the trade-offs between model accuracy and privacy preservation is another area that requires attention, which we plan to study in our future works.
2309.12071
Benchmarking quantized LLaMa-based models on the Brazilian Secondary School Exam
Although Large Language Models (LLMs) represent a revolution in the way we interact with computers, allowing the construction of complex questions and the ability to reason over a sequence of statements, their use is restricted due to the need for dedicated hardware for execution. In this study, we evaluate the performance of LLMs based on the 7 and 13 billion LLaMA models, subjected to a quantization process and run on home hardware. The models considered were Alpaca, Koala, and Vicuna. To evaluate the effectiveness of these models, we developed a database containing 1,006 questions from the ENEM (Brazilian National Secondary School Exam). Our analysis revealed that the best performing models achieved an accuracy of approximately 46% for the original texts of the Portuguese questions and 49% on their English translations. In addition, we evaluated the computational efficiency of the models by measuring the time required for execution. On average, the 7 and 13 billion LLMs took approximately 20 and 50 seconds, respectively, to process the queries on a machine equipped with an AMD Ryzen 5 3600x processor.
Matheus L. O. Santos, Cláudio E. C. Campelo
2023-09-21T13:39:54Z
http://arxiv.org/abs/2309.12071v1
# Benchmarking quantized LLaMa-based models on the Brazilian Secondary School Exam

###### Abstract

Although Large Language Models (LLMs) represent a revolution in the way we interact with computers, allowing the construction of complex questions and the ability to reason over a sequence of statements, their use is restricted due to the need for dedicated hardware for execution. In this study, we evaluate the performance of LLMs based on the 7 and 13 billion LLaMA models, subjected to a quantization process and run on home hardware. The models considered were Alpaca, Koala, and Vicuna. To evaluate the effectiveness of these models, we developed a database containing 1,006 questions from the ENEM (Brazilian National Secondary School Exam). Our analysis revealed that the best performing models achieved an accuracy of approximately 46% for the original texts of the Portuguese questions and 49% on their English translations. In addition, we evaluated the computational efficiency of the models by measuring the time required for execution. On average, the 7 and 13 billion LLMs took approximately 20 and 50 seconds, respectively, to process the queries on a machine equipped with an AMD Ryzen 5 3600x processor.

Large language models, LLMs, ENEM, GGML, LLaMA, Quantization.

## I Introduction

With the introduction of the article _Attention is all you need_ [1], the field of Natural Language Processing (NLP) underwent a significant revolution. Tasks that were previously dominated by heuristics and machine learning algorithms began to achieve state-of-the-art results with the use of Transformers [2]. This neural network architecture aims to pay attention to the most relevant parts of the inputs, such as keywords in a sentence or the regions containing people in an image. With the emergence of _transformers_, a class of neural network models trained to predict the next word given a sequence of previous words saw its metrics elevated to the state of the art. This category of models is known as language models, and their first applications were aimed at generating _word embeddings_ [3]. This technique makes it possible to dynamically assign words to a semantic vector space, where similar words are close to each other. Later, encoder-decoder architectures known as _seq2seq_ were used, which made use of _transformers_ to reach the state of the art in text encoding and decoding tasks. A notable example is the translation of texts between different languages, even when these texts are of different lengths. With the introduction of the GPT (Generative Pre-trained Transformer) family of models, models trained through unsupervised learning gained popularity. These models were pre-trained on large amounts of unlabeled data and retained general knowledge during their training. They were then fine-tuned on a much smaller amount of data and for shorter periods of time for specific tasks. However, the release of Chat-GPT, a model trained for human interactions through conversations, brought even greater visibility to these models. These models have brought significant innovation in the way humans interact with computers, enabling intuitive communication through dialogues where the responses are precisely tailored to the requests. This results in significant time savings compared to traditional search on search engines. However, it is important to note that these models are not freely accessible.
For example, the renowned Chat-GPT model does not publicly provide its source code, which prevents researchers from conducting studies on its internal workings. Additionally, access to its functionalities through the API requires payment of fees. However, companies such as Meta1 have taken an open-source approach by making Large Language Models (LLMs) available as a basis for researchers and enthusiasts to conduct their research. The models released by Meta have sizes of 7, 13, 30, and 65 billion parameters for the first version, and 7, 13, and 70 billion for the second version. Although these models are considered smaller compared to the GPT family (for example, GPT-3.5 Turbo has 154 billion parameters), running them still requires dedicated hardware, which restricts research to people who have access to these resources.

Footnote 1: [https://about.meta.com/](https://about.meta.com/)

However, as has been shown by [4], it is possible to decrease the amount of memory required to use these models with a quantization process. This process reduces the precision of the weights of the hidden layers of the models at the cost of some performance loss. One project2 provides an API written from scratch in C/C++ for model execution without the need for dedicated GPUs. The models are based on LLaMA, published by Meta [5]; they are Vicuna3, Koala4, and Alpaca5, all of which have two variants, one with 7 and one with 13 billion parameters. This allowed anyone to experience the potential of these models, since it became possible to run inference on domestic hardware.

Footnote 2: [https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)

Footnote 3: [https://mysys.org/blog/2023-03-30-vicuna/](https://mysys.org/blog/2023-03-30-vicuna/)

Footnote 4: [https://bair.berkeley.edu/blog/2023/04/03/koala/](https://bair.berkeley.edu/blog/2023/04/03/koala/)

Footnote 5: [https://cfm.stanford.edu/2023/03/13/alpaca.html](https://cfm.stanford.edu/2023/03/13/alpaca.html)

The Brazilian National Secondary School Exam (ENEM) is a test that is taken annually by secondary school students across the country and serves as a gateway to colleges throughout Brazil, thus representing a challenge that many students prepare for all year long. As demonstrated by [6], these LLMs are able to generalize knowledge, increasing the number of tasks they can perform as the number of parameters grows. That said, evaluating the performance of these LLMs on ENEM questions becomes a good benchmark of how robust these large models are, since they are general-purpose models and have not been trained specifically to answer exam questions. Hence, the goal of this study is to evaluate quantized language models, based on LLaMA [5], capable of operating on home hardware, using ENEM questions as the analysis scenario. For this purpose, we produced a carefully structured database of questions containing the texts of the questions along with the correct answers. The database encompasses a total of 1,006 questions, covering the period from 2010 to 2022. The database produced has great potential for LLM analysis and also for other studies in the field of natural language processing. The experiments conducted in our study aim to answer the following research questions:

* How effective are quantized models, based on LLaMA, trained in English, in solving ENEM questions described in Brazilian Portuguese?
* How effective are quantized models, based on LLaMA, trained in English, in solving ENEM questions translated from Brazilian Portuguese into English?
* Is there an improvement from the first version of the LLaMA models to the second?
* How efficient (in terms of time to run on a computer with modest hardware) are quantized models, based on LLaMA, when used to solve ENEM questions?

## II Related Work

The use of LLMs is rapidly advancing in various research fields. One notable application is in the field of medicine, where researchers utilized the PALM model [7], trained by Google, to perform question answering in the medical domain. This model was evaluated on the United States Medical Licensing Examination (USMLE) [8], and the analysis demonstrated that the model provided answers that reached a consensus with experts in 92.6% of the questions. This highlights the potential benefits of these models in assisting healthcare professionals in their practice. As shown by [9], there are already efforts in training LLMs for question solving. According to the comparative study provided by the authors, their model performed better than all other models available on the market, except for GPT-4, on English and Chinese exams. The model was evaluated on the following datasets: MMLU, AGIEval, and C-Eval, and achieved scores of 67.2, 49.2, and 62.7, respectively, against 86.2, 56.4, and 68.7 for GPT-4. Additionally, there are reports of research in training language models with a focus on creating a chain of thought, where the model is able to explain the reasoning behind its responses [10]. This can help create language models that are increasingly able to provide responses that are useful to humans. In a question answering context, a model that can explain the reasoning behind the chosen alternative would be very useful to a student, for example. In the Brazilian context, a team of researchers proposed to use GPT-4 [11] to evaluate questions from ENEM [12]. The model showed 87.29% accuracy on the 2022 questions, against 73.73% accuracy for gpt-3.5-turbo. This improvement was due to the increase in the size of the model and its ability to also process images. This shows that these models were able to perform better than most of the humans who take this exam every year. Quantized language models are in focus, given the computational resources required to run these models [13, 14]. However, these studies address evaluation using abstract metrics6. This work aims to evaluate these quantized models in a tangible way, checking how well they can answer a challenging test such as ENEM.

Footnote 6: [https://huggingface.co/docs/transformers/perplexity](https://huggingface.co/docs/transformers/perplexity)

## III Theoretical Framework

This section introduces some relevant concepts for a better understanding of the rest of the article.

### _Large Language Models - LLMs_

One of the determining factors for the high efficiency exhibited by some language models is their size [6]. For example, the GPT-3 [15] model, published by OpenAI7, has 175 billion parameters, resulting from 34 days of training on 1,024 Nvidia A100 GPUs. The estimated cost for this training was $4.6 million.

Footnote 7: [https://openai.com/](https://openai.com/)

For comparison, the 7-billion-parameter LLaMA [5] model published by Meta requires a GPU with at least 28 GB of memory to run an inference8. These requirements are prohibitive, as such equipment is expensive.
Footnote 8: [https://discuss.huggingface.co/llama-7b-gpu-memory-requirement/34323](https://discuss.huggingface.co/llama-7b-gpu-memory-requirement/34323)

_LLaMA.cpp_9 is a project that aims to create an API for CPU inference of LLMs, using C/C++ and techniques that allow models not to be loaded completely into memory. The models are based on LLaMA [5] and can run on home computers. However, it is important to note that these benefits are not obtained without costs. To enable the execution of LLaMA.cpp, it is necessary to reduce the size of the models, which is achieved by applying a quantization technique. This technique involves compressing the weights in the hidden layers of the models, resulting in a reduction in the space required for their storage. Figure 1 illustrates that as the level of quantization increases, i.e., as precision is lost in the layers, the perplexity metric increases. To conduct the experiments described in this paper, all models were quantized at Q4. According to the authors of the repository, this level of quantization leads to a worsening of the perplexity metric by about 2%. More details about the quantization process can be found in Section III-C.

Fig. 1: Performance degradation of quantized models. Chart available at: [https://github.com/ggerganov/llama.cpp/pull/1684](https://github.com/ggerganov/llama.cpp/pull/1684).

### _Model quantization process_

The process of quantizing the models used in LLaMA.cpp is described in the Ggml project10. This project aims to compress different language models, not only those based on LLaMA, by also quantizing models from the GPT family, such as GPT-2 and GPT-J.

Footnote 10: [https://github.com/ggerganov/ggml](https://github.com/ggerganov/ggml)

The weights of the hidden layers in a model without quantization are represented as 16-bit floats. In the quantization process described in the Ggml repository11, a set of _QK_ weights is represented as an integer part plus a floating point part. For example, for a quantization of _Q4_, a block of 4 weights, each originally represented in float16, is stored as a float32 scale factor plus 2 two-byte integers. According to the author, this approach reduced the size of the models by 75%.

Footnote 11: [https://github.com/ggerganov/ggml](https://github.com/ggerganov/ggml)
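To illustrate the idea, below is a minimal NumPy sketch of such block quantization, following the description above. The block size of 4 and the packing are illustrative assumptions: real ggml formats amortize one scale over larger blocks (e.g., 32 weights) and pack two 4-bit codes per byte, which is where most of the reported savings come from.

```python
import numpy as np

# Sketch: each block of QK weights is stored as one float32 scale factor
# plus one small signed integer code per weight (illustrative, not ggml's
# exact on-disk layout).
QK = 4

def quantize_q4(w):
    """Map a float array to (per-block float32 scales, 4-bit integer codes)."""
    blocks = w.astype(np.float32).reshape(-1, QK)
    scales = np.abs(blocks).max(axis=1, keepdims=True) / 7.0
    scales[scales == 0.0] = 1.0               # avoid division by zero
    codes = np.clip(np.round(blocks / scales), -8, 7).astype(np.int8)
    return scales, codes

def dequantize_q4(scales, codes):
    """Recover a float32 approximation of the original weights."""
    return (codes.astype(np.float32) * scales).reshape(-1)

w = np.random.randn(1 << 14).astype(np.float16)
scales, codes = quantize_q4(w)
w_hat = dequantize_q4(scales, codes)
stored_bits = 4 * w.size + 32 * scales.size   # 4-bit codes + float32 scales
print(f"stored size: {stored_bits / (16 * w.size):.0%} of the float16 weights")
print(f"max abs error: {np.abs(w.astype(np.float32) - w_hat).max():.4f}")
```

During inference, the codes are dequantized on the fly block by block, trading a small amount of arithmetic and accuracy for a much smaller memory footprint.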
## IV Methodology

This section presents the methodology adopted to evaluate the models. It discusses how the database for evaluation was made, the models used, and the experiments conducted.

### _Dataset_

One of the main contributions of this paper is the provision of a structured and validated database composed of numerous questions from the Brazilian Secondary School Exam (ENEM) [16]. The questions basically consist of three parts: the first is a portion of text, tables or images, or a combination of these; the second is a question about the first part; and finally, five alternatives, only one of which is correct. This database was developed with a focus on questions that can be answered from text alone, since the models to be evaluated have only textual comprehension capabilities. In total, the database contains 1,006 questions, in which the description texts, the alternatives, and the correct answers were identified. The collection of these questions followed this procedure:

* Collection of ENEM tests, from 2010 to 2022, in PDF format, obtained from Instituto Nacional de Estudos e Pesquisas Educacionais Anisio Teixeira (INEP)12.

Footnote 12: [https://www.gov.br/inep/pt-br/raresa-de-atuacao/avaliaacao-e-exames-educacionais/enem/provas-e-gabiarios](https://www.gov.br/inep/pt-br/raresa-de-atuacao/avaliaacao-e-exames-educacionais/enem/provas-e-gabiarios)

* Use of the tool13 for text extraction from each PDF file.
* Definition of heuristics for concatenating the text of each question, grouping description, question, and alternatives.
* Filtering out questions that did not fit the scope of the experiments.

Footnote 13: [https://pyumupd.readthedocs.io/en/latest/document.html](https://pyumupd.readthedocs.io/en/latest/document.html)

The following criteria were established for removing questions not suitable for the experiments:

* Questions containing an image, table, or equation, since the models we use can only understand text.
* Any question for which it was not possible to distinguish which parts of the text were the alternatives, since this part was of utmost importance for the models.
* Questions that were not processed properly by the PDF content extraction tool. These questions had, for example, strange characters in their content.

To remove questions that contain images, tables, or equations, heuristics were used to check whether a question contains any keywords such as **table, figure, image**. With this, we were able to remove many questions that would be impossible for the models to answer (a sketch of this filter is shown at the end of this subsection). The distribution of these questions by year is shown in Figure 2. No questions were extracted for the years 2010 and 2021 due to problems in reading the PDF. The distribution of questions by subject area can be seen in Figure 3. In this figure, it is possible to see that mathematics and natural sciences and their technologies were the areas with the fewest questions, due to the filtering of questions that contain graphs, equations, and tables.

Fig. 2: Count of questions extracted per year.

Fig. 3: Distribution of questions extracted per knowledge area.

The annotation of the answers was performed manually, based on the ground truth available in PDF format, and registered in a file in JSON format. We preferred a manual approach, since implementing a script for automation would be costly given that the PDF files have different structures. The dataset produced is freely available, as well as the artifacts used for its production (files in PDF format and source code of the data processing and transformation scripts), at14.

Footnote 14: [https://github.com/wineone/tcc-matheus-lisboa](https://github.com/wineone/tcc-matheus-lisboa)
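The keyword filter can be sketched as follows. The keyword list and the structure of `questions` are illustrative assumptions; since the questions are in Portuguese, the Portuguese forms of the keywords are the relevant ones in practice.

```python
# Sketch: flag a question for removal whenever its text mentions a visual
# artifact the text-only models cannot see. The exact keyword list used in
# the experiments is assumed here, not reproduced from the original script.
KEYWORDS = ("tabela", "figura", "imagem", "table", "figure", "image")

def has_visual_content(question_text: str) -> bool:
    """Return True if the question likely refers to a table/figure/image."""
    text = question_text.lower()
    return any(keyword in text for keyword in KEYWORDS)

# `questions` is assumed to be a list of dicts with a "text" field:
# questions = [q for q in questions if not has_visual_content(q["text"])]
```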
### _Models evaluated_

Language models were selected that were aligned with the goal of the study, i.e., large models capable of running on home machines. The models were obtained from the Hugging Face15 model repository, and were made available by users who performed the quantization process. The models were tested to verify that they are compatible with LLaMA.cpp16. This tool enables the execution of models based on LLaMA [5] on domestic machines by employing quantization techniques and selective reading of the parts needed for model execution.

Footnote 15: [https://hf-proxy-cf.effarig.sitem/models](https://hf-proxy-cf.effarig.sitem/models)

For the experiments, LLaMA v1- and v2-based models of 7 and 13 billion parameters, including fine-tuned variants of the original models, were used. These are:

* **LLaMA 1 7b, 13b**: Models trained from scratch on a diverse dataset that comes from various sources: **English Common Crawl, C4, GitHub, Wikipedia, Gutenberg and Books3, arXiv and Stack Exchange**. This dataset contains approximately 1.4 trillion tokens, but for the 7 and 13 billion parameter models, a subset of 1 trillion was used.
* **Alpaca 7b, 13b**: Resulting from fine-tuning the LLaMA models with a set of 52,000 question and answer examples, this model was trained to perform better in question and answer scenarios.
* **Koala 7b, 13b**: Fine-tuning of the LLaMA models, trained on 117,000 user interactions with ChatGPT17. This model was trained to perform better in dialogs.

Footnote 17: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)

* **Vicuna 7b, 13b**: Fine-tuning of the LLaMA models, trained with a set of 70,000 user interactions with ChatGPT, via the ShareGPT data, which are community-sourced conversations with the model.
* **LLaMA 2 7b, 13b**: Second published version of LLaMA [17]. According to the authors, an optimized version of the autoregressive model was used, with a more robust data treatment and 40% more training data.

One point to note is that the data used to train these models is mostly in English, and no evidence was found that these models had been exposed to data from ENEM questions during their training or validation stages, which could invalidate the results presented in Section V.

### _Experiment Definitions_

As already discussed, these language models can only receive a portion of text as input and return another portion of text as output, so an integral part of the activity is to define the format of the text that will be used to feed them. For the preparation of the prompts, the methodology proposed in the course Prompt Engineering18 was adopted, made available by OpenAI, the company that published the GPT family models. Although the course is focused on the GPT models, most of the models we used in this experiment are based on data extracted from conversations with Chat-GPT, so it is expected that the way these models work is to some degree similar to the methodologies provided in the course.

Footnote 18: [https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/](https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/)

The approach taken was to ask the model to answer with the correct alternative and to signal only the letter of the alternative, to facilitate the verification of the effectiveness of the models and the computation of the evaluation metrics. Figure 4 shows an example of a prompt (a sketch of the full evaluation loop is shown at the end of this subsection).

Fig. 4: Example of a question that will be used for an inference in the models.

To perform the comparison of the models, two experiments were run. The first experiment, aiming to answer question **Q1**, compared the accuracy of the models by running all the models on all the questions, replacing the text of each question in the prompt and collecting the result returned by each model. The second experiment was designed to answer **Q2**; for this, all the questions were translated, as well as the prompt, and all the answers were computed. The Google Translate API was used for the translation, through the TextBlob19 library.

Footnote 19: [https://pypi.org/project/textblob/](https://pypi.org/project/textblob/)

The execution times for these models were also evaluated in order to answer **Q4**. The evaluation was conducted using two machines, one equipped with an AMD Ryzen 5 3600x processor and the other with an Intel i9 9900k processor. The time in seconds was collected for the inference of the questions in Portuguese and English, with the Portuguese questions executed on the machine equipped with the 3600x and the English questions on the machine equipped with the 9900k. The results are presented in Section V.
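A minimal sketch of this evaluation loop is shown below, assuming the llama-cpp-python bindings around LLaMA.cpp (the experiments invoked LLaMA.cpp directly). The prompt wording, the model file name, and the letter-extraction pattern, which anticipates the heuristics described in the next subsection, are illustrative rather than the exact ones used in the experiments.

```python
import re
import time

from llama_cpp import Llama

# Assumed prompt template following the "answer only with the letter" idea.
PROMPT = ("Answer the following multiple-choice question. "
          "Reply with only the letter of the correct alternative.\n\n"
          "{question}\n{alternatives}\n\nAnswer:")

def extract_letter(output: str):
    """Heuristic: recover the alternative (A-E) signaled by the model,
    e.g. from 'B)', '(B', or 'The answer is B'."""
    match = re.search(r"\b([A-E])\b", output.upper())
    return match.group(1) if match else None

# Hypothetical quantized model file; `questions` is assumed to be a list
# of dicts with "text", "alternatives", and the ground-truth "answer".
llm = Llama(model_path="llama-2-13b.ggmlv3.q4_0.bin", verbose=False)

correct, times = 0, []
for q in questions:
    prompt = PROMPT.format(question=q["text"], alternatives=q["alternatives"])
    start = time.perf_counter()
    out = llm(prompt, max_tokens=16, temperature=0.0)
    times.append(time.perf_counter() - start)
    if extract_letter(out["choices"][0]["text"]) == q["answer"]:
        correct += 1

print(f"accuracy = {correct / len(questions):.3f}, "
      f"mean inference time = {sum(times) / len(times):.1f} s")
```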
### _Model evaluation_

In order to evaluate the assertiveness of the models in answering the test questions, we adopted the accuracy metric, defined as the number of correctly answered questions divided by the total number of questions, as described in Equation 1.

\[acc=\frac{\#correct}{\#total} \tag{1}\]

One of the problems encountered was how to identify which alternative was predicted by the model, given the generative nature of the text. For the vast majority of the prompts, the model presented a very objective output, containing only one letter representing one of the possible alternatives (A, B, C, D, E). However, in other situations, the model output consisted of a text about the question, followed by the letter representing the answer. In addition to these, we also observed outputs containing long texts without much meaning and without an objective answer. With this in mind, a set of heuristics was defined to capture the alternative selected by the model. The aim of these heuristics is to identify the alternative predicted by the model within the text it returned. For example, in the texts "The answer is B" and "B)", the alternative chosen by the model was B. Table I presents the percentage of questions for which we were able to identify an alternative signaled by the model. A manual inspection was performed to ensure that the heuristics identified all available alternatives.

## V Results and Discussions

This section presents and discusses the results observed from the experiments conducted. The following research questions will be answered:

* How effective are the models on questions in Portuguese?
* How effective are the models on questions translated into English?
* Was there an improvement between LLaMA 1 and LLaMA 2?
* How long does it take to run these models on home machines?

### _Q1 and Q2 - How effective are the models on questions described in Portuguese and English?_

Addressing **Q1** and **Q2**, the accuracy of the models on the question set was evaluated. In Table II, the performance of the models is presented. It can be seen that some models, such as LLaMA 1 7b and 13b, Alpaca 7b, Koala 7b and 13b, and LLaMA 2 7b, performed similarly to a random classifier. This suggests that these models may not be able to adequately understand the questions and provide the correct answers in either English or Portuguese. However, they demonstrated an ability to recognize that the text provided is a question and were able to indicate an alternative, even if incorrect. During the inference phase, a bias phenomenon was observed in the models analyzed. Most of these models showed a consistent tendency to generate a single option as a result. The percentage distribution of the alternatives identified in Portuguese during this phase for each model is illustrated in Figure 5, while the distribution for the English language is represented in Figure 6.

Fig. 5: Distribution of alternatives identified in the models, questions in Portuguese.

Fig. 6: Distribution of alternatives identified in the models, questions in English.

Except for the LLaMA 1 7b, Vicuna 7b and Vicuna 13b, and LLaMA 2 7b and 13b models, all the others showed a significant bias towards alternative A, contrary to the expectation of a balanced distribution among all options.
Notably, the Vicuna 13b model exhibited a bias toward alternative B for both languages, while the LLaMA 1 7b and LLaMA 2 7b models showed a bias toward alternative D in Portuguese, and toward alternatives B and D in English, respectively. The 7-billion-parameter Vicuna and the LLaMA 2 13b were identified as the models with the lowest bias, as they did not show a significant bias toward any of the options or languages. Still, the models seemed to show a more pronounced bias in Portuguese and a less pronounced bias in English. However, the Alpaca 13b, Vicuna 7b and 13b, and LLaMA 2 13b models performed significantly better, with the 7b Vicuna achieving an accuracy rate of approximately 40% for the English language and the 13b Alpaca achieving 40% accuracy for the Portuguese language. The best model evaluated was LLaMA 2 with 13 billion parameters, which achieved an accuracy of 46.8% for Portuguese and 49.3% for English. While these results are distant from those reported by [12] for Chat-GPT, they are quite promising, considering that these models are open-source alternatives, have undergone quantization, and can be run on domestic machines without the need for specialized hardware. As for the assumption that the models would perform better in English than in Portuguese, this was true for LLaMA 1 13b, Koala 7b and 13b, Vicuna 7b and 13b, and LLaMA 2 7b and 13b. The best metrics for each language were 46.8% for Portuguese against 49.3% for English, suggesting that there is indeed an improvement when translating the questions before evaluating the models. To better observe the capacity of the models, the metrics were also compared across the four knowledge areas of the ENEM test: 'Humanities and its technologies'; 'Natural sciences and its technologies'; 'Mathematics and its technologies'; 'Languages, codes and its technologies'. The metrics can be found in Table III. Both for Portuguese and English, the models managed to perform well in the areas of 'humanities and its technologies' and 'codes and its technologies', with the LLaMA 13b having an accuracy of 63.6% and 51.5%, respectively. In the area of 'natural sciences', the result was a little worse, with the LLaMA 2 13b achieving an accuracy of 41.4%. In the area of 'mathematics and its technologies', no LLaMA model performed satisfactorily, with accuracies limited to 24.3% for Portuguese and 26.2% for English. Moreover, in this area, the observed accuracies were worse than a random model in some situations.

### _Q3 - Is there an improvement between the LLaMA models from the first version to the second?_

Looking at the metrics of the models based on LLaMA 1, none managed to beat LLaMA 2's 13 billion parameters. The best LLaMA 1-based models achieved an accuracy of 40% for Portuguese (Alpaca 13b) and 39.9% for English (Vicuna 7b), while LLaMA 2 13b achieved 46.8% for Portuguese and 49.3% for English. This was due to improvements in the base model, as described in [17]. This shows the capacity of open-source language models, and that they can improve even more over time.

### _Q4 - How efficient are the models in terms of time to run?_

Another factor of great importance in evaluating these models is the execution time of the inferences performed.
To answer **Q4**, two experiments were conducted. In each of them, all models performed an inference for each of the questions in the dataset. During the run, the times for performing the inferences (in seconds) were computed. Two machines were used, one equipped with an AMD Ryzen 5 3600x and the other equipped with an Intel i9 9900k. Table IV has the average times for running the questions. Models with 13 billion parameters consistently take longer than models with 7 billion parameters. However, since these models do not require dedicated GPUs, these execution times are not prohibitive and allow the use of these LLMs by any interested party.

## VI Conclusions and Future Works

This study presented a database for evaluating language models in Portuguese, offering a contribution to future research. In addition, we performed an evaluation of quantized language models that can be run on domestic hardware, expanding the dissemination and accessibility of these models, which represent a revolution in the field of natural language processing. While the results may seem underwhelming, it is important to note that these language models are significantly smaller and have been trained with a smaller amount of data compared to the commercially available, closed-source options. Despite these limitations, the results indicate that the open-source models are progressing rapidly and are expected to improve their performance on tasks of this nature. This paper is intended to provide a basis for future research, and therefore we present some ideas that emerged during the development of the study. They are:

* **Database expansion**: In order to restrict the scope of this study, only ENEM exams from the years 2010 to 2022 were considered. However, we believe that the generated scripts can be generalized to other years of ENEM, further expanding this database.
* **Evaluation of these models in other databases**: A similar task would be to evaluate these models on questions from public competitions. As there are many public exams each year, the exams from these competitions can be used to build an even more comprehensive and robust database.
* **Training Models**: The database provided contains a considerable amount of questions. It would be interesting to explore the possibility of training these language models to perform the task of answering questions.
* **Consider other models**: As shown in [9], there are already models trained for the purpose of explaining what their reasoning is for answering questions. Given this, future experiments can look in more depth at the rationale that led the model to a particular answer.
* **Consider multimodal models**: As shown in [12], the GPT-4 model performed impressively well on ENEM questions, in part due to its ability to process visual information in conjunction with the text. It is believed that multimodal models of this type will be available in open source in the near future.
* **Investigate the biases of the models**: Through the experiments conducted in this study, it was not possible to understand the reason for the observed biases in the behavior of the models. Therefore, in future investigations, this phenomenon can be further investigated.
2302.00061
Dynamic Flows on Curved Space Generated by Labeled Data
The scarcity of labeled data is a long-standing challenge for many machine learning tasks. We propose our gradient flow method to leverage the existing dataset (i.e., source) to generate new samples that are close to the dataset of interest (i.e., target). We lift both datasets to the space of probability distributions on the feature-Gaussian manifold, and then develop a gradient flow method that minimizes the maximum mean discrepancy loss. To perform the gradient flow of distributions on the curved feature-Gaussian space, we unravel the Riemannian structure of the space and compute explicitly the Riemannian gradient of the loss function induced by the optimal transport metric. For practical applications, we also propose a discretized flow, and provide conditional results guaranteeing the global convergence of the flow to the optimum. We illustrate the results of our proposed gradient flow method on several real-world datasets and show our method can improve the accuracy of classification models in transfer learning settings.
Xinru Hua, Truyen Nguyen, Tam Le, Jose Blanchet, Viet Anh Nguyen
2023-01-31T19:53:01Z
http://arxiv.org/abs/2302.00061v1
# Dynamic Flows on Curved Space Generated by Labeled Data

###### Abstract

The scarcity of labeled data is a long-standing challenge for many machine learning tasks. We propose our gradient flow method to leverage the existing dataset (i.e., source) to generate new samples that are close to the dataset of interest (i.e., target). We lift both datasets to the space of probability distributions on the feature-Gaussian manifold, and then develop a gradient flow method that minimizes the maximum mean discrepancy loss. To perform the gradient flow of distributions on the curved feature-Gaussian space, we unravel the Riemannian structure of the space and compute explicitly the Riemannian gradient of the loss function induced by the optimal transport metric. For practical applications, we also propose a discretized flow, and provide conditional results guaranteeing the global convergence of the flow to the optimum. We illustrate the results of our proposed gradient flow method on several real-world datasets and show our method can improve the accuracy of classification models in transfer learning settings.

## 1 Introduction

A major challenge in many data science applications is the scarcity of labeled data. Data augmentation methods have been studied in the literature; see, for example, the noise injection methods of Moreno-Barea _et al._ (2018), generative models Yi _et al._ (2019), and Shorten and Khoshgoftaar (2019) for a survey. We consider a setting where one domain has only a few labeled samples for each class, so we cannot train a well-performing classifier with the available data. To alleviate the data scarcity problem in this setting, we propose to enrich the target dataset by generating additional labeled samples. Using generative models is not possible in our setting because they usually require more than a few samples for each class to learn and generate high-quality new samples Gao _et al._ (2018). In our work, we choose a source dataset with extensive labeled data and then flow the labeled data to the target dataset. Precisely, we introduce a novel data augmentation methodology based on a gradient flow approach that minimizes the maximum mean discrepancy (\(\mathrm{MMD}\)) distance between the target and the augmented data. Therefore, by minimizing the \(\mathrm{MMD}\) distance, we are able to obtain an efficient scheme that generates additional labeled data from the target distribution. Our scheme is model-independent and can be applied to any datasets regardless of the number of classes or dimensionality. Mathematically, we consider a feature space \(\mathcal{X}=\mathbb{R}^{m}\) and a _categorical_ label space \(\mathcal{Y}\). We have a source domain dataset consisting of \(N\) samples \((x_{i},y_{i})\in\mathcal{X}\times\mathcal{Y}\) for \(i=1,\ldots,N\), and a target domain dataset of \(M\) samples \((\bar{x}_{j},\bar{y}_{j})\in\mathcal{X}\times\mathcal{Y}\) for \(j=1,\ldots,M\) (with \(M\ll N\)). The ultimate goal of this paper is to generate new samples in the target domain, and we aim to generate new samples whose distribution is as close as possible to the distribution that governs the target domain. Here we introduce a gradient flow method Arbel _et al._ (2019); Mroueh _et al._ (2019) to synthesize new, unseen data samples. Gradient flow is a continuous flow along the path where a considered loss function decreases its value. Because we have extensive source domain samples, it is possible to flow each source sample towards the target data while minimizing the loss function.
The terminal product of the flow will be new samples that sufficiently approximate the distribution of the target domain. Thus, gradient flow is an approach to synthesize new target domain samples, and is a complement to data augmentation methods, like adding random noise. Unfortunately, formulating a gradient flow algorithm for labeled data with a categorical set \(\mathcal{Y}\) is problematic. Indeed, there is no clear metric structure on \(\mathcal{Y}\) with which to define a topological neighborhood; this in turn leads to the difficulty of forming the gradients with respect to the categorical component. To overcome this difficulty, we lift each individual label to a richer structure. For example, a label such as "0" is replaced by a mean vector and a covariance matrix based on the whole distribution of the information associated to this particular label. It is then much more natural to apply gradient flow algorithms in the space of the lifted representation. A gradient flow on the dataset space with this idea was recently proposed in Alvarez-Melis and Fusi (2021) by leveraging a new notion of distance between datasets in Alvarez-Melis and Fusi (2020); Courty _et al._ (2017); Damodaran _et al._ (2018). The main idea behind this approach is to reparametrize the categorical space \(\mathcal{Y}\) using the conditional distribution of the features, which is assumed to be Gaussian, and then construct a gradient flow on the feature-Gaussian space. Nevertheless, the theoretical analysis in Alvarez-Melis and Fusi (2021) focuses solely on the gradients with respect to the features, with no treatment of the flow with respect to the Gaussian component. In fact, the space of Gaussian distributions is not a (flat) vector space, and extracting gradient information depends on the choice of the metric imposed on this Gaussian space. On the other hand, our method computes the full gradient with respect to the Gaussian component (the mean and covariance matrix components that correspond to the label component). Our gradient flows minimize the \(\mathrm{MMD}\) loss function, and thus belong to the family of \(\mathrm{MMD}\) gradient flows that was pioneered in [11, 12], and further extended in [11]. The \(\mathrm{MMD}\) function compares two distributions via their kernel mean embeddings on a _flat_ reproducing kernel Hilbert space (RKHS). In contrast to the Kullback-Leibler divergence flow, the \(\mathrm{MMD}\) flow can employ a sample approximation for the target distribution [10]. Further, the squared \(\mathrm{MMD}\) possesses unbiased sample gradients [1, 13]. However, the existing literature on \(\mathrm{MMD}\) flows focuses on distributions on flat Euclidean spaces. The flow developed in this paper is for distributions on a _curved_ Riemannian feature-Gaussian space. Moreover, our approach is distinct from the flow in [1] because we impose a specific metric on the Gaussian component, and we compute explicitly the Riemannian gradient of the \(\mathrm{MMD}\) loss function with respect to this metric to formulate our flow. Table 1 compares our work with two recent papers on gradient flow in theory and numerical experiments. Recently, generative models [14, 15] have been successful in generating image samples from given distributions. The most important difference from our method is that generative models learn a prior distribution from massive data that are similar to the target data and generate new target samples conditioned on the prior distribution [15, 1].
Comparatively, our algorithm can transfer between two non-similar and non-related distributions, for example, from random Gaussian noise to MNIST in Supplementary B.11. Another benefit of our method is that we provide conditions for global convergence of our algorithms in Section 4, whereas generative models, or more specifically, generative adversarial networks (GANs), currently do not guarantee global convergence [13]. The application of our gradient flow is few-shot transfer learning, where we want to train classifiers with limited labeled data in the target domain. The numerical experiments in Section 5 demonstrate that our gradient flows can effectively augment the target data, and thus can significantly boost the accuracy in the classification task in the few-shot learning setting. Moreover, we run experiments on Tiny ImageNet datasets to highlight that our algorithm is scalable to image data of higher dimension than in recent gradient flow works [1, 12]. We also compare our method with [1], the mixup method [11], and traditional data augmentation methods. Results in Supplementary B.8-B.10 show that our method improves the accuracy in transfer learning more than these methods. Some related works study nonparametric gradient flows using the \(2\)-Wasserstein distance between distributions [1, 12, 13, 14, 15, 16, 17], but only for distributions on Euclidean spaces and a different distance. Nonparametric gradient flows with other metrics include Sliced-Wasserstein Descent [10, 11], Stein Descent [15, 16], and Sobolev Descent [11]. However, they also only consider distributions on Euclidean spaces. In particular, [10] introduce Riemannian structures for the Stein geometry on flat spaces, while ours is for an optimal transport metric on a curved space. Parametric flows for training GANs are studied in [1, 12, 13, 14].

Contributions. We study a gradient flow approach to synthesize new labeled samples related to the target domain. To construct this flow, we consider the space of probability distributions on the feature-Gaussian manifold, and we metrize this space with an optimal transport distance. We summarize the contributions of this paper as follows.

* We study in detail the Riemannian structure of the feature-Gaussian manifold in Section 3, as well as the Riemannian structure of the space of probability measures supported on this manifold in Supplementary A.1.
* We consider a gradient flow that minimizes the squared \(\mathrm{MMD}\) loss function to the target distribution. We describe explicitly the (Riemannian) gradient of the squared \(\mathrm{MMD}\) in Lemma 5, and we provide a partial differential equation describing the evolution of the gradient flow that follows the (Riemannian) steepest descent direction.
* We propose two discretized schemes to approximate the continuous gradient flow equation in Sections 4.1 and 4.2. We provide conditions guaranteeing the global convergence of our gradient flows to the optimum in both schemes.
* In Section 5, we demonstrate numerical results with our method on real-world image datasets. We show that our method can generate high-fidelity images and improve the classification accuracy in transfer learning settings.
Notations. We use \(\mathbb{S}^{n}\) to denote the set of \(n\times n\) real and symmetric matrices, and \(\mathbb{S}^{n}_{++}\subset\mathbb{S}^{n}\) consists of all positive definite matrices. For \(A\in\mathbb{S}^{n}\), \(\mathsf{tr}(A)\coloneqq\sum_{i}A_{ii}\). We use \(\langle\cdot,\cdot\rangle\) and \(\|\cdot\|_{2}\) to denote the standard inner product and norm on Euclidean spaces. Let \(\mathcal{P}(X)\) be the collection of all probability distributions with finite second moment on a metric space \(X\). If \(\varphi:X\to Y\) is a Borel map and \(\nu\in\mathcal{P}(X)\), then the push-forward \(\varphi_{\#}\nu\) is the distribution on \(Y\) given by \(\varphi_{\#}\nu(E)=\nu(\varphi^{-1}(E))\) for all Borel sets \(E\subset Y\). For a function \(f\) of the continuous time variable \(t\), \(f_{t}\) denotes the value of \(f\) at \(t\), while \(\partial_{t}f\) denotes the standard derivative of \(f\) w.r.t. \(t\). Also, \(\delta_{z}\) denotes the Dirac delta measure at \(z\). All proofs are provided in the Supplementary material.

## 2 Labeled Data Synthesis via Gradient Flows of Lifted Distributions

In this section, we describe our approach to synthesize target domain samples using gradient flows. A holistic view of our method is presented in Fig. 1. In the first step, we need to lift the feature-label space \(\mathcal{X}\times\mathcal{Y}\) to a higher dimensional space where a metric can be defined. Consider momentarily the source data samples \((x_{i},y_{i})_{i=1}^{N}\). Notice that this data can be represented as an empirical distribution \(\nu\) on \(\mathcal{X}\times\mathcal{Y}\). More precisely, we have \(\nu=N^{-1}\sum_{i=1}^{N}\delta_{(x_{i},y_{i})}\). As \(\mathcal{Y}\) is discrete, the law of conditional probabilities allows us to dis-integrate \(\nu\) into the conditional distributions \(\nu_{y}\) of \(X|Y=y\) satisfying \(\nu(E\times F)=\int_{F}\nu_{y}(E)\nu^{2}(\mathrm{d}y)\) for every \(E\subset\mathcal{X}\) and \(F\subset\mathcal{Y}\), where \(\nu^{2}\coloneqq N^{-1}\sum_{i=1}^{N}\delta_{y_{i}}\) is the second marginal of \(\nu\) (Ambrosio et al., 2008, Theorem 5.3.1). The lifting procedure is obtained by employing a pre-determined mapping \(\phi:\mathcal{X}\to\mathbb{R}^{n}\), and any categorical value \(y\in\mathcal{Y}\) can now be represented as an \(n\)-dimensional distribution \(\phi_{\#}\nu_{y}\). Using this lifting, any source sample \((x_{i},y_{i})\in\mathcal{X}\times\mathcal{Y}\) is lifted to a point \((x_{i},\phi_{\#}\nu_{y_{i}})\in\mathcal{X}\times\mathcal{P}(\mathbb{R}^{n})\) and the source dataset is representable as an empirical distribution of the form \(N^{-1}\sum_{i=1}^{N}\delta_{(x_{i},\phi_{\#}\nu_{y_{i}})}\). The lifted representation of a categorical value \(y\in\mathcal{Y}\) as an \(n\)-dimensional distribution \(\phi_{\#}\nu_{y}\in\mathcal{P}(\mathbb{R}^{n})\) is advantageous because \(\mathcal{P}(\mathbb{R}^{n})\) is metrizable, for example, using the \(2\)-Wasserstein distance. The downside is that \(\mathcal{P}(\mathbb{R}^{n})\) is infinite dimensional, and encoding the datasets in this lifted representation is not efficient.
To resolve this issue, we assume that \(\phi_{\#}\nu_{y}\) is Gaussian for all \(y\in\mathcal{Y}\), and thus any distribution \(\phi_{\#}\nu_{y}\) can be characterized by the mean vector \(\mu_{y}\in\mathbb{R}^{n}\) and covariance matrix \(\Sigma_{y}\in\mathbb{S}_{++}^{n}\) defined as \(\mu_{y}=\int_{\mathcal{X}}\phi(x)\nu_{y}(\mathrm{d}x)\) and \(\Sigma_{y}=\int_{\mathcal{X}}\big[\phi(x)-\mu_{y}\big]\big[\phi(x)-\mu_{y}\big]^{\top}\nu_{y}(\mathrm{d}x)\) for all \(y\in\mathcal{Y}\), where \(\top\) denotes the transposition of a vector. In real-world settings, the conditional moments of \(\phi(X)|Y\) are sufficiently different for \(y\neq y^{\prime}\), and thus the representations using \((\mu_{y},\Sigma_{y})\) are unlikely to lead to any loss of label information. With this lifting, the source data can thus be represented as an empirical distribution \(\rho^{0}\) on \(\mathbb{R}^{m}\times\mathbb{R}^{n}\times\mathbb{S}_{++}^{n}\) via \(\rho^{0}=N^{-1}\sum_{i=1}^{N}\delta_{(x_{i},\mu_{y_{i}},\Sigma_{y_{i}})}\). By an analogous construction to compute \(\bar{\mu}_{y}\) and \(\bar{\Sigma}_{y}\) using the target data, the target domain data \((\bar{x}_{j},\bar{y}_{j})_{j=1}^{M}\) can be represented as another empirical distribution \(\varrho=M^{-1}\sum_{j=1}^{M}\delta_{(\bar{x}_{j},\bar{\mu}_{\bar{y}_{j}},\bar{\Sigma}_{\bar{y}_{j}})}\). Let us denote the shorthand \(\mathcal{Z}=\mathbb{R}^{m}\times\mathbb{R}^{n}\times\mathbb{S}_{++}^{n}\); then \(\rho^{0}\) and \(\varrho\) are both probability measures on \(\mathcal{Z}\). We refer to \(\rho^{0}\) and \(\varrho\) as the feature-Gaussian representations of the source and target datasets (a numerical sketch of this lifting is given after the remarks below). We now consider the gradient flow associated with the optimization problem

\[\min_{\rho\in\mathcal{P}(\mathcal{Z})}\ \Big\{\mathcal{F}(\rho)\coloneqq\frac{1}{2}\mathrm{MMD}(\rho,\varrho)^{2}\Big\}\]

under the initialization \(\rho=\rho^{0}\). The objective function \(\mathcal{F}(\rho)\) quantifies how far an incumbent solution \(\rho\) is from the target distribution \(\varrho\), measured using the \(\mathrm{MMD}\) distance. In Sections 3 and 4, we provide the necessary ingredients to construct this flow. Suppose that after \(T\) iterations of the discretized gradient flow algorithm, we obtain a distribution \(\rho^{T}\in\mathcal{P}(\mathcal{Z})\) that is sufficiently close to \(\varrho\), i.e., \(\mathcal{F}(\rho^{T})\) is close to zero. Then we can recover new labeled target samples by projecting the atoms of the distribution \(\rho^{T}\) back to \(\mathcal{X}\times\mathcal{Y}\). This projection can be computed efficiently by solving a linear optimization problem, as discussed in Supplementary B.3.

**Remark 1** (Reduction of dimensions).: _If \(m=n\) and \(\phi\) is the identity map, then our lifting procedure coincides with that proposed in (Alvarez-Melis and Fusi, 2020). However, a large \(n\) is redundant, especially when the cardinality of \(\mathcal{Y}\) is low. If \(n\ll m\), then \(\phi\) offers a significant reduction in the number of dimensions, and will speed up the gradient flow algorithms._

**Remark 2** (Generalization to elliptical distributions).: _Our framework can be extended to the symmetric elliptical distributions because the Bures distance for elliptical distributions admits the same closed form as for the Gaussian distributions (Gelbrich, 1990). In this paper, we use \(\phi\) as the t-SNE embedding. According to (van der Maaten and Hinton, 2008), t-SNE's low-dimensional embedded space forms a Student-t distribution, which is an elliptical distribution._
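The following is a minimal numerical sketch of this lifting, assuming an embedding `phi` into \(\mathbb{R}^{n}\) (the paper uses t-SNE) and a small ridge term, an assumption added here, to keep each empirical covariance in \(\mathbb{S}^{n}_{++}\).

```python
import numpy as np

# Sketch: each label y is replaced by the empirical mean and covariance of
# the embedded features phi(x) of its class, so every labeled sample
# (x_i, y_i) becomes one atom (x_i, mu_{y_i}, Sigma_{y_i}) of rho^0.
def lift_dataset(X, y, phi, eps=1e-6):
    Z = phi(X)                                   # features mapped to R^n
    stats = {}
    for label in np.unique(y):
        Zc = Z[y == label]
        mu = Zc.mean(axis=0)
        diff = Zc - mu
        # Empirical covariance plus a small ridge to stay positive definite.
        Sigma = diff.T @ diff / len(Zc) + eps * np.eye(Z.shape[1])
        stats[label] = (mu, Sigma)
    # One atom per sample; the empirical distribution rho^0 places equal
    # mass 1/N on each atom.
    return [(x, *stats[label]) for x, label in zip(X, y)]
```

The same construction applied to the target data yields the atoms of \(\varrho\).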
| Paper | Dataset | On curved Riemannian space | Gradient has mean and covariance component |
| --- | --- | --- | --- |
| (Alvarez-Melis and Fusi, 2021) | synthetic, *NIST, and CIFAR10 | ✗ | ✗ |
| (Arbel et al., 2019) | synthetic | ✗ | ✗ |
| Ours | synthetic, *NIST, and TinyImageNet | ✓ | ✓ |

Table 1: To the best of our knowledge, we provide the _first_ results on the full gradient of the features and lifted labels on a curved Riemannian space. We also conduct numerical experiments on the highest-dimension real-world datasets.

Figure 1: Schematic view of our approach: The source and target datasets are first lifted to distributions \(\rho^{0}\) and \(\varrho\) on the feature-Gaussian space (left box). We then run a gradient flow for \(T\) iterations to get a terminal distribution \(\rho^{T}\) (middle). Atoms of \(\rho^{T}\) are projected to get labeled target samples (right).

## 3 Riemannian Geometry of \(\mathcal{Z}\) and \(\mathcal{P}(\mathcal{Z})\)

If we opt to measure the distance between two Gaussian distributions using the 2-Wasserstein metric, then this choice induces a natural distance \(d\) on the space \(\mathcal{Z}=\mathbb{R}^{m}\times\mathbb{R}^{n}\times\mathbb{S}^{n}_{++}\) prescribed as

\[d\big((x_{1},\mu_{1},\Sigma_{1}),(x_{2},\mu_{2},\Sigma_{2})\big)\coloneqq\big[\|x_{1}-x_{2}\|_{2}^{2}+\|\mu_{1}-\mu_{2}\|_{2}^{2}+\mathbb{B}(\Sigma_{1},\Sigma_{2})^{2}\big]^{\frac{1}{2}}, \tag{3.1}\]

where \(\mathbb{B}\) is the Bures metric on \(\mathbb{S}^{n}_{++}\) given by \(\mathbb{B}(\Sigma_{1},\Sigma_{2})\coloneqq\big[\mathrm{tr}\big(\Sigma_{1}+\Sigma_{2}-2[\Sigma_{1}^{\frac{1}{2}}\Sigma_{2}\Sigma_{1}^{\frac{1}{2}}]^{\frac{1}{2}}\big)\big]^{\frac{1}{2}}\). As \(\mathbb{B}\) is a metric on \(\mathbb{S}^{n}_{+}\) [Bhatia _et al._, 2019, p.167], \(d\) is hence a product metric on \(\mathcal{Z}\). In this section, we first study the non-Euclidean geometry of \(\mathcal{Z}\) under the ground metric \(d\). Second, we investigate the Riemannian structure on \(\mathcal{P}(\mathcal{Z})\), the space of all distributions supported on \(\mathcal{Z}\) with finite second moment, that is induced by the optimal transport distance. These Riemannian structures are required to define the Riemannian gradients of any loss functionals on \(\mathcal{P}(\mathcal{Z})\), and will remain important in our development of the gradient flow for the squared \(\operatorname{MMD}\). The space \(\mathcal{Z}\) is not a linear vector space. In this section, we reveal the Riemannian structure on \(\mathcal{Z}\) associated to the ground metric \(d\). As we shall see, \(\mathcal{Z}\) is a curved space: its geodesics are not straight lines and involve solutions of the Lyapunov equation. For any positive definite matrix \(\Sigma\in\mathbb{S}^{n}_{++}\) and any symmetric matrix \(V\in\mathbb{S}^{n}\), the Lyapunov equation

\[H\Sigma+\Sigma H=V \tag{3.2}\]

has a unique solution \(H\in\mathbb{S}^{n}\) [Bhatia, 1997, Theorem VII.2.1]. Let \(\operatorname{L}_{\Sigma}[V]\) denote this unique solution \(H\). The space \(\mathbb{S}^{n}_{++}\) is a Riemannian manifold with the Bures metric \(\mathbb{B}\) as the associated distance function, see [Takatsu, 2011, Proposition A].
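The two primitives just introduced, the Lyapunov operator \(\operatorname{L}_{\Sigma}[V]\) and the Bures metric, have direct numerical counterparts; a minimal SciPy sketch for dense symmetric positive definite matrices is:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, sqrtm

def lyapunov_operator(Sigma, V):
    """L_Sigma[V]: the unique symmetric H solving H Sigma + Sigma H = V,
    as in equation (3.2)."""
    return solve_continuous_lyapunov(Sigma, V)

def bures_distance(S1, S2):
    """B(S1, S2) = [tr(S1 + S2 - 2 (S1^{1/2} S2 S1^{1/2})^{1/2})]^{1/2}."""
    root = sqrtm(S1)
    cross = np.real(sqrtm(root @ S2 @ root))   # discard tiny imaginary noise
    return np.sqrt(max(np.trace(S1 + S2 - 2 * cross), 0.0))

def bures_geodesic(Sigma, V, t):
    """Geodesic (3.4) on S^n_{++}: Gamma(t) = (I + t L) Sigma (I + t L)."""
    L = lyapunov_operator(Sigma, V)
    A = np.eye(Sigma.shape[0]) + t * L
    return A @ Sigma @ A
```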
Since \(\mathcal{Z}\) is the product of two Euclidean spaces and \(\mathbb{S}^{n}_{++}\), this gives rise to the following geometric structure for \(\mathcal{Z}\).

**Proposition 3** (Geometry of \(\mathcal{Z}\)).: _The space \(\mathcal{Z}\) is a Riemannian manifold: at each point \(z=(x,\mu,\Sigma)\in\mathcal{Z}\), the tangent space is \(\operatorname{T}_{z}\mathcal{Z}=\mathbb{R}^{m}\times\mathbb{R}^{n}\times\mathbb{S}^{n}\) and the Riemannian metric is_

\[\big\langle(w_{1},v_{1},V_{1}),(w_{2},v_{2},V_{2})\big\rangle_{z}\coloneqq\langle w_{1},w_{2}\rangle+\langle v_{1},v_{2}\rangle+\langle V_{1},V_{2}\rangle_{\Sigma} \tag{3.3}\]

_for two tangent vectors \((w_{1},v_{1},V_{1})\) and \((w_{2},v_{2},V_{2})\) in \(\mathbb{R}^{m}\times\mathbb{R}^{n}\times\mathbb{S}^{n}\), where \(\langle V_{1},V_{2}\rangle_{\Sigma}\coloneqq\mathrm{tr}\big(\operatorname{L}_{\Sigma}[V_{1}]\,\Sigma\,\operatorname{L}_{\Sigma}[V_{2}]\big)\). Moreover, the distance function corresponding to this Riemannian metric coincides with the distance \(d\) given by (3.1)._

As \(\mathcal{Z}\) is a product Riemannian manifold, any geodesic in \(\mathcal{Z}\) is of the form \((\theta,\gamma,\Gamma)\) with \(\theta\), \(\gamma\) being the Euclidean geodesics (straight lines) and \(\Gamma\) being a geodesic in the Riemannian manifold \(\mathbb{S}^{n}_{++}\). More precisely, for each \(\Sigma\in\mathbb{S}^{n}_{++}\) and each tangent vector \(V\in\mathbb{S}^{n}\), the geodesic in the manifold \(\mathbb{S}^{n}_{++}\) emanating from \(\Sigma\) with direction \(V\) is given by

\[\Gamma(t)=(I+t\operatorname{L}_{\Sigma}[V])\,\Sigma\,(I+t\operatorname{L}_{\Sigma}[V])\quad\text{for }t\in J^{*}, \tag{3.4}\]

where \(J^{*}\) is the open interval about the origin given by \(J^{*}=\{t\in\mathbb{R}:\,I+t\operatorname{L}_{\Sigma}[V]\in\mathbb{S}^{n}_{++}\}\) [Malago _et al._, 2018]. As a consequence, for each point \((x,\mu,\Sigma)\in\mathcal{Z}\) and each tangent vector \((w,v,V)\in\mathbb{R}^{m}\times\mathbb{R}^{n}\times\mathbb{S}^{n}\), the Riemannian exponential map in \(\mathcal{Z}\) for \(t\in J^{*}\) is given by

\[\exp_{(x,\mu,\Sigma)}(t(w,v,V))\coloneqq(\theta(t),\gamma(t),\Gamma(t)), \tag{3.5}\]

where \(\theta(t)\coloneqq x+tw\), \(\gamma(t)\coloneqq\mu+tv\), and \(\Gamma(t)\) is defined by (3.4). By definition, \(t\mapsto\exp_{(x,\mu,\Sigma)}(t(w,v,V))\) is the geodesic emanating from \((x,\mu,\Sigma)\) with direction \((w,v,V)\). Given the Riemannian metric (3.3), one can define the corresponding notions of gradient and divergence [Lee, 2003]. For a differentiable function \(\varphi:\mathcal{Z}\to\mathbb{R}\), its gradient \(\widetilde{\nabla}_{d}\varphi(z)\) w.r.t. the metric \(d\) defined by (3.1) is the unique element in the tangent space \(\mathbb{R}^{m}\times\mathbb{R}^{n}\times\mathbb{S}^{n}\) satisfying

\[\big\langle\widetilde{\nabla}_{d}\varphi(z),\left(w,v,V\right)\big\rangle_{z}=D\varphi_{z}(w,v,V)\]

for all \((w,v,V)\in\mathbb{R}^{m}\times\mathbb{R}^{n}\times\mathbb{S}^{n}\), with \(D\varphi_{z}(w,v,V)\) denoting the standard directional derivative of \(\varphi\) at \(z\) in the direction \((w,v,V)\).
By exploiting the special form of \(\langle\cdot,\cdot\rangle_{z}\) in (3.3), we can compute \(\widetilde{\nabla}_{d}\varphi(z)\) explicitly: **Lemma 4** (Gradients).: _For a differentiable function \(\varphi:\mathcal{Z}\to\mathbb{R}\), we have for \(z=(x,\mu,\Sigma)\) that_ \[\widetilde{\nabla}_{d}\varphi(z)=\big(\nabla_{x}\varphi(z),\;\nabla_{\mu}\varphi(z),\;2[\nabla_{\Sigma}\varphi(z)]\Sigma+2\Sigma[\nabla_{\Sigma}\varphi(z)]\big), \tag{3.6}\] _where \((\nabla_{x},\nabla_{\mu},\nabla_{\Sigma})\) are the standard (Euclidean) gradients of the respective components._ The last component in formula (3.6) for \(\widetilde{\nabla}_{d}\varphi\) reflects the curved geometry of \(\mathcal{Z}\), and can be interpreted as the Riemannian gradient of the function \(\Sigma\mapsto\varphi(x,\mu,\Sigma)\) w.r.t. the Bures distance \(\mathbb{B}\). For a continuous vector field \(\Phi:\mathcal{Z}\to\mathbb{R}^{m}\times\mathbb{R}^{n}\times\mathbb{S}^{n}\) and a distribution \(\rho\in\mathcal{P}(\mathcal{Z})\), the divergence \(\operatorname{div}_{d}(\rho\Phi)\) is the signed measure on \(\mathcal{Z}\) satisfying the integration by parts formula \[\int_{\mathcal{Z}}\varphi(z)\operatorname{div}_{d}(\rho\Phi)(\mathrm{d}z)=-\int_{\mathcal{Z}}\langle\Phi(z),\widetilde{\nabla}_{d}\varphi(z)\rangle_{z}\;\rho(\mathrm{d}z)\] for every differentiable function \(\varphi:\mathcal{Z}\to\mathbb{R}\) with compact support. In case \(\rho\) has a density w.r.t. the Riemannian volume form on \(\mathcal{Z}\), this definition coincides with the standard divergence operator induced by the Riemannian metric (3.3). The optimal transport distance and its induced Riemannian metric on the space \(\mathcal{P}(\mathcal{Z})\) are relegated to Supplementary A.1.

## 4 Gradient Flow for Maximum Mean Discrepancy

As \(\mathcal{P}(\mathcal{Z})\) is an infinite dimensional curved space, many machine learning methods based on finite dimensional or linear structure cannot be directly applied to this manifold. To circumvent this problem, we use a positive definite kernel to map \(\mathcal{P}(\mathcal{Z})\) to a RKHS and then perform our analysis on it. Let \(k\) be a positive definite kernel on \(\mathcal{Z}\), and let \(\mathcal{H}\) be the RKHS generated by \(k\). The inner product on \(\mathcal{H}\) is denoted by \(\langle\cdot,\cdot\rangle_{\mathcal{H}}\), and the kernel mean embedding \(\rho\in\mathcal{P}(\mathcal{Z})\longmapsto\mathbf{m}_{\rho}(\cdot)\in\mathcal{H}\) is given by \(\mathbf{m}_{\rho}(z)\coloneqq\int_{\mathcal{Z}}k(z,w)\,\rho(\mathrm{d}w)\) for \(z\) in \(\mathcal{Z}\). The \(\operatorname{MMD}\) [Gretton _et al._, 2012] between \(\rho\in\mathcal{P}(\mathcal{Z})\) and \(\varrho\in\mathcal{P}(\mathcal{Z})\) is defined as the largest difference in expectations between the two distributions over all test functions in the unit ball of \(\mathcal{H}\) (see Supplementary A.3). Moreover, it can be expressed by \(\mathrm{MMD}(\rho,\varrho)=\|\mathbf{m}_{\rho}-\mathbf{m}_{\varrho}\|_{\mathcal{H}}\). When \(k\) is characteristic, the kernel mean embedding \(\rho\mapsto\mathbf{m}_{\rho}\) is injective and therefore \(\mathrm{MMD}(\rho,\varrho)=0\) if and only if \(\rho=\varrho\). Consider the loss function \(\mathcal{F}[\rho]\coloneqq\frac{1}{2}\mathrm{MMD}(\rho,\varrho)^{2}=\frac{1}{2}\|\mathbf{m}_{\rho}-\mathbf{m}_{\varrho}\|_{\mathcal{H}}^{2}\). As explained in the introduction, there are three advantages of \(\mathrm{MMD}\) over the Kullback-Leibler divergence: its associated gradient flow can employ a sample approximation for the target distribution, the input distribution \(\rho\) does not have to be absolutely continuous w.r.t.
the target distribution \(\varrho\), and the squared \(\mathrm{MMD}\) possesses unbiased sample gradients. For each \(\rho\), the Riemannian gradient \(\mathrm{grad}\,\mathcal{F}[\rho]\) is defined as the unique element in \(\mathrm{T}_{\rho}\mathcal{P}(\mathcal{Z})\) satisfying \(g_{\rho}(\mathrm{grad}\,\mathcal{F}[\rho],\zeta)=\frac{\mathrm{d}}{\mathrm{d}t}\Big{|}_{t=0}\mathcal{F}[\rho_{t}]\) for every differentiable curve \(t\mapsto\rho_{t}\in\mathcal{P}(\mathcal{Z})\) passing through \(\rho\) at \(t=0\) with tangent vector \(\partial_{t}\rho_{t}|_{t=0}=\zeta\). By using the Riemannian metric tensor (A.3), we can compute this gradient explicitly. **Lemma 5** (Gradient formula).: _The Riemannian gradient of \(\mathcal{F}\) satisfies \(\mathrm{grad}\,\mathcal{F}[\rho]=-\mathrm{div}_{d}\left(\rho\widetilde{\nabla}_{d}[\mathbf{m}_{\rho}-\mathbf{m}_{\varrho}]\right).\)_ The Riemannian gradient \(\mathrm{grad}\,\mathcal{F}\) on \(\mathcal{P}(\mathcal{Z})\) depends not only on the gradient operator \(\widetilde{\nabla}_{d}\) but also on the divergence operator. Using Lemma 5, we can rewrite the gradient flow equation \(\partial_{t}\rho_{t}=-\mathrm{grad}\,\mathcal{F}[\rho_{t}]\) explicitly as \[\partial_{t}\rho_{t}=\mathrm{div}_{d}\big(\rho_{t}\widetilde{\nabla}_{d}[\mathbf{m}_{\rho_{t}}-\mathbf{m}_{\varrho}]\big)\quad\text{for}\quad t\geq 0. \tag{4.1}\] The next result exhibits the rate at which \(\mathcal{F}\) decreases its value along the flow. **Proposition 6** (Rate of decrease).: _Along the gradient flow \(t\mapsto\rho_{t}\in\mathcal{P}(\mathcal{Z})\) given by (4.1), we have_ \[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{F}[\rho_{t}]=-\int_{\mathcal{Z}}\big\|\widetilde{\nabla}_{d}[\mathbf{m}_{\rho_{t}}-\mathbf{m}_{\varrho}]\big\|_{z}^{2}\,\rho_{t}(\mathrm{d}z)\quad\text{for}\quad t\geq 0.\] Proposition 6 implies that \(\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{F}[\rho_{t}]=0\) if and only if \(\widetilde{\nabla}_{d}[\mathbf{m}_{\rho_{t}}-\mathbf{m}_{\varrho}](z)=0\) for every \(z\) in the support of the distribution \(\rho_{t}\). Thus, the objective function will decrease whenever the gradient \(\widetilde{\nabla}_{d}[\mathbf{m}_{\rho_{t}}-\mathbf{m}_{\varrho}]\) is not identically zero.

### Riemannian Forward Euler Scheme

We propose the Riemannian version of the forward Euler scheme to discretize the continuous flow (4.1): \[\boxed{\rho^{\tau+1}=\exp(s_{\tau}\Phi^{\tau})_{\#}\rho^{\tau}}\quad\text{with}\ \Phi^{\tau}\coloneqq-\widetilde{\nabla}_{d}[\mathbf{m}_{\rho^{\tau}}-\mathbf{m}_{\varrho}], \tag{4.2}\] where \(s_{\tau}>0\) is the step size. Here, for a vector field \(\Phi=(\Phi_{1},\Phi_{2},\Phi_{3}):\mathcal{Z}\to\mathbb{R}^{m}\times\mathbb{R}^{n}\times\mathbb{S}^{n}\) and for \(\varepsilon\geq 0\), \(\exp(\varepsilon\Phi):\mathcal{Z}\to\mathcal{Z}\) is the Riemannian exponential map induced by (3.5), i.e., for \(z=(x,\mu,\Sigma)\in\mathcal{Z}\): \[\exp_{z}(\varepsilon\Phi(z))=\begin{pmatrix}x+\varepsilon\Phi_{1}(z)\\ \mu+\varepsilon\Phi_{2}(z)\\ (I+\varepsilon\mathrm{L}_{\Sigma}[\Phi_{3}(z)])\Sigma(I+\varepsilon\mathrm{L}_{\Sigma}[\Phi_{3}(z)])\end{pmatrix}.\] Notice in the above equation that the input \(z\) simultaneously affects the base point of the exponential map \(\exp_{z}\) and the direction \(\Phi(z)\). This map is the \(\varepsilon\)-perturbation of the identity map along geodesics with directions \(\Phi\).
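As a concrete illustration of one forward-Euler step on the product manifold \(\mathcal{Z}\), here is a minimal sketch of the exponential-map update applied to a single particle; the decomposition of the direction into `(w, v, V)` follows (3.5), the Lyapunov solve uses SciPy, and the placeholder direction `Phi` stands in for the true MMD gradient, so all names are our own illustrative choices.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def exp_map(z, direction, step):
    """Riemannian exponential map on Z = R^m x R^n x S^n_{++}, cf. (3.5)."""
    x, mu, Sigma = z
    w, v, V = direction
    H = solve_continuous_lyapunov(Sigma, V)      # H = L_Sigma[V]
    A = np.eye(Sigma.shape[0]) + step * H
    return (x + step * w, mu + step * v, A @ Sigma @ A.T)

# One step of scheme (4.2) for one particle with a hypothetical direction Phi.
m, n = 4, 2
z = (np.zeros(m), np.zeros(n), np.eye(n))
Phi = (np.ones(m), np.ones(n), 0.1 * np.eye(n))  # placeholder, not the real gradient
z_next = exp_map(z, Phi, step=0.05)
print(z_next[2])  # covariance stays symmetric positive definite for small steps
```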
When \(\rho^{\tau}=N^{-1}\sum_{i=1}^{N}\delta_{z_{i}^{\tau}}\) is an empirical distribution, scheme (4.2) flows each particle \(z_{i}^{\tau}\) to the new position \(z_{i}^{\tau+1}=\exp_{z_{i}^{\tau}}(s_{\tau}\Phi(z_{i}^{\tau}))\). The next lemma shows that \(\Phi^{\tau}\) is the steepest descent direction for \(\mathcal{F}\) w.r.t. the exponential map among all directions in the space \(\mathbb{L}^{2}(\rho^{\tau})\), which is the collection of all vector fields \(\Phi\) on \(\mathcal{Z}\) satisfying \(\|\Phi\|_{\mathbb{L}^{2}(\rho^{\tau})}^{2}\coloneqq\int_{\mathcal{Z}}\|\Phi(z)\|_{2}^{2}\rho^{\tau}(\mathrm{d}z)<\infty\). **Lemma 7** (Steepest descent direction).: _Fix a distribution \(\rho^{\tau}\in\mathcal{P}(\mathcal{Z})\). For any vector field \(\Phi:\mathcal{Z}\to\mathbb{R}^{m}\times\mathbb{R}^{n}\times\mathbb{S}^{n}\), we have_ \[\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\Big{|}_{\varepsilon=0}\mathcal{F}[\exp(\varepsilon\Phi)_{\#}\rho^{\tau}]=\int_{\mathcal{Z}}\big\langle\widetilde{\nabla}_{d}[\mathbf{m}_{\rho^{\tau}}-\mathbf{m}_{\varrho}](z),\Phi(z)\big\rangle_{z}\,\rho^{\tau}(\mathrm{d}z).\] _If \(\hat{\Phi}^{\tau}\) is the unit vector field (w.r.t. the \(\|\cdot\|_{\mathbb{L}^{2}(\rho^{\tau})}\) norm) in the direction of \(\Phi^{\tau}\) given in (4.2), then_ \[\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\big{|}_{\varepsilon=0}\mathcal{F}[\exp(\varepsilon\hat{\Phi}^{\tau})_{\#}\rho^{\tau}]=-\|\widetilde{\nabla}_{d}[\mathbf{m}_{\rho^{\tau}}-\mathbf{m}_{\varrho}]\|_{\mathbb{L}^{2}(\rho^{\tau})}\] _and this is the fastest decay rate among all unit directions \(\Phi\) in \(\mathbb{L}^{2}(\rho^{\tau})\)._ It follows from Lemma 7 that the discrete scheme (4.2) satisfies the Riemannian gradient descent property: if \(\widetilde{\nabla}_{d}[\mathbf{m}_{\rho^{\tau}}-\mathbf{m}_{\varrho}]\) is nonzero and if \(s_{\tau}>0\) is chosen sufficiently small, then \(\mathcal{F}[\rho^{\tau+1}]<\mathcal{F}[\rho^{\tau}]\). In Proposition 14 in the Supplementary, we quantify the amount of decrease of \(\mathcal{F}\) at each iteration. Algorithm 1 implements the flow (4.2) iteratively. Each iteration in Algorithm 1 has complexity \(O(N(Nm+n^{3}))\), where \(m\) is the feature's dimension, \(n\) is the reduced dimension (\(n\ll m\)), and \(N\) is the number of particles. ``` 1: Input: a source distribution \(\rho^{0}=N^{-1}\sum_{i=1}^{N}\delta_{z_{i}^{0}}\), a target distribution \(\varrho=M^{-1}\sum_{j=1}^{M}\delta_{\bar{z}_{j}}\), a number of iterations \(T\), a sequence of step sizes \(s_{\tau}>0\) with \(\tau=0,1,\ldots,T\), and a kernel \(k\) 2: Initialization: compute \(\bar{\Psi}(z)=M^{-1}\sum_{j=1}^{M}\widetilde{\nabla}_{d}^{1}k(z,\bar{z}_{j})\), where \(\widetilde{\nabla}_{d}^{1}k(z,\bar{z}_{j})\) denotes \(\widetilde{\nabla}_{d}\) applied to \(z\mapsto k(z,\bar{z}_{j})\) 3: for each \(\tau=0,\ldots,T-1\) do 4: compute \(\Psi^{\tau}(z)=N^{-1}\sum_{i=1}^{N}\widetilde{\nabla}_{d}^{1}k(z,z_{i}^{\tau})\) 5: for \(i=1,\ldots,N\) do 6: \(z_{i}^{\tau+1}\leftarrow\exp_{z_{i}^{\tau}}\big(s_{\tau}(\bar{\Psi}-\Psi^{\tau})(z_{i}^{\tau})\big)\) 7: end for 8: end for 9: Output: \(\rho^{T}=N^{-1}\sum_{i=1}^{N}\delta_{z_{i}^{T}}\) ``` **Algorithm 1** Discretized Gradient Flow Algorithm for Scheme (4.2) The convergence analysis in [1] does not apply even in the case of Euclidean spaces. In general, there is a possibility that \(\mathrm{MMD}(\rho_{t},\varrho)\) does not decrease to zero as \(t\to\infty\).
In view of Proposition 6, this happens if the solutions \(\rho_{t}\) are trapped inside the set \(\big\{\rho:\ \int_{\mathcal{Z}}\big\|\widetilde{\nabla}_{d}[\mathbf{m}_{\rho}-\mathbf{m}_{\varrho}]\big\|_{z}^{2}\,\rho(\mathrm{d}z)=0\big\}\). For each distribution \(\rho\) on \(\mathcal{Z}\), we define in Supplementary A.3 a symmetric, linear, and positive operator \(\mathbb{K}_{\rho}:\mathcal{H}\to\mathcal{H}\) with the property that \(\big\langle\mathbb{K}_{\rho}[\mathbf{m}_{\rho}-\mathbf{m}_{\varrho}],\mathbf{m}_{\rho}-\mathbf{m}_{\varrho}\big\rangle_{\mathcal{H}}=\int_{\mathcal{Z}}\big\|\widetilde{\nabla}_{d}[\mathbf{m}_{\rho}-\mathbf{m}_{\varrho}]\big\|_{z}^{2}\,\rho(\mathrm{d}z)\). We further show in Proposition 16 that \(\rho_{t}\) globally converges in \(\mathrm{MMD}\) if the minimum eigenvalue \(\lambda_{t}\) of the operator \(\mathbb{K}_{\rho_{t}}\) satisfies an integrability condition.

### Noisy Riemannian Forward Euler Scheme

The analysis in Section 4.1 reveals that the gradient flows suffer from convergence issues if the residual \(\mathbf{m}_{\rho_{t}}-\mathbf{m}_{\varrho}\) belongs to the null space of the operator \(\mathbb{K}_{\rho_{t}}\). To resolve this, we employ graduated optimization [1, 1, 2] used for non-convex optimization in Euclidean spaces. Specifically, we modify algorithm (4.2) by injecting Gaussian noise into the exponential map at each iteration \(\tau\) to obtain \[\rho^{\tau+1}=\exp(s_{\tau}\Phi^{\tau})_{\#}\rho^{\tau,\beta_{\tau}}\quad\text{with}\ f^{\beta_{\tau}}:(z,u)\mapsto\exp_{z}(\beta_{\tau}u),\quad\rho^{\tau,\beta_{\tau}}\coloneqq f^{\beta_{\tau}}_{\#}\big(\rho^{\tau}\otimes\mathcal{N}(0,I)\big), \tag{4.3}\] where \(\beta_{\tau}\geq 0\) is the noise level at iteration \(\tau\). **Transfer learning.** In few-shot transfer learning, only a handful of labeled samples are available from the target domain: for example, in 1-shot learning, only 1 image per class from the target domain is selected to form the target dataset \(D=(\bar{x}_{j},\bar{y}_{j})_{j=1}^{M}\). We then perform the noisy gradient flow scheme (4.3) from the source dataset to the target dataset to get \(N\) new samples \(S_{T}=(x_{i}^{T},y_{i}^{T})_{i=1}^{N}\). With the target dataset \(D\) and new samples \(S_{T}\), we can retrain the classifier \(P\). Similarly, we can also train new classifiers from scratch using datasets \(D\) and \(D\cup S_{T}\). Finally, we test the classifiers on the test set of the target domain. Fig. 3 presents the accuracy of five transfer learning strategies on four pairs of source and target domains. For the labels above the plot, labels without \(P\) mean training a new classifier from scratch, whereas labels with \(P\) mean transferring the pre-trained classifier. \(D\) and \(S_{T}\) represent the samples in the target domain and our flowed samples.
We observe a common trend that the addition of the flowed samples \(S_{T}\) always improves the accuracy of the classifiers, as we compare \(D\cup S_{T}\) with \(D\) and compare \(P\cup D\cup S_{T}\) with \(P\cup D\). Moreover, the data augmentation with \(S_{T}\) leads to a higher increase of accuracy for 1-shot learning, where the data scarcity problem is more severe. The transfer learning results for SVHN and TIN datasets are provided in Supplementary B.7. Although few-shot learning is more challenging due to the high complexity of the datasets, the addition of \(S_{T}\) still improves the accuracy. We also compare with [1]2, the mixup method [1], and image augmentation methods; see results in Supplementary B.8-B.10. Footnote 2: The only gradient flow work that has experiments on *NIST datasets, but it does not run experiments on TIN and SVHN. **Conclusions.** This paper focuses on a gradient flow approach to generate new labeled data samples in the target domain. To overcome the discrete nature of the labels, we represent datasets as distributions on the feature-Gaussian space, and the flow is formulated to minimize the \(\mathrm{MMD}\) loss function under an optimal transport metric. Contrary to existing gradient flows on linear structures, our flows are developed on the _curved_ Riemannian manifold of Gaussian distributions. We provide an explicit formula for the Riemannian gradient of the \(\mathrm{MMD}\) loss, and analyze in detail the flow equations and the convergence properties of both the continuous and discretized forms. The numerical experiments demonstrate that our method can efficiently generate high-fidelity labeled training data for real-world datasets and improve the classification accuracy in few-shot learning. The main limitation lies in the assumption that the data of one label forms an elliptical distribution. Figure 3: Average target domain accuracy on the test split for transfer learning with one-shot (left) and five-shot (right). Results are taken over 10 independent replications, and the range of accuracy is displayed by the error bars. Figure 2: Sample path visualizations for five pairs of source-target domains. The original image and additional results are in the supplementary. ## Ethical Statement Our work has positive societal impacts, because it can help reduce repetitive data collection and labeling work. It does not have foreseeable negative societal impacts at the current stage.
2309.13409
Time-Series Forecasting: Unleashing Long-Term Dependencies with Fractionally Differenced Data
This study introduces a novel forecasting strategy that leverages the power of fractional differencing (FD) to capture both short- and long-term dependencies in time series data. Unlike traditional integer differencing methods, FD preserves memory in series while stabilizing it for modeling purposes. By applying FD to financial data from the SPY index and incorporating sentiment analysis from news reports, this empirical analysis explores the effectiveness of FD in conjunction with binary classification of target variables. Supervised classification algorithms were employed to validate the performance of FD series. The results demonstrate the superiority of FD over integer differencing, as confirmed by Receiver Operating Characteristic/Area Under the Curve (ROCAUC) and Matthews Correlation Coefficient (MCC) evaluations.
Sarit Maitra, Vivek Mishra, Srashti Dwivedi, Sukanya Kundu, Goutam Kumar Kundu
2023-09-23T15:42:54Z
http://arxiv.org/abs/2309.13409v4
# Time-Series Forecasting: Unleashing Long-Term Dependencies with Fractionally Differenced Data ###### Abstract This study introduces a novel forecasting strategy that leverages the power of fractional differencing (FD) to capture both short- and long-term dependencies in time series data. Unlike traditional integer differencing methods, FD preserves memory in series while stabilizing it for modeling purposes. By applying FD to financial data from the SPY index and incorporating sentiment analysis from news reports, this empirical analysis explores the effectiveness of FD in conjunction with binary classification of target variables. Supervised classification algorithms were employed to validate the performance of FD series. The results demonstrate the superiority of FD over integer differencing, as confirmed by Receiver Operating Characteristic/Area Under the Curve (ROCAUC) and Matthews Correlation Coefficient (MCC) evaluations. Keywords: classification; forecasting; fractional difference; matthews correlation; time-series. ## I Introduction The shocks or fluctuations in a time series can have a lasting impact and affect future values over an extended period. This phenomenon has garnered significant attention from both academia and practitioners in the field of forecasting. Several researchers, such as Doukhan et al. (2002), Robinson (1995), Mikosch & Starica (2004), Nguyen et al. (2020), and Zhang et al. (2018), have contributed to the understanding of long-range dependence (LRD) and its relevance in financial price series forecasting. As we go further back in a time series with short-range dependence, the influence of past values on present values rapidly decreases. However, this impact lasts far longer in time series with long-range dependence, frequently leading to a slow decay of the autocorrelation functions. This is characterized by the persistence of shocks and the extended influence of past observations. In the past, researchers highlighted the presence of cumulative LRD in time series and claimed that it causes non-linearity (e.g., Kitagawa, 1987; Haubrich, 1993; Granger & Joyeux, 1980). There has been a shift beyond a mere reliance on mean values, and non-Gaussian modelling techniques have emerged to represent the underlying patterns and fluctuations in price series. When it comes to price forecasting, researchers like David et al. (2017) and Asl et al. (2017) have emphasized the significance of such modelling approaches in comprehending the complexities of market dynamics. Interestingly, Serinaldi (2010) contends that, despite being noisy, real-world time series display persistent behavior in their observations. This realization has broadened the area of research in this field, with economists recognizing that financial series may exhibit LRD due to stochastic dependence on both current and distant past values. The research landscape has expanded because of the pioneering approaches developed in the past by Granger and Joyeux (1980) and Hosking (1981). In a recent study, Castellano et al. (2020) highlighted the importance of LRD. Their study focused on time series generated by stochastic processes and highlighted the intricate relationship between LRD and the observed autocorrelation patterns. LRD can therefore reveal how firmly systems depend on prior realizations and, subsequently, how quickly they bounce back from positive or negative shocks. Our argument here is centred against integer differencing, which is widely used to stationarize price series.
This stationarizing process results in the loss of important series memory, which is critical for the predictive power of a model. Ayadi et al. (2009) proposed fractional integration as a solution to the complexity of modelling time series. Financial time series often exhibit long-range dependence, which means that past values have a significant impact on future values over extended time horizons. This property can make modeling and forecasting challenging. FD can be used to reduce the long-term dependencies in the data by applying a fractional difference that is less than 1. This can make the data more amenable to modeling with traditional methods that assume shorter memory or independence. Through this work, we aim to determine the best differencing strategy that preserves crucial series memory while enabling successful prediction of future observations, by studying the trade-off between stationarity and memory. In the past, Hosking (1981) introduced an autoregressive moving average model with FD to conceptualize the idea of LRD in time series. In recent times, Mills (2019) reiterated the same concept by investigating medium- to long-term forecasting, concluding that being able to spot recurring patterns in a time series would be quite helpful. The relationships between a current value \(x_{t}\) of a series and a set of lagged values \(x_{t-k}\), where \(k=1,2,\text{or }3\), are commonly referred to as autocorrelations. The definition of the lag-k autocorrelation is: \[r_{k}=\frac{\sum_{t=k+1}^{T}(x_{t}-\bar{x})(x_{t-k}-\bar{x})}{T\,s^{2}} \tag{1}\] where the sample mean and variance of \(x_{t}\) are denoted by \(\bar{x}=T^{-1}\sum_{t=1}^{T}x_{t}\) and \(s^{2}=T^{-1}\sum_{t=1}^{T}(x_{t}-\bar{x})^{2}\), respectively. The autocorrelation function (ACF), a collection of sample correlations for various values of k, is essential for time-series research. Through this work, we present an empirical analysis and make a theoretical contribution by incorporating theoretical aspects into the experimentation. We re-examine the relationship between stationarity and memory using this concept to show that raising the level of differencing leads to stationarity, but at the expense of fundamental memory loss. Fig. 1 presents a flow diagram of the body of this study. It explains the different sections covered in this study and provides a summary of each section. ## II Literature review Fractional differentiation (FD) and long-memory processes have indeed been subjects of considerable research in the fields of econometrics and time-series analysis (for example, Granger & Joyeux, 1980; Hosking, 1981; Beran, 1994; Baillie, 1996; Granger & Ding, 1995; Diebold & Inoue, 2001). Hurst (1951, 1957), Mandelbrot & Wallis (1968), McLeod & Hipel (1978), and Smith & Harris (1987) were among the first to study LRD using hydrogeological data. Despite their contributions to the corpus of knowledge, most of these studies have concentrated on theoretical concepts and mathematical formulations without addressing how they might be implemented in practical circumstances. Granger & Joyeux (1980) and Hosking (1981) were pioneers in linking long memory to FD. They presented the fractional finite difference and the Grünwald-Letnikov FD, stating that, if \(Y_{t}\) is a stochastic process, \(b\) the lag operator such that \(b(Y_{t})=Y_{t-1}\), \(d\) the FD order, and \(\varepsilon_{t}\) white noise, the long-memory process can be written as \((1-b)^{d}Y_{t}=\varepsilon_{t}\). Baillie (1996) also argued for the possibility of time-series modelling in which the parameter \(d\) assumes non-integer values.
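Since the slow decay of autocorrelations is the signature of long memory discussed throughout this literature, the following is a minimal sketch of the lag-\(k\) autocorrelation estimator in eq. (1); the function name and the white-noise demonstration are our own, not the paper's.

```python
import numpy as np

def lag_k_autocorrelation(x, k):
    """Sample lag-k autocorrelation r_k from eq. (1)."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    x_bar = x.mean()
    s2 = ((x - x_bar) ** 2).mean()  # sample variance s^2
    # Pairs (x_t, x_{t-k}) for t = k+1, ..., T
    cov_k = np.sum((x[k:] - x_bar) * (x[:-k] - x_bar))
    return cov_k / (T * s2)

# Long-memory series show slowly decaying r_k; white noise decays immediately.
rng = np.random.default_rng(1)
noise = rng.standard_normal(1000)
print([round(lag_k_autocorrelation(noise, k), 3) for k in (1, 2, 3)])
```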
The theoretical aspects of FD were developed by Johansen and Nielsen (2010, 2014, and 2019), who offered mathematical formulations and investigated the features of those formulations. Gil-Alana et al. (2017) and MacKinnon (1996) offered more information on the statistical characteristics and estimating techniques connected to FD. In the area of econometrics, the work of Kapetanios et al. (2019) and Cavaliere et al. (2017, 2022) assisted us in understanding how FD might be used to examine time-series data in economics and finance. In an earlier study, Sadaei et al. (2016) provided an empirical analysis highlighting the benefits and potential of FD in a financial price series. Models like the Fractionally Integrated GARCH (FIGARCH) model are used to capture long-memory behavior in financial data (Chen et al., 2022; Pan et al., 2023). While prior researchers have developed substantial mathematical theory concerning FD time series, we are aware of no published studies that involve forecasting using real-world price series and binary classification in the applied field. We use long-range dependence and long memory interchangeably in this work. Flores-Muñoz et al. (2018) and David et al. (2017) used autoregressive models to incorporate FD into price and commodity series. Wang and Xu's (2022) work also emphasized that FD is preferred over integer differences, which strengthens the case for FD. The relationship between non-stationary time series and their stationary transformations was explored by De Prado (2018) to justify the occurrence of memory loss; this investigation specifically involved comparing first-order logarithmic series and FD. All these works have collectively contributed to the development of FD theory and its application in our work. ## III Methodologies Assume that \(Y_{t}\) is the result of taking the \(d^{th}\) difference of a time-series \(X_{t}\), where \(t=0,1,\ldots,(n-1)\). With that assumption, the backward difference operator can be written as: \[Y_{t}=\Delta^{d}X_{t}=(1-b)^{d}X_{t},\quad\text{where}\quad\Delta=(1-b) \tag{2}\] Here, \(\Delta\) (the backward difference operator) represents differencing, and \(b\) is the lag operator, which shifts the series backward by one step (\(bX_{t}=X_{t-1}\)). Our argument here is that, instead of just subtracting the previous observation, we subtract a weighted combination of past observations. These weights are determined by the FD parameter (\(d\)), which controls the degree of persistence or memory in the series. Hosking (1981) described FD as the discrete-time equivalent of stochastic movement, using the backward shift operator for this purpose. In the case of \(d\in(0,1)\), the time-series (\(Y_{t}\)) shows long memory. In the case of a 1st order difference (\(d=1\)), \[Y_{t}=(1-b)X_{t}=X_{t}-bX_{t}=X_{t}-X_{t-1} \tag{3}\] Likewise, in the case of \(d=2\), the 2nd degree polynomial can be calculated as: \[Y_{t}=(1-b)^{2}X_{t}=(1-b)(X_{t}-X_{t-1})=X_{t}-2X_{t-1}+X_{t-2} \tag{4}\] The \(d^{th}\) difference for any integer \(d\) can be defined by extending \((1-b)^{d}\) and then applying the resulting polynomial in \(b\) to \(X_{t}\). The coefficients (weights) in the FD formula can be derived using Taylor series expansion and the gamma function. Fig. 1: Workflow diagram
These coefficients determine the contribution of each lagged observation to the current value of the differenced series. \[(1-b)^{d}=1+\frac{d}{1!}(-b)^{1}+\frac{d(d-1)}{2!}(-b)^{2}+\frac{d(d-1)(d-2)}{3!}(-b)^{3}+\ldots \tag{5}\] \[=\sum_{j=0}^{\infty}\frac{d(d-1)(d-2)\ldots(d-(j-1))}{j!}(-1)^{j}b^{j} \tag{6}\] The numerator in the above expression has \(j\) factors, except when \(j=0\). The sign of each factor in the numerator is now changed by multiplying it by \(-1\): \[(1-b)^{d}=\sum_{j=0}^{\infty}\frac{-d(1-d)(2-d)\ldots((j-1)-d)}{j!}b^{j} \tag{7}\] After that, multiplying by \(1=\frac{\Gamma(-d)}{\Gamma(-d)}\) and switching the elements' positions gives: \[(1-b)^{d}=\sum_{j=0}^{\infty}\frac{(j-1-d)(j-2-d)\ldots(1-d)(-d)\,\Gamma(-d)}{j!\;\Gamma(-d)}\,b^{j} \tag{8}\] The recurrence property of the gamma function, \(\Gamma(X)=(X-1)\,\Gamma(X-1)\), is then used repeatedly so that the numerator can be expressed as \(\Gamma(j-d)\). Thus, the formula can be revised as the commonly used representation of the FD operator: \((1-b)^{d}=\sum_{j=0}^{\infty}\frac{\Gamma(j-d)}{\Gamma(j+1)\Gamma(-d)}\,b^{j}\). It is important to calculate the coefficients in the series to perform an FD algorithm: \(\omega_{j}=\frac{\Gamma(j-d)}{\Gamma(j+1)\Gamma(-d)},\;j=0,1,2,\ldots\) Because these coefficients multiply observations in the time-series, this unending sequence of coefficients may be truncated to the length of the data series. When these coefficients are computed directly, the numerator and denominator grow too large for floating-point arithmetic. The recursive property of the gamma function was therefore used to generate a recursive formula for the \(\omega_{j}\): \[\omega_{0}=\frac{\Gamma(0-d)}{\Gamma(1)\Gamma(-d)}=1 \tag{9}\] \[\omega_{j}=\frac{\Gamma(j-d)}{\Gamma(j+1)\Gamma(-d)}=\frac{(j-d-1)\,\Gamma(j-1-d)}{j\,\Gamma(j)\,\Gamma(-d)}=\frac{(j-d-1)}{j}\,\omega_{j-1} \tag{10}\] In the recursive formula for \(\omega_{j}\), the gamma function is not used; hence, it is possible to calculate \(\omega_{j}\) for extremely large values of \(j\). The only computation needed is to multiply \(\omega_{j-1}\) by \((j-d-1)/j\). For the series to keep its memory with a real non-integer positive \(d\), \(Y_{t}\) is the cumulative sum with weights \(\omega_{j}\) and values \(X\): \[Y_{t}=\sum_{j=0}^{\infty}\omega_{j}\,X_{t-j} \tag{11}\] where \[\omega_{j}=\Big\{1,-d,\frac{d(d-1)}{2!},-\frac{d(d-1)(d-2)}{3!},\ldots,(-1)^{j}\frac{\prod_{i=0}^{j-1}(d-i)}{j!},\ldots\Big\} \tag{12}\] and \[X=\{X_{t},X_{t-1},\ldots,X_{t-k},\ldots\} \tag{13}\] The weights \(\omega_{j}\) determine the contribution of each lagged observation \(X_{t-j}\) to the current value \(Y_{t}\). However, the series is theoretically infinite, which means it includes an infinite number of terms. In practice, it is not feasible to include all infinite terms when calculating \(Y_{t}\). Therefore, a threshold is introduced to truncate the series and include only a finite number of terms. ## IV Data Mining The CRISP (CRoss Industry Standard Process) data mining procedure is followed here, except for the last phase (deployment). The SPY series is considered here, which is an exchange-traded fund (ETF) that tracks the performance of the SP500 index. Given the substantial volume of SPY, its changes can present a stock market trend.
Daily data from 1 January 2010 till 6 November 2020, i.e., 2627 datapoints, were taken with the initial regular parameters, e.g., Open, High, Low, Adj. Close, and Volume. Table 1 displays the statistical summary of the dataset. Fig. 2 displays the volatility of the SPY Volume series over the last 252 datapoints. The SP500 index consists of Fortune 500 companies, and the top 35 businesses account for 48% of the index's value. The average sentiment of the news reports portrays the mood of the SPY. Researchers (e.g., Sun et al., 2016, Yang et al., 2018, Bozanta et al., 2021, Obaid & Pukthuanthong, 2022, Chang et al., 2021) have demonstrated the fundamental connection between investor sentiment and stock trends, e.g., bullish or bearish. Consequently, sentiment analysis was performed on the daily news headlines about 35 companies throughout a ten-year period, from 2010 to 2020. We derive the daily, continuously compounded rate of return on SPY for the index labelled \(i\) as \(r_{i}(t)=\ln\big(\frac{x_{i}(t)}{x_{i}(t-1)}\big)\), where \(x_{i}(t)\) is the close price of day \(t\) and \(x_{i}(t-1)\) is the close price of day \(t-1\). The compounded return value \(r_{i}(t)\) over time is depicted in Fig. 3. The spectrum created by the linear transformation is almost constant throughout a wide frequency range. This is analogous to a stochastic, stationary signal, such as white noise. Because there is no linkage with earlier data, each new data value gives the same amount of added information. Because these signals are not the best for exposing dynamical relationships, the daily original price \(x_{i}(t)\) was chosen. The line plot on the extreme right (Fig. 3) displays the time evolution of the daily closing price. The noisy and chaotic-like characteristics are shown in Fig. 4. #### III-A1 Hurst Exponent The Hurst exponent (HE) is used to assess the persistence of a time series. Table II displays the HE values at different lags. The HE with 5 lags (0.6041) suggests a moderate level of LRD in the data. The HE with 10 lags (0.5624) indicates a slightly weaker long-term dependence; the influence of past values may extend over a slightly longer time scale at this stage. The HE with 20 lags (0.6047) is similar to the 5-lag case, suggesting a consistent level of LRD: past values continue to influence future values over a moderate time scale. The HE with 100 lags (0.3634) indicates a decrease in LRD. The results suggest that the series may have a combination of both short-term and long-term dependencies. The persistence observed in the HE values implies that past values of the series can provide some predictive power over future values. ### _Iterative estimation_ For positive \(j\) and \(\omega_{0}=1\), weights can be generated iteratively as \(\omega_{j}=-\omega_{j-1}\frac{d-j+1}{j}\). Fig. 5 displays the changes in weight (\(\omega\)) for different fractional orders \(d\in(0,1)\); the weight \(\omega\) for each data sample was estimated for each day of the SPY index price and plotted for comparison. In the case of \(d=0\), all weights are 0 except for \(\omega_{0}=1\), and the differenced series overlaps with the original series; in the case of \(d=1\), the weights are 0 except for \(\omega_{0}=1\) and \(\omega_{1}=-1\), which corresponds to first-order integer differencing. Since \(d\in[0,1]\), all the \(\omega\) after \(\omega_{0}\) (\(\omega_{0}=1\)) are negative and greater than \(-1\). The \(\omega\) values were determined here by the chosen level of FD.
The absolute value of the ratio of consecutive weights can be represented as: if \(\omega_{j-1}\neq 0\): \[\Big|\frac{\omega_{j}}{\omega_{j-1}}\Big|=\Big|\frac{d-j+1}{j}\Big|<1,\] else: \(\omega_{j}=0\). Fig. 2: SPY volatile series. Fig. 3: Temporal evolution of the daily continuously compounded rate of return \(r_{i}(t)\). Fig. 4: Daily return chaos plot (z level = log returns(t+1)). Fig. 5: Lag weights (\(\omega_{j}\)) for various differencing values (d). When \(d\) is positive and \(j<d+1\), we obtain \(\frac{d-j+1}{j}\geq 0\), which makes the initial \(\omega\) signs alternate. When \(d\in[0,1]\) and int(d) = 0, all weights are negative for \(j\geq 1\) (\(j\geq d+1\)). It can be inferred that \(\lim_{j\to\infty}\omega_{j}=0^{-}\) when int[d] is even, and \(\lim_{j\to\infty}\omega_{j}=0^{+}\) when int[d] is odd. It follows that in the case \(d\in(0,1)\), we have \(-1<\omega_{j}<0\) for all \(j>0\). This behaviour of \(\omega\) is needed to achieve a stationary \(\{Y_{t}\}_{t=1,\ldots,T}\), because memory deteriorates with time. To simplify the above: in the case of integer differencing orders, such as \(d=1\), the coefficient of the first lag (lag 1) is exactly \(-1\) because we are subtracting the previous observation, and the coefficients of the remaining lags are zero because they are not involved in the differencing process. However, when the differencing order is fractional (e.g., \(d=0.5\)), the coefficients are no longer exactly \(-1\) for lag 1 and zero for the remaining lags. Instead, each lag has a weight, and these weights converge to zero; higher orders of differencing typically lead to faster convergence towards zero. Fig. 6 displays a comparison plot, which allows for a visual comparison of the effects of differencing on both the original and logarithmic series. We see that the higher the differencing order, the more stationary the series becomes, indicating the removal of long-term dependencies and trends; the logarithmic transformation provides additional stability to the series and reduces the influence of extreme values. The p-values of the ADF (Augmented Dickey-Fuller) test were calculated for \(d\in(0,1)\) by setting a low threshold (\(\rho=1e{-4}\)) and plotted as shown in Fig. 7. Fig. 7 shows that both the original and logarithmic series cross the ADF p-value threshold at around \(d=0.4\), showing that an integer order is not necessary to achieve stationarity. FD acts as a filter that makes the series stationary while keeping the maximum possible mathematical memory. To gain more clarity on \(\rho\) vs. difference values, an impact analysis was conducted with 130 combinations of \(\rho\) values \(\{1e{-3},9e{-4},7e{-4},5e{-4},3e{-4},1e{-4},9e{-5},7e{-5},5e{-5},3e{-5}\}\) and difference values \(\{0.8,0.75,0.7,0.65,0.6,0.55,0.5,0.45,0.4,0.35,0.3,0.25,0.2\}\). Fig. 8 displays the heatmap showing how the ADF statistic of the differenced series changes across these combinations. It can be concluded that trend-stationarity can be reached without significantly altering the series. If \(\rho\) is increased, the test statistics improve marginally because the FD series has access to more data points. When all characteristics are transformed via the fractional derivative, certain data points are dropped; the values of \(d\) and the threshold can be used to modify the number of remaining data points. We have employed a threshold value of 1e-4.
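The recursion and threshold truncation discussed above translate directly into code. Below is a minimal sketch of the weight generation and the resulting fixed-window FD transform, together with the stationarity-versus-memory trade-off examined in the next subsection; it assumes `statsmodels` for the ADF test, and all function names are our own illustrative choices.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def fd_weights(d, threshold=1e-4, max_terms=10_000):
    """w_0 = 1 and w_j = -w_{j-1} * (d - j + 1) / j, truncated at the threshold rho."""
    w = [1.0]
    for j in range(1, max_terms):
        w_j = -w[-1] * (d - j + 1) / j
        if abs(w_j) < threshold:
            break
        w.append(w_j)
    return np.array(w)

def frac_diff(x, d, threshold=1e-4):
    """Fixed-window fractional differencing: Y_t = sum_j w_j X_{t-j}."""
    w = fd_weights(d, threshold)
    k = len(w)
    x = np.asarray(x, dtype=float)
    # Points without a full window of history are dropped, as in the paper.
    return np.array([w @ x[t - k + 1:t + 1][::-1] for t in range(k - 1, len(x))])

# Trade-off between stationarity (ADF p-value) and memory (correlation):
prices = np.cumsum(np.random.default_rng(0).standard_normal(2000)) + 100
for d in (0.1, 0.3, 0.5, 0.7, 1.0):
    y = frac_diff(prices, d)
    p_value = adfuller(y)[1]
    corr = np.corrcoef(prices[-len(y):], y)[0, 1]
    print(f"d={d:.1f}  ADF p={p_value:.4f}  corr={corr:.2f}")
```

Note that for \(d=1\) the weights reduce exactly to \(\{1,-1\}\), recovering integer differencing as a special case.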
### _Optimal d value_ The weights were applied to each data value based on the relative weight loss \(\lambda_{l}=\frac{\sum_{j=T-l}^{T}|\omega_{j}|}{\sum_{j=0}^{T}|\omega_{j}|}\), as the memory at every point differs with the available data; thus, with \(d\in(0,1)\), the amount of memory to be preserved can be decided. Here, the number of historical data points was taken as a fixed window; \(\rho\) was configured to fix its length, and data points outside this window were all removed. The ideal value of \(d\) was found from the ADF test statistic in Table III. Fig. 6: Comparison of FD on Original and Logarithmic Series. Fig. 7: ADF p-values for traditional differenced series. Fig. 8: Heatmap of 130 combinations of threshold values and diff values with ADF test-statistics. Table III shows that, at \(d=0.3\), the FD series passes the ADF test (\(p\)-value \(=0.0002<0.05\)), which is quite early in the differencing process, with a correlation of 93% to the original series. The ADF test statistics and the (linear) correlation to the original series at different orders of differencing are displayed to show the trade-off between stationarity and memory. Fig. 9 displays the shape of the original and FD series plotted along the original-price and differenced-price axes. The low p-value (0.000, Table IV) shows that the data has neither a unit root nor a non-stationary trend. The KPSS (Kwiatkowski-Phillips-Schmidt-Shin) p-value \(<0.05\) rejects \(H_{0}\) of stationarity around a level, indicating white noise around a trend, i.e., a trend-stationary process. The series at this stage displays statistical characteristics that are independent of the time point and preserves much more memory. LRD has proven to have high persistence in the data from an empirical standpoint. This agrees with the findings of de Prado (2018) that all price series achieve stationarity at around \(d<0.6\), and most of them are stationary even at \(d<0.3\). Fig. 10 displays the 3D chaos plot with FD (0.3). Here, the points in the plot appear more clustered, closer together, and follow some discernible patterns, suggesting less chaos compared to Fig. 4 with its more scattered and random-looking points. Fig. 11 provides insight into the temporal structure and autocorrelation patterns of the data. For the log returns, there was no significant dependence between the current and previous observations at different time lags, suggesting random behavior in the short term without any noticeable patterns. The FD series, by contrast, displays persistent autocorrelation at longer lags, indicating both short- and long-range dependencies in the data. The autocorrelation patterns suggest that past values of the FD data can provide information about and influence future values, indicating the presence of some underlying structure or trends in the data. Fig. 12 displays the boxplots of the OHLC series. Asymmetry in the distribution implies that the data is skewed, meaning it is not symmetrically distributed around the mean; this skewness introduces nonlinear transformations into the series. Excess kurtosis in the distribution indicates the presence of heavy tails or outliers in the data. These observations suggest that FD has influenced the statistical properties of the historical series. Fig. 9: Shape of original (logarithmic) & FD (0.3) series. Fig. 10: Chaotic attractor plot (z label = FD with order 0.3). Fig. 11: Autocorrelation plots. ## V Modelling The SPY sentiment was computed as the average sentiment score of 35 companies for each day.
\[\mathit{Sentiment}_{\mathit{SPY}}^{(t)}=\frac{1}{N}\sum_{i=1}^{N}\mathit{Sentiment}_{(i)}^{(t)} \tag{15}\] Data, including date, price spectrum, closing price, traded volume, and sentiment score, were used to predict future price directions. Depending on the sign of the difference, the volume movement was labelled either as -1 or as 1. The goal is to accurately forecast whether the volume will increase or decrease daily. Daily price changes were included as an additional predictor in the dataset. \[\text{Daily Price Change}=\frac{\mathit{ClosePrice}(t)-\mathit{OpenPrice}(t)}{\mathit{OpenPrice}(t)} \tag{16}\] The target variable, the future direction of Volume, was computed as Next Day Direction = Volume(t+1) \(-\) Volume(t) (a backward difference evaluated at the next day), labelled 1 if the difference is positive and \(-1\) otherwise, which makes the target a binary class target. Volume is the number of shares or contracts traded and is crucial for analyzing market dynamics. Spikes in volume indicate increased activity and interest. By examining the relationship between buying and selling volumes, resistance and support levels can be identified. Resistance occurs when selling volume surpasses buying volume, indicating higher supply than demand. Support happens when buying volume exceeds selling volume, signaling more demand than supply. Understanding volume helps gauge market sentiment and identify potential price levels where assets struggle or find support. The sentiment score training data is shown in a scatter plot depicted in Fig. 13. A data point's positive score is represented on the x-axis and its negative score is represented on the y-axis. The neutral result is represented by the size of the ball. Particularly near the centre of the graph, there appears to be a fair amount of randomness. The areas where one of the two sentiment groups predominates over the other can be clearly seen in Fig. 13. Table V displays the accuracy metrics of different machine learning (ML) models. One of the most important activities in statistics and ML is binary classification, although there is still no broad agreement among scientists on the statistical indicator for assessing binary-class confusion matrices with true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). The metrics used here are: * Accuracy = correct predictions / total observations = (TP + TN) / (TP + TN + FP + FN) * Recall = TP / (TP + FN) * Precision = TP / (TP + FP) * ROCAUC: can be viewed using a ROC (Receiver Operating Characteristic) curve showing the variation at each point between the TP rate and the FP rate. ROCAUC is important here because equal weight was given to both classes' prediction abilities. * MCC \(=\frac{\mathrm{TP}\cdot\mathrm{TN}-\mathrm{FP}\cdot\mathrm{FN}}{\sqrt{(\mathrm{TP}+\mathrm{FP})(\mathrm{TP}+\mathrm{FN})(\mathrm{TN}+\mathrm{FP})(\mathrm{TN}+\mathrm{FN})}}=\frac{\mathit{Cov}(c,l)}{\sigma_{c}\,\sigma_{l}}\) (worst value \(=-1\); best value \(=+1\)), where \(\mathit{Cov}(c,l)\) is the covariance of the actual classes \(c\) and predicted labels \(l\), and \(\sigma_{c}\) and \(\sigma_{l}\) are the standard deviations, respectively. * Kappa: moderate: > 0.4 - 0.6, substantial: > 0.6 - 0.8, and almost perfect: > 0.80 (Landis & Koch, 1977). The Matthews correlation coefficient (MCC) is a more dependable statistical measure that only yields a high score if the prediction performed well in each of the four categories of the confusion matrix, proportionally to the size of the dataset's positive and negative elements (Chicco et al., 2021).
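Since MCC is the headline metric here, the following is a minimal sketch comparing the closed-form expression above with scikit-learn's implementation; the toy labels are illustrative only and are not the paper's data.

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef, cohen_kappa_score, roc_auc_score

# Hypothetical next-day volume direction labels (+1 / -1) and predictions.
y_true = np.array([1, -1, 1, 1, -1, -1, 1, -1])
y_pred = np.array([1, -1, -1, 1, -1, 1, 1, -1])

tp = np.sum((y_true == 1) & (y_pred == 1))
tn = np.sum((y_true == -1) & (y_pred == -1))
fp = np.sum((y_true == -1) & (y_pred == 1))
fn = np.sum((y_true == 1) & (y_pred == -1))
mcc_manual = (tp * tn - fp * fn) / np.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

print(mcc_manual, matthews_corrcoef(y_true, y_pred))  # identical values
print(cohen_kappa_score(y_true, y_pred))
print(roc_auc_score(y_true, y_pred))                  # using hard labels as scores
```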
KNN and RF can handle non-linear relationships in the data more effectively than others like LogReg and SVM. KNN and RF have the ability to capture complex patterns and interactions, which could be advantageous for modelling non-linearity. Fig. 12: Boxplots of FD series (OHLC). Fig. 13: Data distribution in space. For the FD series, a noticeable overall improvement was observed. Table VI shows the top scores for each category. MCC was considered here over all the other scores as it offers a more balanced assessment of the classifier, no matter which class is positive or negative. The consistent precision, recall, MCC, and Kappa values across the LogReg and KNN models indicate that both models perform similarly in correctly identifying positive instances (precision), correctly classifying true positive instances (recall), and overall agreement with the true labels (MCC and Kappa). This study adds to the existing literature by using a formal model setup based on the FD series. In an environment with fractionally differenced integrated variables (d), this method may serve as a statistical framework for examining and differentiating between short- and long-memory effects. Even when prices are intended to follow a random walk, sentiment scores can improve the accuracy with which statistical models predict stock-price movements. Orabi et al. (2020) reported that there are always low-quality posts, which may skew the performance factor. Thus, investor sentiment should be considered carefully to reduce the influence of poor-quality sentiment. The scientific community has not yet developed a standardized accuracy-reporting method. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & **ML model** & **Accuracy** & **ROCAUC** & **Precision** & **Recall** & **MCC** & **Kappa** \\ \hline 1 & LogReg (solver = liblinear) & 75.90\% & 73.84\% & 74.98\% & 75.86\% & 47.69\% & 47.67\% \\ \hline 2 & k-nearest neighbour (k = 200) & 75.85\% & 73.84\% & 76.70\% & 75.86\% & 47.69\% & 47.67\% \\ \hline 3 & SVM (kernel = poly) & 76.22\% & 71.96\% & 75.53\% & 87.35\% & 44.00\% & 43.88\% \\ \hline 4 & RF (criterion = gini, nodes = 15) & 75.27\% & 75.22\% & 78.05\% & 75.33\% & 49.53\% & 49.40\% \\ \hline \end{tabular} *LogReg: Logistic regression, **SVM: Support Vector Machine, ***RF: Random Forest \end{table} TABLE VI: Classification accuracy of the fractionally differenced series However, Table VII displays the statistical accuracy reported by some authors. Our study outperformed these previously reported results, demonstrating that our methodology achieves higher accuracy than existing methodologies. This is a noteworthy accomplishment because the goal is to create accurate prediction results and risk-adjusted profits using simple algorithms with low data needs, which is consistent with the suggestions of other researchers, such as Zhong & Enke (2019). To this end, our work incorporates an FD series, enabling the modelling of both short- and long-term dependencies in time-series analysis while preserving autocorrelation structures. We leverage sentiment ratings to enhance the accuracy of machine learning models in predicting stock price fluctuations, highlighting the importance of sentiment analysis in financial forecasting. Additionally, we mitigated the impact of low-quality posts by incorporating sentiment data quality assessment, resulting in more reliable outcomes.
By advocating the use of Matthews Correlation Coefficient (MCC) scores as a comprehensive evaluation metric for classifier performance, our study contributes to standardized accuracy-reporting practices. These contributions advance our understanding of efficient methods for classification accuracy and financial forecasting by displaying improved accuracy, highlighting the benefits of FD series, underscoring the importance of sentiment scores, and introducing MCC scores as a robust evaluation metric. ## VI Conclusion This study demonstrated the effectiveness of fractional differencing (FD) in minimizing long-term dependencies while preserving short-term dependencies in price series. Through this process, the study found that the FD series significantly improves the accuracy of the empirical data models compared with the integer differenced series. Using a difference order of 0.3 for the SPY series, which exhibits stationary properties and a high correlation (\(>\)90%) with the original series, the study revealed the presence of long memory. Various supervised classification algorithms, including LogReg, KNN, SVM, and RF, were evaluated in this work to demonstrate the overall improvements in classification tasks. The accuracy measures have improved noticeably, particularly the ROCAUC (Receiver Operating Characteristic Area Under the Curve) and MCC (Matthews Correlation Coefficient) values. This emphasizes the importance of careful time-series modeling and advises against using the default stationarity format (d=1) without considering the statistical properties of the data. Although the current findings are promising, further improvements and advancements can be made in future research. This empirical investigation adds original insights to time-series modeling.
2305.19597
What does the Failure to Reason with "Respectively" in Zero/Few-Shot Settings Tell Us about Language Models?
Humans can effortlessly understand the coordinate structure of sentences such as "Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle, respectively". In the context of natural language inference (NLI), we examine how language models (LMs) reason with respective readings (Gawron and Kehler, 2004) from two perspectives: syntactic-semantic and commonsense-world knowledge. We propose a controlled synthetic dataset WikiResNLI and a naturally occurring dataset NatResNLI to encompass various explicit and implicit realizations of "respectively". We show that fine-tuned NLI models struggle with understanding such readings without explicit supervision. While few-shot learning is easy in the presence of explicit cues, longer training is required when the reading is evoked implicitly, leaving models to rely on common sense inferences. Furthermore, our fine-grained analysis indicates models fail to generalize across different constructions. To conclude, we demonstrate that LMs still lag behind humans in generalizing to the long tail of linguistic constructions.
Ruixiang Cui, Seolhwa Lee, Daniel Hershcovich, Anders Søgaard
2023-05-31T06:45:09Z
http://arxiv.org/abs/2305.19597v1
What does the Failure to Reason with "Respectively" in Zero/Few-Shot Settings Tell Us about Language Models? ###### Abstract Humans can effortlessly understand the coordinate structure of sentences such as "Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle, _respectively_". In the context of natural language inference (NLI), we examine how language models (LMs) reason with respective readings (Gawron and Kehler, 2004) from two perspectives: syntactic-semantic and commonsense-world knowledge. We propose a controlled synthetic dataset WikiResNLI and a naturally occurring dataset NatResNLI to encompass various explicit and implicit realizations of "respectively". We show that fine-tuned NLI models struggle with understanding such readings without explicit supervision. While few-shot learning is easy in the presence of explicit cues, longer training is required when the reading is evoked implicitly, leaving models to rely on common sense inferences. Furthermore, our fine-grained analysis indicates models fail to generalize across different constructions. To conclude, we demonstrate that LMs still lag behind humans in generalizing to the long tail of linguistic constructions. ## 1 Introduction Transformer-based language models (LMs) (Devlin et al., 2019; Raffel et al., 2019; Brown et al., 2020) induce useful representations for a wide range of natural language understanding (NLU) tasks, including natural language inference (NLI; Wang et al., 2018; Hu et al., 2020), especially in zero-shot or few-shot settings. To what extent this usefulness results from memorization, generalization or the ability of LMs to draw common sense inferences remains an open question. To approach it, the linguistic phenomenon of respective readings (Gawron and Kehler, 2004) serves as an excellent probe. This phenomenon has so far been underexplored in NLP, even though it has been studied extensively in linguistic semantics (McCawley, 1968; Pullum and Gazdar, 1982; Dalrymple and Kehler, 1995; Eggert, 2000). In English, "respectively" is a rare word1 used to establish a one-to-one mapping between two sets of participants and to distribute predicates over sets (Okada, 1999). For example, in Figure 1, the first conjunct in the subject corresponds to the first conjunct in the object and the second conjunct in the subject corresponds to the second conjunct in the object. The respective relation is bijective and respects the relative order of the elements of two different coordinate expressions; it is, in other words, cross-serial. "Respectively" can have different syntactic or semantic properties depending on the context, e.g., as a conjunction or adverb. Footnote 1: In terms of frequency, in the British National Corpus, “respectively” is ranked 13,606th among 18,089 words, and 233rd among 429 adverbs (Leech et al., 2014). In this paper, we investigate how LMs reason with respective readings. We propose two datasets, WikiResNLI (a controlled synthetic dataset) and NatResNLI (a naturally occurring dataset) to cover various explicit and implicit realizations of "respectively". Our research questions are: 1. Can NLI models reason with "respectively" constructions in zero-shot settings? Figure 1: An example of explicit (top, evoked by “respectively”) and implicit (middle, with no overt marker) respective readings.
Humans can infer that both sentences have the same “cross-serial” meaning (bottom) by relying on commonsense knowledge (that a person is only born in one location) and world knowledge (that Copenhagen and Seattle are mutually exclusive). 2. Can LMs generalize from explicit to implicit respective readings? 3. Can LMs generalize from synthetic to natural respective readings? 4. What cues do LMs leverage for prediction? We experiment with state-of-the-art LMs and analyze the results to gain insights into the limitations of current models and potential directions for future research. We show that LMs are able to generalize effectively in a few-shot learning scenario when the word "respectively" is present. However, when the reading is evoked implicitly, a greater number of training instances are necessary. LMs require significantly more instances to generalize to naturally occurring datasets than humans. In conclusion, our study demonstrates that LMs continue to exhibit a deficit in generalizability to infrequent linguistic constructions with limited coverage in their training data. ## 2 Respective Readings Respective readings are closely related to several types of readings instantiated by plurals and mass terms: distributive readings, collective readings and cumulative readings [1]. **Distributive readings.** These usually refer to the application of a predicate to the subsets of a set or group. As for sentence 1(a), it is equivalent to "John smiled and Mary smiled". The reading is available because the predicate is _atomic_ [15]; similar instances include "sing" and "sleep". Distributive readings can be enforced with overt distributive markers, i.e., "every" and "each" [11]. In example 1(b), we enforce the reading by adding "each" at the end of the sentence so as to rule out the reading "John and Mary earn 200 dollars together". 1. (a) **Distributive reading:** John and Mary smiled. (b) **Distributive reading with an enforced marker:** John and Mary earn 200 dollars _each_. **Collective readings.** These are the opposite of distributive readings in that the predicates apply to the whole plural entity instead of individuals. The quantifiers "all" and "most", instead of "every" and "each", are usually compatible with collective readings, as in example 2(b) [12]. 2. (b) **Collective reading with overt marker:** _All_ of the men gathered. **Cumulative readings.** These involve two entities in a symmetric non-scopal relation, as in the canonical example 3 [1]. The sentence can be paraphrased as "There are three boys and two girls; each of the three boys saw at least one of the two girls, and each of the two girls was seen by at least one of the three boys.". It is sometimes discussed together with weak reciprocity [1]. 3. **Cumulative reading:** Three boys saw two girls. **Respective readings.** These are thought to be a special case of cumulative readings in which a bijective relation holds between the two (or more) sets of entities that enter into the cumulative relation [10]. In example 4(a), the pairs (Emiliano Zapata, Morelos) and (Gerhart Münch, Michoacán) are grouped under the _died in_ relation. A respective reading can also arise without the adverb _respectively_, and its absence is even sometimes preferred. As in example 4(b), the binomial expression "husband and wife" is so strong that the adverb "respectively" is unwarranted. 4. (a) **Respective reading with overt marker:** Emiliano Zapata and Gerhart Münch died in Morelos and Michoacán, _respectively_. (b)
## 3 An NLI Benchmark for "Respectively"

Understanding the coordinate structures in respective readings is effortless for humans, but it remains a question whether LMs, after being pre-trained on billions of tokens and fine-tuned on thousands of NLI instances, can reliably process them. To probe LMs' behaviour in the presence of respective readings, we construct two English NLI datasets: WikiResNLI, a synthetic dataset based on an analogy corpus, and NatResNLI, a dataset sourced and created from natural occurrences. We release both datasets on Github2 and describe the detailed creation steps below.

Footnote 2: https://github.com/ruixiangcui/WikiResNLI_NatResNLI

### Synthetic Dataset: WikiResNLI

To generate a controlled synthetic challenge set for reasoning with respective readings, we exploit a useful relationship between coordination constructions and _analogies_. Analogy is concerned with similarities between observable properties and causal similarities.

**Analogy dataset.** Garneau et al. (2021) proposed WiQueen, a multilingual analogy dataset consisting of 78,000 analogies extracted from Wikidata. A subset of 9,000 instances is annotated where all four entities are _unique_. These are the analogies in which all relations are informative (Newman-Griffis et al., 2017). See Table 1 for an example. Their experiment showed that pretrained LMs can predict 29% of analogous entities in a zero-shot setting and 41% after training. This indicates that analogical knowledge already exists in pretrained models and can be enhanced by training.

**Generating premises with "respectively".** Given four analogical entities \(\langle w_{1},w_{2},w_{3},w_{4}\rangle\) and the predicate \(p\), we form a natural language premise consisting of the analogical information in the respective reading setting of 5(a), after adapting \(p\) for phrasing and conjugation. Such a premise is unambiguous and equivalent to 5(b), where the predication is distributed over the two pairs of entities. 5(a) is marked by an explicit respective reading indicator. As an implicit respective reading case, 5(c) has the same meaning as 5(b), but there is no explicit respective operator. In such implicit cases, the predicate \(p\) is usually mutually exclusive in that each subject can have only one object. For example, in sentence 6(a) a person can only die in one place, not two. Non-mutually exclusive predicates are disqualified for an implicit respective reading since they cause ambiguity, as in sentence 6(b).

* (5a) \(w_{1}\) and \(w_{3}\) \(p\) \(w_{2}\) and \(w_{4}\), respectively.
* (5b) \(w_{1}\) \(p\) \(w_{2}\) and \(w_{3}\) \(p\) \(w_{4}\).
* (5c) \(w_{1}\) and \(w_{3}\) \(p\) \(w_{2}\) and \(w_{4}\).
* (6a) Emiliano Zapata and Gerhart Munch died in Morelos and Michoacan.
* (6b) John and Mary ate a falafel and a tortilla.
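To make the instance construction concrete, here is a hypothetical sketch (ours, not the released generation code) of how a single analogy quadruple could be expanded into premise-hypothesis pairs of the 1S1O, 1S2O and 2S1O patterns; the exact hypothesis inventory and label policy follow the released dataset.

```python
# Hypothetical sketch of WikiResNLI instance generation from an analogy
# quadruple <w1, w2, w3, w4> and a predicate p (e.g., "died in"). The
# function name and the subset of patterns shown are illustrative.
def make_instances(w1, w2, w3, w4, p, explicit=True):
    marker = ", respectively" if explicit else ""
    premise = f"{w1} and {w3} {p} {w2} and {w4}{marker}."
    return [
        (premise, f"{w1} {p} {w2}.", "entailment"),              # 1S1O, correct pairing
        (premise, f"{w1} {p} {w4}.", "contradiction"),           # 1S1O, crossed pairing
        (premise, f"{w1} {p} {w2} and {w4}.", "contradiction"),  # 1S2O
        (premise, f"{w1} and {w3} {p} {w2}.", "contradiction"),  # 2S1O
    ]

for prem, hyp, label in make_instances(
        "Emiliano Zapata", "Morelos", "Gerhart Munch", "Michoacan", "died in"):
    print(label, "|", prem, "=>", hyp)
```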
A random subset of premises and their corresponding premise-hypothesis pairs is held out for the development set. The rest is used as the training set, with 1,577 premises and 12,616 premise-hypothesis pairs.

**Generating premises with implicit "respectively".** We aim to test whether LMs can reason with respective readings and generalize from explicit constructions to instances without overt markers. For this purpose, we derive an implicit dataset from WikiResNLI\({}_{\text{EXPLICIT}}\) by simply removing the word "respectively" from the premises. We call this dataset WikiResNLI\({}_{\text{IMPLICIT}}\). In this process, we need to pay special attention to the fact that ambiguity usually occurs in the 1S2O setting when the predicates allow a conjunction of objects; given sentence 6(b), it is ambiguous whether the hypothesis "John ate a falafel and a tortilla" is entailed. To form a high-quality test set for WikiResNLI\({}_{\text{IMPLICIT}}\), we first need to exclude the ambiguous contradiction hypotheses. Therefore, two of the authors manually annotate the 139 predicates for whether they allow a single subject predicating a conjunction of two objects. In total, 13 predicates are annotated by both authors as unambiguous. Subsequently, we keep only the premises with these predicates from the complete WikiResNLI, and for each predicate, we cap the number of premises at 100. Eventually, we are left with 451 premises for the 13 predicates. The 3,608 premise-hypothesis pairs are used as the test set.
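The implicit derivation reduces to two operations: stripping the overt marker and filtering for predicates annotated as mutually exclusive. A small sketch of ours, with an illustrative predicate list rather than the authors' actual annotation:

```python
# Sketch (ours) of deriving WikiResNLI_IMPLICIT from the explicit set:
# drop the overt marker and keep only premises whose predicate allows each
# subject exactly one object, so removing "respectively" stays unambiguous.
UNAMBIGUOUS = {"died in", "was born in", "is the capital of"}  # illustrative

def to_implicit(premise: str) -> str:
    return premise.replace(", respectively", "")

explicit = [("Emiliano Zapata and Gerhart Munch died in Morelos and "
             "Michoacan, respectively.", "died in")]
implicit = [to_implicit(p) for p, pred in explicit if pred in UNAMBIGUOUS]
print(implicit)
```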
### Naturally-occurring Dataset: NatResNLI

While the synthetic dataset is well-controlled, it does not necessarily cover the natural usage of "respectively". To address this, we also collect a dataset of naturally-occurring usages.

**Collecting premises.** As data resources for "respectively" in publicly available naturally-occurring data, we leverage two online dictionaries3 and a writing advice blog,4 which provide English examples containing specific words in real-world contexts. We curate the sentences that include "respectively" and further filter some of them to avoid context ambiguity. In total, 76 sentences remain as the premise set.

Footnote 3: https://sentence.yourdictionary.com/respectively and https://www.dictionary.com/browse/respectively

**Generating hypotheses.** Two of the authors manually write hypotheses based on the fine-grained categorization of Table 1 for each collected premise. Given that the labels are pre-assumed, and to determine whether these inference relations align with humans, we employ crowd workers to verify them. See the annotation details in Appendix A.

**Statistics.** The resulting dataset, which we call NatResNLI, consists of 76 premises and 608 hypotheses. The average sentence lengths of NatResNLI's premises and hypotheses are 20.1 and 10.1, respectively. Sentences have 2.32 conjuncts on average, with 4 as the maximum.

**Variety.** NatResNLI's sentences have more complicated linguistic constructions than WikiResNLI's, such as the relative clause in sentence 7(a), the implicit coreferences in sentence 7(b), and the inverted sentence 7(c).

* (7a) The annual value of the Hulse endowment is between £800 and £900, of which eight-tenths go to the professor of divinity and one-tenth to the prize and lectureship, respectively.
* (7b) In 1910 the export of palm kernels was 6,141 tons, of palm oil 2,160 tons; in 1916 the figures were 22,391 tons and 3,852 tons respectively.
* (7c) Above this, approached by a stair, are the Lesche and the theatre, occupying respectively the north-east and northwest corner of the precinct.

**Inter-annotator Agreement.** The inter-annotator agreement [11, 10] of the workers for NatResNLI is 0.65, lower than ANLI's (0.67-0.74) and SNLI's (0.70). This can be attributed to our use of five annotators rather than the commonly chosen three: a larger number of annotators can lead to more diverse interpretations and disagreements, lowering the inter-annotator agreement.
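The paper does not spell out which agreement statistic is used; as an assumption, the sketch below computes Fleiss' kappa, a common choice when each item receives the same number of categorical judgments (here, five annotators over three NLI labels).

```python
# Fleiss' kappa sketch (ours; the paper's exact agreement statistic is an
# assumption). ratings[i][j] counts annotators assigning category j
# (entailment / neutral / contradiction) to item i.
def fleiss_kappa(ratings):
    N = len(ratings)                      # number of items
    n = sum(ratings[0])                   # annotators per item
    k = len(ratings[0])                   # number of categories
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N                  # mean observed agreement
    P_e = sum(p * p for p in p_j)         # chance agreement
    return (P_bar - P_e) / (1 - P_e)

print(round(fleiss_kappa([[5, 0, 0], [3, 1, 1], [4, 0, 1]]), 3))
```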
\begin{table} \begin{tabular}{l c c c} \hline \hline **Human** & Entailment & Neutral & Contradiction \\ **Reference** & & & \\ \hline Entailment & 93.4 & 2.1 & 4.5 \\ Contradiction & 5.9 & 4.1 & 90.0 \\ \hline \hline \end{tabular} \end{table} Table 2: NatResNLI human-annotated label distribution in percentages for each assigned reference label. Humans mostly agree with the pre-assigned reference labels (demonstrated in Table 1), but not always.

**Verification of pre-assigned labels.** In Table 2, we calculate the average agreement percentage of the human annotations with the reference labels, showing that humans do not always agree with them. Investigating the examples where the majority votes are distinct from the pre-assigned labels, we find nine instances distributed over four premises. For sentence 8(a), humans actually correct the label, as the respective reading here does not have a mutually exclusive effect. For sentence 8(b), humans show more caution towards sentence ambiguity caused by unknown world knowledge of Kilia and Dniester's locations, and hence assign the neutral label.
## 4 Experiments

**Question 1**: _Can NLI Models Reason with "Respectively" Constructions in Zero-shot Settings?_

Zero-shot performance on WikiResNLI is poor even for models fine-tuned on large NLI corpora. The accuracy is just 10% above the chance level, and models completely fail in the 1S2O and 2S1O settings. Results on both datasets show that when training with more data, models improve on respective readings. However, the question of what leads to the improvement remains. We examine how many times explicit respective readings appear in the training and testing splits of MNLI, SNLI, Fever-NLI and ANLI. We find that the adverb "respectively" occurs 177 and 12 times in the MNLI training and dev sets, 15 and 0 times in the SNLI training and test sets, 1,064 and 64 times in the Fever-NLI training and test sets, and 216 and 5 times in the combined ANLI training and dev sets. We randomly sampled a subset of each dataset and manually checked whether the instances tackle reasoning over coordination structure. We find that in most cases, "respectively" works simply as a context word and has little to do with the actual inference relations. Thus it is still not clear whether it is simply the exposure to explicit cues (the word "respectively") or some instances with implicit coordinate structures that results in the performance improvement. We thus ask the following three research questions and experiment with few-shot learning.
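For reference, a schematic of the few-shot fine-tuning setup follows. The paper's exact checkpoint (a DeBERTa model fine-tuned on MNLI, Fever-NLI, LingNLI and WANLI) and its hyperparameters are not reproduced here; the checkpoint id, label mapping and training arguments below are our assumptions.

```python
# Schematic few-shot fine-tuning loop (ours); checkpoint name, label ids and
# hyperparameters are assumptions, not the paper's exact configuration.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

ckpt = "microsoft/deberta-large-mnli"  # assumed DeBERTa NLI checkpoint
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

def encode(batch):
    return tok(batch["premise"], batch["hypothesis"],
               truncation=True, padding="max_length", max_length=64)

# One "shot" = one premise together with all of its generated hypotheses.
few_shot = Dataset.from_dict({
    "premise": ["A and B died in X and Y, respectively."] * 2,
    "hypothesis": ["A died in X.", "A died in Y."],
    "label": [2, 0],  # check model.config.id2label; assumed 2=entail, 0=contra
}).map(encode, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=10,
                           per_device_train_batch_size=8),
    train_dataset=few_shot,
)
trainer.train()
```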
**Question 2**: _Can LMs Generalize from Explicit to Implicit Respective Readings?_

Instances of WikiResNLI have coordinate structures with an equal number of conjuncts, and linguists have argued that such semantic relations are reflected in the syntactic relations (Goodall, 1987; Moltmann, 1992). The phenomenon is essentially semantic but also relies on pragmatically available information about the truth conditions. Respective readings in fact commonly omit explicit lexical indicators but remain available and even preferred, as in 2(a) (Gawron and Kehler, 2004). We are therefore interested in whether LMs can learn the semantic-pragmatic meaning of respective reading sentences rather than only making use of lexical and syntactic cues. We fine-tune the DeBERTa model previously fine-tuned with M, F, Ling and WANLI with different numbers of WikiResNLI\({}_{\text{EXPLICIT}}\) examples, without a dev set, since we do not want to bias the model towards our datasets and hurt performance on the other NLI tasks. We fine-tune the model with WikiResNLI\({}_{\text{EXPLICIT}}\) and WikiResNLI\({}_{\text{IMPLICIT}}\) separately and report the overall accuracy on both datasets in Figure 2. Training with WikiResNLI\({}_{\text{EXPLICIT}}\) contributes to a steady performance increase on both WikiResNLI\({}_{\text{EXPLICIT}}\) and WikiResNLI\({}_{\text{IMPLICIT}}\). In particular, 1-shot learning enhances the performance clearly, with a 10% increase for in-domain evaluation and a remarkable 30% increase for explicit-to-implicit generalization. The improvements are small from 1-shot to 8-shot. Only at 16 shots do both WikiResNLI\({}_{\text{EXPLICIT}}\) in-domain learning and transfer to WikiResNLI\({}_{\text{IMPLICIT}}\) reach 100% accuracy. This shows that respective readings can be learned, despite the need to see relevant instances 128 times (see Table 5). Interestingly, in-domain few-shot learning of WikiResNLI\({}_{\text{IMPLICIT}}\) witnesses a relatively cold start: the accuracy does not rise above 60% until 16 shots. Generalization from implicit respective readings to explicit readings surprisingly does not reach 100% accuracy even after full supervision. We are keen to investigate what types of instances are difficult to learn for explicit-to-implicit respective reading generalization.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline **\# Shots** & 1 & 2 & 4 & 8 & 16 & 32 & 64 & Full \\ **Type** & & & & & & & & \\ \hline All & 8 & 16 & 32 & 64 & 128 & 256 & 512 & 12,616 \\ Basic & 4 & 8 & 16 & 32 & 64 & 128 & 256 & 6,308 \\ \hline \hline \end{tabular} \end{table} Table 5: Number of training instances for each number of shots. A "shot" contains multiple training instances since we always take a premise along with all of its generated hypotheses: 8 in the general case and 4 in the basic case.

Figure 2: Overall performance of DeBERTa on WikiResNLI\({}_{\text{EXPLICIT}}\), WikiResNLI\({}_{\text{IMPLICIT}}\) and NatResNLI from zero-shot to fully supervised. _Wiki_ex-Wiki_ex_ refers to training with WikiResNLI\({}_{\text{EXPLICIT}}\) instances and evaluating on the WikiResNLI\({}_{\text{EXPLICIT}}\) test set. Similarly, _Wiki_im-Nat_ refers to training with WikiResNLI\({}_{\text{IMPLICIT}}\) and testing on NatResNLI.

Figure 3: DeBERTa's performance on WikiResNLI\({}_{\text{IMPLICIT}}\) after fine-tuning on WikiResNLI\({}_{\text{EXPLICIT}}\) or WikiResNLI\({}_{\text{IMPLICIT}}\). The results are broken down by fine-grained contradiction type.
In Figure 3, we break down the WikiResNLI\({}_{\text{IMPLICIT}}\) instances with contradiction labels by category (1S1O, 1S2O and 2S1O) and plot the accuracy against the number of shots. As can be seen, the performance on explicit readings is always better than on implicit readings across all three contradiction types. Among them, 1S2O and 2S1O instances are the most difficult: their accuracies are below 40% and 20%, respectively, before 16 shots, and only at 32 shots do both types reach above 95% accuracy. Unlike in in-domain learning, 1S2O never gets perfectly solved.

**Question 3**: _Can LMs Generalize from Synthetic to Natural Respective Readings?_

WikiResNLI is a synthetic dataset, and it remains unclear whether models can reason with respective readings in realistic settings if we generate enough synthetic data and feed it to models. With NatResNLI, we are able to investigate LMs' ability to generalize respective reading reasoning from synthetic to natural data and its alignment with humans. We evaluate the models fine-tuned with WikiResNLI\({}_{\text{EXPLICIT}}\) on NatResNLI and plot the performance in Figure 4. We can observe that scores on NatResNLI are almost always lower than on WikiResNLI due to domain drift. In particular, 1S2O and 2S1O are 10% and 20% lower in zero-shot settings. 1S2O manages to reach on-par performance with WikiResNLI after 16 shots, while 2S1O does so after 32 shots. Interestingly, the models are able to surpass 95% after 32 shots, while the pre-assigned labels only match human annotations 90% of the time (see Table 2). Although we are comparing a rule-based method with 32-shot (256 examples) training, we can conclude that models are able to align with humans in respective reading reasoning. In addition, we notice that for 1S2O and 2S1O generalization, the complex linguistic structures discussed in Section 3.2 have a high impact in the low-shot regime, but the difficulty diminishes as more training data are used.

**Question 4**: _What Cues do LMs Rely on?_

So far we have discussed LMs' ability to generalize on the syntactic-semantic level, from explicit to implicit and from synthetic to natural respective readings. But it is yet to be determined whether the models simply adopt lexical-syntactic heuristics for prediction and whether they leverage common sense and world knowledge. If models can reason over basic hypothesis structures (1S1O entailment and 1S1O contradiction), it would be expected that they are aware that the one-to-one relation correspondence excludes 1S2O and 2S1O propositions, based on common sense and world knowledge. Although there are cases in NatResNLI, such as 8(a), where one object entity includes the other, all cases of the WikiResNLI test set dissolve this situation due to the mutually exclusive properties. Therefore, we fine-tuned the DeBERTa models with only WikiResNLI\({}_{\text{EXPLICIT}}\) instances of basic structures and evaluated their performances on the 1S2O and 2S1O portions of both WikiResNLI\({}_{\text{EXPLICIT}}\) and WikiResNLI\({}_{\text{IMPLICIT}}\). The results can be seen in Figure 5.

Figure 4: Performance of DeBERTa on NatResNLI after being fine-tuned on WikiResNLI\({}_{\text{EXPLICIT}}\). To facilitate comparison, we mark performances on WikiResNLI\({}_{\text{EXPLICIT}}\) in darker colours.

Figure 5: Performance of DeBERTa on WikiResNLI\({}_{\text{EXPLICIT}}\) and WikiResNLI\({}_{\text{IMPLICIT}}\) after being fine-tuned only with the basic types (entailment and 1S1O contradiction) of WikiResNLI\({}_{\text{EXPLICIT}}\).
We can observe that the generalization from basic structures to unseen structures is indeed difficult: while training with all structures and evaluating on all structures achieves perfect scores on the 1S2O and 2S1O portions of WikiResNLI\({}_{\text{EXPLICIT}}\) at 16 shots, training with basic structures yields only 58% and 75% accuracy. It is worth noting that all fine-tuning instances have either entailment or contradiction labels, and therefore a random-guessing baseline would be 50% instead of 33.3%. The generalization from explicit respective readings with basic structures to implicit 1S2O and 2S1O is more disappointing. At 16 shots, the accuracies are only 18% and 30%, respectively, well below the chance level. Even full supervision can only achieve around 60% accuracy for both structures. The results indicate that the models do not effectively learn the abstract respective reading relations, for lack of the required commonsense and world knowledge. We look into the intersection of the errors of the 32-shot, 64-shot and fully-supervised models which are fine-tuned on WikiResNLI\({}_{\text{EXPLICIT}}\) and evaluated on WikiResNLI\({}_{\text{IMPLICIT}}\). There are 358 1S2O and 248 2S1O instances that are consistently mistaken by the models. The top-5 frequent properties are: twinned administrative bodies, took place, are capitals of, buried in, and family names. Knowledge about relative location 9(a) and knowledge about humans 9(b) thus seem to play an important role in reasoning with implicit respective readings.

**Impact on other NLI tasks.** We evaluate all models fine-tuned with WikiResNLI above on other NLI tasks, i.e., MNLI-m and ANLI-R3, to check whether fine-tuning on such a label-imbalanced dataset hurts performance. Interestingly, full supervision with WikiResNLI\({}_{\text{IMPLICIT}}\) of basic structures results in new state-of-the-art performance for DeBERTa: on MNLI-m, the score improves from 90.8% to 91.4%, and on ANLI-R3, the performance rises from 63.6% to 64.1%.

**Experiments on LLaMA, FLAN-T5 and GPT-JT.** Significant advancements in large generative LMs have been achieved in the realm of general natural language understanding. These improvements can be attributed to enhanced training strategies, such as incorporating code and human instructions into pretraining/fine-tuning data and RLHF (Christiano et al., 2017; OpenAI, 2023). We assess the zero-shot and in-context learning abilities of three open-source generative models: LLaMA-7B (Touvron et al., 2023), FLAN-T5-XL (Chung et al., 2022) and GPT-JT-6B (Wang and Komatsuzaki, 2021; Together, 2022). In this study, our focus is on two representative scenarios, namely generalizing from explicit to implicit readings and generalizing from synthetic to natural readings. We adopt the template _{premise} Question: Does this imply that {hypothesis}?_, as it attains top-tier results for NLI tasks (Webson and Pavlick, 2022). Figure 6 illustrates the explicit-to-implicit generalization results. Notably, FLAN-T5 achieved a near-perfect score on zero-shot entailment pairs, comparable to the fine-tuned DeBERTa. However, GPT-JT, despite being instruction-tuned on NLI datasets, performed at a mere chance level on entailment pairs, while LLaMA scored below 10% accuracy. In terms of contradiction instances, all three models scored below 60% accuracy, with in-context learning offering limited improvement at the 4-shot level. Specifically, FLAN-T5's performance decreased after in-context learning.
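The zero-shot probe with the template above takes only a few lines; the sketch below uses FLAN-T5-XL, with the decoding setup and yes/no answer mapping being our assumptions rather than the paper's exact evaluation harness.

```python
# Zero-shot probe with the paper's prompt template (decoding settings and
# answer mapping are our assumptions).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/flan-t5-xl")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-xl")

premise = ("Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle, "
           "respectively.")
hypothesis = "Kurt Cobain was born in Copenhagen."
prompt = f"{premise} Question: Does this imply that {hypothesis}?"

inputs = tok(prompt, return_tensors="pt")
answer = tok.decode(model.generate(**inputs, max_new_tokens=5)[0],
                    skip_special_tokens=True)
print(answer)  # map "yes" to entailment, otherwise to non-entailment
```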
For the generalization from WikiResNLI to NatResNLI, shown in Figure 7, we observed similar trends as in the previous experiments. FLAN-T5 outperformed the other models on entailment instances, and LLaMA demonstrated significant improvement within a few shots. However, for contradiction pairs, all models experienced only a modest increase in accuracy from 1 to 4 shots, with the highest accuracy remaining below 60%. To conclude, while large generative models have made significant strides in natural language understanding, they still face substantial challenges in reasoning with respective readings, highlighting the need for further research and development on the long tail of linguistic constructions.

## 5 Related Work

Logical relations between two sentences are a core aspect of language understanding (Frege, 1879; Heijenoort, 1967; Blackburn et al., 2006). To facilitate large-scale model evaluation, NLP researchers have developed manually labelled NLI corpora, typically for 2/3-way classification (Dagan et al., 2013; Bowman et al., 2015; Williams et al., 2018). In recent years, researchers have started to analyze the characteristics of these datasets, such as annotation artefacts (Gururangan et al., 2018), syntactic heuristics (McCoy et al., 2019) and adversarial collection processes (Williams et al., 2022). In computational linguistics, distributive predication has been analyzed by means of distributivity operators (Massey, 1976; Link et al., 1983; Roberts, 1987; Lasersohn, 1998), and linguists have worked on extending first-order logical forms to include distributive and collective readings (Martin, 1981; Alshawi and van Eijck, 1989). Scha and Stallard (1988) present a recursive translation rule scheme to account for multi-level plurals. Aone (1991) proposed a reasoner consisting of domain-dependent constraints and domain-independent axioms for collective and distributive ambiguity. Shaw and McKeown (2000) described a simplified quantifier system to minimize distributive and collective ambiguities.

## 6 Conclusion

Even if a black-box model cannot reason over a construction out of the box, it should be able to learn it with as few examples as possible. We proposed two datasets, WikiResNLI (a controlled synthetic dataset) and NatResNLI (a naturally occurring dataset), to probe LMs' ability to do so in zero-shot and few-shot settings. We find that explicit reasoning is easier to learn than implicit reasoning, and LMs fail to generalize when common sense inference is needed. We confirm that diverse and complex training data are necessary to achieve human-level performance.

## 7 Limitations

Linguistic studies have shown that respective readings do not necessarily require two coordinate structures in the same sentence (Dalrymple and Kehler, 1995). Both WikiResNLI and NatResNLI have only one sentence in the premise and do not exhaust all possible and complicated realizations of respective readings. However, we are able to discuss and investigate LMs' generalizability with "respectively" across three constructions, i.e., 1S1O, 1S2O and 2S1O. Our experiments are English-specific and are limited to LMs that can be run on an academic budget. However, our conclusions about generalizability towards respective readings should be viewed as language-agnostic, given that under-discussed linguistic constructions exist in many other languages and merit researchers' attention.

## 8 Acknowledgments

We would like to thank the members of the CoAStaL NLP group and the anonymous reviewers for their helpful suggestions.
2308.16880
Text2Scene: Text-driven Indoor Scene Stylization with Part-aware Details
We propose Text2Scene, a method to automatically create realistic textures for virtual scenes composed of multiple objects. Guided by a reference image and text descriptions, our pipeline adds detailed texture on labeled 3D geometries in the room such that the generated colors respect the hierarchical structure or semantic parts that are often composed of similar materials. Instead of applying flat stylization on the entire scene at a single step, we obtain weak semantic cues from geometric segmentation, which are further clarified by assigning initial colors to segmented parts. Then we add texture details for individual objects such that their projections on image space exhibit feature embedding aligned with the embedding of the input. The decomposition makes the entire pipeline tractable to a moderate amount of computation resources and memory. As our framework utilizes the existing resources of image and text embedding, it does not require dedicated datasets with high-quality textures designed by skillful artists. To the best of our knowledge, it is the first practical and scalable approach that can create detailed and realistic textures of the desired style that maintain structural context for scenes with multiple objects.
Inwoo Hwang, Hyeonwoo Kim, Young Min Kim
2023-08-31T17:37:23Z
http://arxiv.org/abs/2308.16880v1
# Text2Scene: Text-driven Indoor Scene Stylization with Part-aware Details ###### Abstract We propose Text2Scene, a method to automatically create realistic textures for virtual scenes composed of multiple objects. Guided by a reference image and text descriptions, our pipeline adds detailed texture on labeled 3D geometries in the room such that the generated colors respect the hierarchical structure or semantic parts that are often composed of similar materials. Instead of applying flat stylization on the entire scene at a single step, we obtain weak semantic cues from geometric segmentation, which are further clarified by assigning initial colors to segmented parts. Then we add texture details for individual objects such that their projections on image space exhibit feature embedding aligned with the embedding of the input. The decomposition makes the entire pipeline tractable to a moderate amount of computation resources and memory. As our framework utilizes the existing resources of image and text embedding, it does not require dedicated datasets with high-quality textures designed by skillful artists. To the best of our knowledge, it is the first practical and scalable approach that can create detailed and realistic textures of the desired style that maintain structural context for scenes with multiple objects. ## 1 Introduction Virtual spaces provide an immersive experience for the metaverse, films, or games. With increasing demands for virtual environments, various applications seek practical methods to create realistic 3D scenes with high-quality textures. Currently, skillful artists need to manually create 3D assets and accompanying textures with careful parameterization, which is not scalable enough to account for the diverse content the industry is heading for. Scenes can also be populated with existing 3D database models or created with recent shape-generation approaches using data-driven methods [36, 58]. However, most of them lack texture information or are limited to simple coloring. To build realistic content, we need fine details containing the artistic nuances of styles that obey the implicit correlations with geometric shapes and semantic structure. Recent works provide methods to color a single object with the help of differentiable rendering [5], but they are often limited to a single texture or blurred boundaries [6, 21, 29, 31, 56]. More importantly, 3D objects are often textured in isolation, and only limited attempts exist to add visual appearances to large-scale scenes with multiple objects [14, 15, 19]. The biggest challenge is adding a consistent style to an entire scene while still accounting for the boundaries of different materials due to the functional and semantic relationships between parts, as observed within real-world scenes. Our proposed Text2Scene adds plausible texture details on 3D scenes without explicit part labels or large-scale data with complex texturing. We take inspiration from abundant 3D shape and image datasets and decompose the problem into sub-parts such that the entire scene can be processed with commodity memory and computation. Given scenes of multiple objects of 3D mesh geometry, we separately handle walls and individual objects. Specifically, the stylization of walls is formulated as texture retrieval, and the objects are initialized with base colors.
From the base color assignment, we can deduce the part-level relationship for stylization and further refine it in later stages, such that the rendered images are close to the input text within the joint embedding space of foundational models. Our coarse-to-fine strategy keeps the problem tractable yet generates high-quality texture with clean part boundaries. We first create segments of the input mesh such that the segment boundaries align with low-level geometric cues. Then we start with the simplified problem of assigning a color per segment. Interestingly, the prior obtained from large-scale image datasets assigns similar colors to parts with similar textures, reflecting the semantic context or symmetry, as shown in Figure 1. We add the detailed texture on individual objects as an additional perturbation on the assigned base colors by enforcing constraints on the image features of their projections. The additional perturbations are high-frequency neural fields added to the base color. In summary, Text2Scene is a new method that

* can easily generate realistic texture colors of the scene with the desired style provided by text or an image;
* can add detailed texture that respects the semantic part boundaries of individual objects; and
* can process the entire scene without a large amount of textured 3D scenes or an extensive memory footprint.

We expect the proposed approach to enable everyday users to quickly populate virtual scenes of their choice and enjoy the possibility of next-generation technology with high-quality visual renderings.

## 2 Related works

**3D Shape Understanding and Segmentation.** Our proposed method creates a realistic texture that abides by implicit rules of the material composition of different object parts. Different materials or textures are often assigned to different functional parts of an object, whose boundaries align with geometric feature lines. A handful of previous works find segments that constitute a 3D model with geometric information [7, 12, 20, 37], while others train 3D features to distinguish part labels provided in datasets [9, 32, 41]. Recently, PartGlot [25] suggested discovering part segments from language references, which should contain functional information. However, the geometric or functional distinction does not always clarify the texture boundaries, with possible additional diversity originating from the designers' choice. The intricate rules are multi-modal distributions composed of a mixture of discrete part assignments and continuous texture details, whose results are contained within visual datasets of scenes.

Figure 2: Our scene stylization results. Given a target image \(I_{t}\) and the style text \(t_{s}\), Text2Scene can produce stylized results for the entire scene.

**Neural Fields and Texture.** A traditional texture atlas of 3D geometry is represented as a mapping from a planar image to a manifold embedded in 3D space and involves complex parameterization. With the increasing popularity of neural representation, TextureFields [33] represented texture as a mapping from 3D surface points to RGB color values without separate parameterization. It is deeply connected with graphics pipelines that adapt coordinate-based functions to depict SDF shapes [27, 48, 35] or synthesize novel views using implicit volumes [30]. Various works show that neural implicit representation is highly flexible and free from domain structure or resolution [23, 26, 40, 53].
Recently, Text2Mesh [29] generated the deformation and coloring of an input mesh with neural fields guided by the joint embedding of rendered images and input text. Interestingly, the generated mesh deformation and coloring also contain semantic part information. Our Text2Scene framework further extends this ability and recovers distinct part boundaries, which could not be captured in previous works. We also increase the realism of the resulting scenes with high-frequency details. We encode the input to the neural network with a high-order basis of intrinsic features [24] and apply a coarse-to-fine strategy, as shown with the geometric details of Yifan _et al._ [55].

**Text-Driven 3D Stylization.** Recently, neural networks trained with large-scale images and texts have demonstrated powerful performance in many tasks with their extensive representation power. Here we primarily focus on 3D stylization attempts using image features. CLIP [42] learns a latent space from a large amount of image-text pairs, and the additional text input allows semantic manipulation for various generative tasks, including images [38, 43, 44, 45, 3], videos [3, 17], motions [49, 50], and 3D assets [18, 21, 39, 46]. However, for text-driven 3D stylization, it is still difficult to define clean texture boundaries and correlations with instance-level subtleties [31, 29, 6], and some works focus only on a specific type of object, such as humans [57, 16]. As another way to stylize a 3D scene [4, 54], Yeh _et al._ [54] match the input image features with CAD input using differentiable material graphs [47]. However, the representation is inherently limited to a combination of material libraries and cannot handle non-homogeneous details such as paintings. On the other hand, our approach can generate texture beyond the repetitive low-level patterns of input materials and introduce part-aware texture with fine details that maintain geometric and semantic consistency.

## 3 Method

We stylize a 3D indoor scene without sophisticated techniques or software tools for 3D modeling or texturing. The input 3D scene \(\mathcal{S}=\{\mathcal{W},\mathcal{O}\}\) is a set of structure components \(\mathcal{W}\) and a set of objects \(\mathcal{O}\). The structural components \(\mathcal{W}\) are walls, ceilings, and floors, whereas the objects \(\mathcal{O}=\{M_{i}\}\) are the 3D mesh models \(M_{i}\). We assume that all the components have their corresponding class labels and optionally have text descriptions.

Figure 3: Overall pipeline of Text2Scene. Given a 3D scene \(\mathcal{S}\) with an optional text description, we generate textures guided by a target image \(I_{t}\) and the appearance style provided as a text \(t_{s}\). We first stylize the structure by texture retrieval, and each object is pre-processed for part discovery for stylization. Then, we assign base colors to object parts and add local details for each object with the designated LNSF.

The desired color distribution is provided as a target image
Specifically, the histogram loss \(\mathcal{L}_{\text{hist}}\) is defined as below \[\mathcal{L}_{\text{hist}}\left(I,I_{t}\right)=\|H^{1/2}\left(I\right)-H^{1/2}( I_{t})\|_{2}, \tag{1}\] where \(H\left(\cdot\right)\) indicates differentiable color histogram operator [1]. Also, we augment the losses derived from the joint embedding of text and images [42] by generating text descriptions \(T\) of the context using semantic labels and comparing them against the rendered image \(I\). If we denote the pre-trained encoder for image and text as \(E_{1}\) and \(E_{2}\), respectively, the CLIP similarity loss is defined as \[\mathcal{L}_{\text{clip}}\left(I,T\right)=1-sim(E_{1}\left(I\right),E_{2} \left(T\right)), \tag{2}\] where \(sim\left(\mathbf{x},\mathbf{y}\right)=\frac{\mathbf{x}^{\top}\mathbf{y}}{ \left\|\mathbf{x}\right\|_{2}\left\|\mathbf{y}\right\|_{2}}\) is the cosine similarity. The overall pipeline is described in Fig. 3. We obtain the texture for the structure \(\mathcal{W}\) by texture retrieval, which is described in Sec. 3.1. The objects are stylized with additional decomposition to respect local part boundaries and, simultaneously, to practically handle multiple entities with details (Sec. 3.2). ### Structure Stylization We assign one coherent texture per structural element of \(\mathcal{W}\). Compared to objects, the structural elements, such as walls or ceilings, are of simple planar geometry, and their textures are not heavily dependent on the relationships between different functional parts. For structural elements, it suffices to pick texture from the texture set of an existing material library of MATch [47], which contains homogeneous material. If we instead utilize visual features or CLIP embeddings for them, the resulting stylization exhibits undesired artifacts of various sizes instead of constant patterns, as shown in the supplementary. We randomly initialize the texture from the texture set and render the structure image \(I_{s}\), a bare room only containing the structural elements \(\mathcal{W}\) without objects. Then the materials are compared to the target image \(I_{t}\) for the histogram losses of Equation (1). The additional text prompt \(T_{s}\) is given as 'a structure of a room' to provide the context with \(\mathcal{L}_{\text{clip}}\left(I_{s},T_{s}\right)\). In summary, the texture of the structural element is retrieved to have the lowest score on the following criteria: \[\mathcal{L}_{\text{hist}}\left(I_{s},I_{t}\right)+\lambda_{1}\cdot\mathcal{L} _{\text{clip}}\left(I_{s},T_{s}\right). \tag{3}\] ### Object Stylization Object stylization involves understanding the semantic structure hidden behind the mesh representation. As a pre-processing, we first subdivide individual objects \(M_{i}\) in \(\mathcal{O}\) into part segments \(\{s_{ik}\}\) as described in Sec. 3.2.1. Then the scene is stylized in two steps. First, we assign base colors into individual parts to minimize the style loss for the entire scene \(\mathcal{S}=\left\{\mathcal{W},\mathcal{O}\right\}\) (Sec. 3.2.2). Here we are optimizing for the discrete set of colors assigned to subdivided parts obtained from the pre-processing. Then the textures for individual objects are further optimized to generate fine details (Sec. 3.2.3). #### 3.2.1 Part Discovery for Object Stylization We first decompose individual objects into parts such that each part is composed of the same material or texture. 
#### 3.2.1 Part Discovery for Object Stylization

We first decompose individual objects into parts such that each part is composed of the same material or texture. The distinctive part boundaries are critical in providing semantic consistency and, therefore, perceptual realism to the scene. Similar to 3D part segmentation methods, we first find super-segments based on geometric features, which provide the granularity to define textural parts. For a given 3D object mesh \(M_{i}\), the initial segments \(\{s^{0}_{ik}\}\) are the decomposition obtained by applying the method of Katz _et al._ [20]. The decomposition is designed to be an over-segmentation of our aim. We generate a graph \(\mathcal{G}^{0}_{i}\) where each node is a segment and edges connect neighboring segments. Then we incrementally merge segments that belong to the same texture until convergence. Note that our part discovery method operates robustly regardless of the initial composition, but we use [20], which preserves the original geometry and details. The challenge here is that, unlike for semantic segmentation approaches, no large-scale public dataset exists that provides ground truth for 'texture similarity' as segmentation labels. We find a supervision signal from a large-scale pre-trained model and create a simple text prompt \(T_{i,c}\) using the class name, such as _a bed_ or _a chair_. At the \(l^{\text{th}}\) iteration, we assign a color \(c(s^{l}_{ik})\) to each segment \(s^{l}_{ik}\), which is optimized to minimize the distance between the rendered images \(I_{M_{i}}\) of multiple viewpoints and the text \(T_{i,c}\) in the joint embedding space of CLIP, i.e., \(\mathcal{L}_{\text{clip}}(I_{M_{i}},T_{i,c})\) as defined in Equation (2). If the resulting colors assigned to two adjacent segments are similar, the two parts are likely to share the same texture source. Therefore, we merge the two segments for the next iteration \(\{s_{ik}^{l+1}\}\). In particular, we merge segments if the assigned colors have a distance of less than a threshold \(\lambda_{th}\) in the CIE color space, which is known to be related to human perception. Note that while the initial color for the assignment is gray, merging happens with the optimized colors. We repeat the process until the number of segments does not decrease anymore; empirically, it usually converges within 2-3 steps. The overall pipeline for part discovery is described in Fig. 4.

Figure 4: Overall pipeline of part discovery. We discover parts for object stylization from the super-segments of a 3D object mesh. Given a 3D object mesh \(M_{i}\) with segments \(\{s^{l}_{ik}\}\) at the \(l^{\text{th}}\) iteration, we assign a color \(c(s^{l}_{ik})\) per segment and generate a graph \(\mathcal{G}^{l}_{i}\). A pair of neighboring nodes is merged if the distance between the assigned colors \(c(s^{l}_{ik})\) is within a threshold; we then move to the \((l+1)^{\text{th}}\) iteration.
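The merge rule itself can be sketched in a few lines; the following is our illustrative rendering of it (the threshold value, helper names, and the use of `networkx`/`scikit-image` are assumptions, not the authors' code).

```python
import networkx as nx
import numpy as np
from skimage.color import rgb2lab

def merge_step(graph, colors, lam_th=10.0):
    # graph: segment adjacency graph; colors: {seg_id: (r, g, b) in [0, 1]}
    # obtained by optimizing one color per segment against the CLIP loss.
    # lam_th is an illustrative threshold in CIE Lab units.
    lab = {s: rgb2lab(np.array(c, dtype=float).reshape(1, 1, 3))[0, 0]
           for s, c in colors.items()}
    for u, v in list(graph.edges):
        if u in graph and v in graph and np.linalg.norm(lab[u] - lab[v]) < lam_th:
            nx.contracted_nodes(graph, u, v, self_loops=False, copy=False)
            return True  # a merge happened; re-optimize colors and repeat
    return False

# Iterate until no neighboring pair is close enough to merge:
# while merge_step(G, optimize_segment_colors(G)): pass
```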
The scene being optimized \(I\) is rendered with the stylized structure \(\mathcal{W}\) and the current estimates of the base colors. The clip loss considers both individual objects and the global scene and is calculated as the sum of object clip loss and global clip loss. \[\mathcal{L}_{\text{clip,scene}}=\lambda_{2}\cdot\sum_{i}\mathcal{L}_{\text{ clip}}\left(I_{M_{i}},T_{i}\right)+\lambda_{3}\cdot\mathcal{L}_{\text{clip}} \left(I,T\right). \tag{4}\] Unlike Sec. 3.2.1, text description \(T_{i}\) for object \(M_{i}\) could be a simple text prompt using the class name, or a detailed text prompt based on user choice. We render individual objects from various angles \(I_{M_{i}}\) and compare them with the text description \(T_{i}\). The embedding for the scene is also compared against the text embedding \(T\) represents the type of scene, for example, 'a bedroom'. By jointly applying the loss, the base color \(c\left(s_{ik}\right)\) is selected as a representative color that harmonizes nicely with the global context. #### 3.2.3 Detailed Stylization The base color is combined with additional details regressed from a neural network to express the detailed local texture. We define _local neural style field (LNSF)_ for each object, which generates the local textures added to the base color. The color for point \(p\) is defined by the following equation, \[c\left(s_{p}\right)+\alpha\cdot\mathcal{F}_{i}\left(\gamma\left(p\right), \phi\left(p\right),s_{p}\right). \tag{5}\] We train a LNSF \(\mathcal{F}_{i}\) per object, which outputs the color to be added to the base color \(c(s_{p})\), where \(s_{p}\) indicates the part segment id. The color range \(\alpha\) maintains the final color to be similar to the base color. \(\gamma\left(\cdot\right)\) is the positional encoding of the \(xyz\) coordinate to capture high-frequency details, and \(\phi(\cdot)\) represents the coefficients of the eigenfunctions for the intrinsic geometry using the Laplace-Beltrami operator. Therefore the object-specific neural fields respect the part boundaries and local geometric details. LNSF for each object are trained with additional style context \(\mathcal{L}_{\text{clip}}(I_{M_{i}},T_{i}^{+})\). The text prompt \(T_{i}^{+}\) augments the object description of \(T_{i}\) with appearance style description \(t_{s}\), such as'minimal style' or 'Mid-Century style'. By enforcing the same appearance style description, we can weakly bind the styles of individual objects in the same scene while optimizing separately. We could also use \(T_{i}^{+}\) instead of \(T_{i}\) for optimizing the base color, and it shows slightly better results. Part-aware Geometric DeformationEven though our main focus is to add colors to objects with local part-aware details, we could also concurrently produce part-aware geometric deformation with the same architecture. We can slightly change the LNSF \(\mathcal{F}\) to have two branches of output that estimate color and displacement. For each point on the surface, we assign color by Eq. 5 and adjust displacement along the vertex normal direction. To learn the effective deformation, we add geometric loss \(\mathcal{L}_{\text{clip}}(I_{M_{i}}^{geo},T_{i}^{+})\), where \(I_{M_{i}}^{geo}\) is an image rendering textureless geometry as [29]. Rendering and Implementation DetailsTo render each object, the individual objects \(M_{i}\) are scaled to fit a unit box. The predicted color of each vertex allows the entire mesh to be differentiable rendering through interpolation using [5]. 
**Rendering and Implementation Details.** To render each object, the individual objects \(M_{i}\) are scaled to fit a unit box. Predicting a color per vertex allows the entire mesh to be rendered differentiably through interpolation using [5]. We render individual objects with randomly augmented backgrounds (white, black, random Gaussian, checkerboard), which helps the pipeline focus on the foreground object [16, 18]. Inspired by [39], since CLIP has a bias for the canonical pose, we augment the text prompt with the azimuth angle of the view, namely 'front view of', 'side view of' or 'back view of'. Finally, in Sec. 3.2.3, random perspective transformations and random crops boost the learning of local details. To render a scene, we randomly sample from 20 pre-defined camera poses that evenly cover the entire scene \(\mathcal{S}\).

## 4 Experiments

Our stylization is first evaluated for individual objects in Sec. 4.1. We evaluate the quality of textured object meshes and also assess that our pipeline can discover part segments to assign realistic stylization. Then we demonstrate the stylization results of room-scale scenes with multiple objects in Sec. 4.2.

### Object Stylization

We show that our method can stylize various objects and produce realistic 3D assets to populate virtual scenes. We use object meshes of various types and sizes, collected from TurboSquid [51], 3D-FUTURE [13], and Amazon Berkeley Objects [10]. For detailed stylization, we subdivide each object into an average of 119,769 faces and 60,437 vertices. For large general objects such as beds, sofas, tables, etc., we use class labels and the text input for the specific style as 'a [_class label_], [_specific_] style'. However, we need a detailed description for small objects that cannot be explained only by class labels, such as books. For these objects, we configure detailed text individually, for example, 'a book of Harry Potter and the Sorcerer's Stone'. Figure 5 shows renderings of stylized results. We render objects in Blender [11] with a fixed lighting setup. The input mesh and the text are also provided in the left column. Since our method utilizes the obtained part information and a coarse-to-fine stylization scheme with a two-step approach, it creates a more realistic texture with clear boundaries for each part of the 3D assets. Text2Mesh [29] is another text-driven 3D stylization approach, which adds RGB color and deformation fields on the vertices of the mesh. While it augments detailed variations on the input mesh, the deformation map can occasionally introduce undesired artifacts, and the part boundaries are only roughly estimated. We also provide the results of an ablated version that directly uses LNSF without the two-step stylization scheme. Our part discovery module and coarse-to-fine stylization scheme play a critical role in producing realistic assets with high-quality stylization. Figure 1 contains more results on diverse objects.

Figure 5: Visual comparison with baselines. Our method utilizes the discovered part information and generates detailed textures through the designed network in combination with a coarse-to-fine stylization scheme. As a result, it generates a globally harmonious 3D style that clearly delineates part segments. We can handle diverse text, such as simple categories or styles, detailed descriptions, or text that includes two different object categories.

Because stylization is a subjective task and no ground
We ask 98 users to rate the quality of generated outputs on a scale of 1 (worst) to 5 (best) in response to the following questions: (Q1) 'How natural are the output results?', (Q2) 'How well does the output contain text information?', and (Q3) 'How well does the output reflect part information?' Figure 6 contains the mean and the standard deviation of the scores. Our method outperforms competing methods in all aspects. Therefore we conclude that our method generates a realistic texture that abides by input text description and semantic part information. We also provide a parallel comparison against Text2Mesh by incorporating deformation fields with our approach. Our original method preserves the original shape of the mesh, while Text2Mesh deforms vertices along the normal direction in addition to generating texture. We can make a similar version of our LNSF and produce additional deformation as described in Sec. 3.2.3. Figure 7 shows the deformed results from the source mesh compared to Text2Mesh. Since our approach explicitly considers the part information with the two-stage approach, our results with deformation fields also respect different semantic parts of the object. Part DiscoveryAs a side product of object stylization, we can discover different parts with distinguished boundaries that can guide a realistic color assignment (Sec. 3.2.1). Our part discovery combines geometric super-segments and implicit clues from image-text embeddings. The initial super-segments guide the algorithm to follow geometric feature lines, and the final results stably find part information despite different initializations. Figure 8 shows examples initialized with Katz _et al._[20] (top) and BSP-Net [7] (bottom). Even for input mesh with bad topology or different initialization, our iterative part discovery quickly finds visually coherent parts without any training with segmentation labels. ### Scene Stylization Now we demonstrate that our text2scene framework can quickly generate a realistic texture for a room with multiple objects. Recall that the 3D scene \(\mathcal{S}=\{\mathcal{W},\mathcal{O}\}\) is composed of the structure components \(\mathcal{W}\) and a set of objects \(\mathcal{O}\). We use the same objects as described in Sec. 4.1 and arrange them to constitute scenes. As there is no existing dataset composite of complete object meshes with labels, we built a total of four scenes: two bedrooms and two living rooms. Each scene contains an average of 20 objects of various sizes and classes. Additionally, we provide a target image \(I_{t}\) for the color distribution, and a text prompt describing the desired style \(t_{s}\). While it can be daunting to define the desired style for the entire scene, images and texts can provide a simple way to deliver the information. Figure 2 shows stylized results of the same geometry, but observing various target images and style prompts. The generated textures respect the semantic labels of various furniture and different parts and contain localized diverse details. This is in contrast to many prior stylization methods where the fine perturbations are spread throughout the scenes. We also show scenes with different types of rooms containing different objects, but stylized based on the same target image in Fig. 9. Additional results of various input configurations are available in the supplementary material. Note that our target image \(I_{t}\) does not need to restrict as Figure 8: The robustness against initial super-segmentation. 
Figure 8: The robustness against initial super-segmentation. The super-segments of the top and the bottom rows are generated by [20] and [7], respectively (left). Starting from the initial color assignment (middle), our approach stably finds a part decomposition and assigns different base colors, and therefore different texture information (right). The text _a chair_ is used. Figure 6: Results of the user study for object stylization. Figure 7: With a slightly modified LNSF \(\mathcal{F}\), we can simultaneously generate a texture with both color and displacement fields, and all of the generated style information respects the discovered part information. By changing random seeds, diverse results can be obtained from the same input conditions. Also, since we stylize each object as a whole, we can easily edit the 3D scene by relocating objects. These results can also be found in the supplementary material. We also provide users' evaluations of the quality of our stylization in Table 1. Users evaluate the results by answering the following questions: (Q1) 'How realistic is the output result?' and (Q2) 'How similar is the color distribution of the scene to the given images?' Users thus assess the overall quality of the outputs and how well they match the target image. Since there are no previous works that use the same setting, we compare against ablated versions of our method: _-retrieval_ replaces the separate texture retrieval module (Sec. 3.1) with the object stylization network applied to the structure components; _-hist,glo_ removes the color loss and the global CLIP loss for the base color assignment (Sec. 3.2.2); and _-detail_ removes the detailed stylization step for objects (Sec. 3.2.3). The responses to (Q1) indicate that Text2Scene generates high-quality scenes and that each of the components plays a crucial role in achieving realism; the effect of texture retrieval is the most prominent. The color distribution results (Q2) indicate that the base color assignment is critical, while the added local details matter more for the realism of the results. Figure 10 shows exemplar images of the ablated versions used for the user study. GPU Cost and Scalability: Text2Scene first assigns base colors to the discovered parts of all objects in the scene, and then generates the details of each object individually. The cardinality of the base color assignment is only a few hundred, so it does not require much memory and allows us to consider the whole scene even as the scale grows. The most memory-intensive process is the detail generation using a neural network, which only processes a single object at a time. Therefore the entire pipeline can be trained on a single 11 GB GPU, making it an accessible tool for casual users. On a single GPU, the base color allocation for an entire space takes 5 hours, and learning the details of each object takes 10 minutes. Limitations: While our approach achieves scene stylization, the pipeline handles individual objects separately after the base color assignment. This is a practical choice for scalability but may lack an understanding of the context of the entire space. Instead, we rely on the text description to weakly bind the objects into a similar style. One could design the pipeline to receive an additional input, such as a texture map or a lightweight network, and extend our model to better observe the holistic scene context within a limited GPU memory budget. Also, our pipeline requires a class label or an optional text description per object, which could be further automated.
## 5 Conclusions We introduce Text2Scene, a novel framework to generate textures for 3D scenes. Our hierarchical framework can handle a variety of objects, including highly detailed textures for objects such as books or paintings. By leveraging the representation power of pre-trained CLIP, the framework does not require any 3D datasets with texture or part annotations. Given the 3D mesh models and the class labels or text descriptions of objects, our framework easily produces stylized results from a chosen target image and a simple text description. We hope that Text2Scene will facilitate automatic interior recommendation and realistic virtual space generation. Acknowledgements: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00208197) and the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-02068, Artificial Intelligence Innovation Hub). Inwoo Hwang is supported by the Hyundai Motor Chung Mong-Koo Foundation. Young Min Kim is the corresponding author. \begin{table} \begin{tabular}{c c c c c} \hline \hline & _-retrieval_ & _-hist,glo_ & _-detail_ & Ours \\ \hline (Q1): Realistic & \(2.23(\pm 0.48)\) & \(3.68(\pm 0.49)\) & \(\textbf{4.02}(\pm 0.49)\) & \(\textbf{4.22}(\pm 0.42)\) \\ (Q2): Color & \(2.09(\pm 0.48)\) & \(2.63(\pm 0.59)\) & \(\textbf{3.95}(\pm 0.47)\) & \(\textbf{3.86}(\pm 0.49)\) \\ \hline \hline \end{tabular} \end{table} Table 1: Results of the user study for scene stylization. Figure 10: Ablation results for the scene. We used target image number 1 in Fig. 2 in all spaces. Figure 9: Results for different arrangements of objects. We used target image number 3 in Fig. 2 in both spaces.
2309.06762
Dark age consistency in the 21cm global signal
We propose a new observable for the 21cm global signal during the dark ages, the dark-age consistency ratio, which is motivated by the fact that the shape of the brightness temperature as a function of the frequency is cosmological-parameter independent in the standard $\Lambda$CDM model. The dark-age consistency ratio takes a certain definite value in the $\Lambda$CDM case, and it can serve as a critical test of the model and probe those beyond the standard one. The new observable only needs measurements of the brightness temperature at a few frequency bands during the dark ages, and thus it allows us to test cosmological scenarios even with limited information on the global signal.
Fumiya Okamatsu, Teppei Minoda, Tomo Takahashi, Daisuke Yamauchi, Shintaro Yoshiura
2023-09-13T07:19:27Z
http://arxiv.org/abs/2309.06762v1
# Dark age consistency in the 21cm global signal ###### Abstract We propose a new observable for the 21cm global signal during the dark ages, _the dark-age consistency ratio_, which is motivated by the fact that the shape of the brightness temperature as a function of the frequency is cosmological-parameter independent in the standard \(\Lambda\)CDM model. The dark-age consistency ratio takes a certain definite value in the \(\Lambda\)CDM case, and it can serve as a critical test of the model and probe those beyond the standard one. The new observable only needs measurements of the brightness temperature at a few frequency bands during the dark ages, and thus it allows us to test cosmological scenarios even with limited information on the global signal. pacs: 98.80.-k _Introduction_-- The 21cm line of neutral hydrogen can probe the evolution of the Universe over a wide range of redshift, especially at its higher end. Current observations of the global signal (sky-averaged signal) of the 21cm line, such as EDGES [1] and SARAS [2], and of its fluctuations, such as LOFAR [3; 4], MWA [5; 6], OVRO-LWA [7] and so on, have already reached high redshifts at the cosmic dawn. In the near future, SKAO [8] will also be operative at such high redshifts with much better sensitivities. Actually, the 21cm signal also appears during the so-called dark ages (\(30\lesssim z\lesssim 150\)); however, observing it is really challenging due to the Earth's ionosphere, radio frequency interference (RFI), and so on. Indeed, the dark ages have never been observed by any means; however, this era lies prior to the formation of the first objects in the Universe, and hence pristine cosmological information can be derived without astrophysical uncertainties. The 21cm line of neutral hydrogen would be a unique probe of this era, and there have been theoretical works studying what aspects of cosmology can be probed by using its signal during the dark ages, such as the primordial power spectrum [9; 10], primordial non-Gaussianities [11; 12; 13; 14; 15], isocurvature fluctuations [16; 17], gravitational waves [18; 19], primordial black holes [20; 21; 22], dark matter-baryon interaction [23; 24], dark matter annihilation/decay [25; 26; 27], a test of statistical isotropy and homogeneity [28], neutrino masses [9] and so on, in most of which fluctuations of the 21cm line are mainly studied, although the global signal can also be useful. Currently, various observation plans are being discussed to probe the 21cm signal during the dark ages using a telescope on the moon or a satellite orbiting around the moon, which can avoid the Earth's ionosphere and the RFI and make its detection possible, such as FARSIDE [29; 30], DAPPER [31; 32], NCLE [33; 34], LCRT [35; 36], DSL [37], and so on (see also [38]). On the theoretical side, a precision cosmology using the dark age 21cm signal has been put forward by [39] (for an early work on parameter estimation using the global signal during the dark ages, see [40]). Although the 21cm fluctuations would bring a lot of information on various cosmological aspects, the first target of these lunar missions would be the global signal. It is therefore a timely issue indeed to consider what we can learn from the 21cm global signal during the dark ages, with these planned missions in mind. _21cm global signal_-- Here we give some basic formulas to calculate the 21cm signal, particularly during the dark ages.
For reviews of the 21cm physics, we refer the readers to e.g., [41; 42]. The 21cm signal is characterized by the so-called differential brightness temperature \(T_{b}\): \[T_{b}=\frac{T_{s}-T_{\gamma}}{1+z}\left(1-e^{-\tau_{\nu}}\right)\,, \tag{1}\] where \(T_{s}\) and \(T_{\gamma}\) are the spin and radiation temperatures. Since we consider photons of the cosmic microwave background (CMB) as a backlight, \(T_{\gamma}\) coincides with the CMB temperature so that \(T_{\gamma}=T_{\rm CMB}\). \(\tau_{\nu}\) is the optical depth, which is given by \[\tau_{\nu}=\frac{3ch_{p}\lambda_{21}^{2}A_{10}x_{\rm HI}(1-Y_{p})n_{b}}{32\pi k_{B}T_{s}H}\,, \tag{2}\] where \(\lambda_{21}\) is the wavelength of the 21cm line, \(A_{10}\) is the Einstein A coefficient for the spontaneous decay, \(x_{\rm HI}\) is the neutral fraction of hydrogen, \(Y_{p}\) is the primordial mass fraction of helium, \(n_{b}\) is the number density of baryons and \(H\) is the Hubble parameter. \(c\), \(h_{p}\) and \(k_{B}\) are the speed of light, the Planck constant and the Boltzmann constant. By inserting Eq. (2) into Eq. (1), and assuming that the Universe is matter dominated and \(\tau_{\nu}\ll 1\), one obtains \[T_{b} \simeq 85\,{\rm mK}\,\,\left(\frac{T_{s}-T_{\gamma}}{T_{s}}\right) \left(\frac{\omega_{b}}{0.02237}\right) \tag{3}\] \[\times \left(\frac{0.144}{\omega_{m}}\right)^{1/2}\left(\frac{1-Y_{p}}{1-0.24}\right)\left(\frac{1+z}{100}\right)^{1/2}x_{\rm HI}\,.\] From this expression one can see that we just need to specify the following cosmological parameters to calculate \(T_{b}\) in the standard \(\Lambda\)-Cold-Dark-Matter (\(\Lambda\)CDM) model: the baryon density \(\omega_{b}\equiv\Omega_{b}h^{2}\) and the matter density \(\omega_{m}\equiv\Omega_{m}h^{2}\), with \(\Omega_{i}\) the energy density of the \(i\)-th component normalized by the critical energy density and \(h\) the Hubble constant in units of \(100\,{\rm km/s/Mpc}\). The evolution of the spin temperature can be given by [43] \[T_{s}^{-1}=\frac{T_{\gamma}^{-1}+x_{c}T_{k}^{-1}+x_{\alpha}T_{k}^{-1}}{1+x_{c}+x_{\alpha}}\,, \tag{4}\] where \(T_{k}\) is the matter temperature and \(x_{c}\) is the coefficient for atomic interactions, which is mainly determined by H-H collisions during the dark ages and depends on the cosmological parameters as \(x_{c}\propto\omega_{b}(1-Y_{p})\), since it depends on the number density of hydrogen. \(x_{\alpha}\) is the coefficient for the Wouthuysen-Field effect [43; 44], in which Lyman-\(\alpha\) photons effectively induce the transition between the hyperfine states. It can actually be neglected during the dark ages in the standard \(\Lambda\)CDM case, although one needs to take it into account in some cosmological scenarios beyond the \(\Lambda\)CDM model. Notice that Eq. (4) in the \(\Lambda\)CDM model during the dark ages gives \[\frac{T_{s}-T_{\gamma}}{T_{s}}=\frac{x_{c}}{1+x_{c}}\left(1-\frac{T_{\gamma}}{T_{k}}\right)\,, \tag{5}\] and in the later stage of the dark ages (\(30\lesssim z\lesssim 80\)), \(x_{c}\ll 1\) is realized; one then finds the scaling of the brightness temperature with the cosmological parameters as \[T_{b}\propto\frac{\omega_{b}^{2}(1-Y_{p})^{2}}{\omega_{m}^{1/2}}\,. \tag{6}\] The scaling with \(\omega_{b}\) and \(\omega_{m}\) has also been noticed in [39].
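For a quick numerical check of Eq. (3) and of the scaling in Eq. (6), the parameter dependence can be coded directly. This is a minimal Python sketch of our own, with the spin-temperature factor \((T_{s}-T_{\gamma})/T_{s}\) and \(x_{\rm HI}\) left as external inputs (in practice they are supplied by a recombination code).

```python
import math

def tb_mk(z, spin_factor, x_hi, omega_b=0.02237, omega_m=0.144, Yp=0.24):
    """Eq. (3): differential brightness temperature in mK.
    spin_factor = (T_s - T_gamma)/T_s; x_hi is the neutral hydrogen fraction."""
    return (85.0 * spin_factor * x_hi
            * (omega_b / 0.02237)
            * math.sqrt(0.144 / omega_m)
            * ((1.0 - Yp) / (1.0 - 0.24))
            * math.sqrt((1.0 + z) / 100.0))
```

With `spin_factor` and `x_hi` held fixed, doubling `omega_b` quadruples the output only through the factor \(\omega_{b}\) itself here; the full \(\omega_{b}^{2}\) scaling of Eq. (6) arises because \(x_{c}\), and hence `spin_factor`, is itself proportional to \(\omega_{b}(1-Y_{p})\) when \(x_{c}\ll 1\).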
Actually, by defining the following quantity \[C(\omega_{b},\omega_{m},Y_{p})\equiv\frac{\omega_{b}^{2}(1-Y_{p})^{2}}{\omega_{m}^{1/2}}\,, \tag{7}\] and rescaling \(T_{b}\) as \[T_{b}^{\rm sc}(\nu;\widetilde{\mathbf{\theta}},\mathbf{\theta})=T_{b}(\nu;\mathbf{\theta})\frac{C(\widetilde{\mathbf{\theta}})}{C(\mathbf{\theta})}\,, \tag{8}\] where \(\mathbf{\theta}=(\omega_{b},\omega_{m},Y_{p})\), one can obtain an almost identical brightness temperature. In Fig. 1, we show \(T_{b}\) with and without the rescaling according to Eq. (8), where we varied the cosmological parameters in the range of the \(5\sigma\) bounds from Planck data [45] for \(\omega_{b}\) and \(\omega_{m}\), and that from Hsyu et al. [46] for \(Y_{p}\) [47]. For calculations of the brightness temperature, we used a modified version of recfast [48; 49; 50; 51]. As seen from the figure, the rescaled \(T_{b}\) curves (red) have almost identical shapes, due to the fact that \(T_{b}\) scales as Eq. (6) and the position of the absorption trough remains unchanged even when the cosmological parameters are varied. On the other hand, the \(T_{b}\) curves without the rescaling (blue) are widely spread. To see more clearly that the shape of \(T_{b}\) as a function of the frequency is cosmological-parameter independent in the \(\Lambda\)CDM model, in Fig. 2 we plot \(T_{b}(\nu)\) divided by its value at some reference frequency \(\nu_{*}\) (red), where we take \(\nu_{*}=30\,{\rm MHz}\) for illustration purposes, with \(\omega_{b}\), \(\omega_{m}\) and \(Y_{p}\) varied within the \(5\sigma\) ranges as done in Fig. 1. For comparison, we also show \(T_{b}(\nu)\) simply divided by \(-20.3\) mK (blue), which corresponds to the value of \(T_{b}(\nu=30\,{\rm MHz})\) for the case assuming the mean values of the cosmological parameters. As seen from the figure, different cosmological parameters give almost identical shapes for \(T_{b}(\nu)\), particularly in the frequency range \(20\,{\rm MHz}<\nu<50\,{\rm MHz}\), which corresponds to the later stage of the dark ages \(30\lesssim z\lesssim 80\). _Consistency ratio as a new observable--_ The above arguments motivate us to consider the ratio of \(T_{b}\) at two different frequencies, which should take a nearly definite value regardless of the cosmological parameters and can be used as a consistency check of the model. We define the ratio as \[R_{\nu_{i}/\nu_{j}}\equiv\frac{T_{b}(\nu=\nu_{i}\,[{\rm MHz}])}{T_{b}(\nu=\nu_{j}\,[{\rm MHz}])}\,, \tag{9}\] where \(\nu_{i}\) and \(\nu_{j}\) are two different frequencies. We call this ratio "_the dark-age consistency ratio_", since it keeps the same value in the \(\Lambda\)CDM model to a high accuracy (although not exactly) even when we vary the cosmological parameters, particularly for \(\nu_{i}\) and \(\nu_{j}\) taken in the range \(20\,\mathrm{MHz}<\nu<50\,\mathrm{MHz}\). In Table 1, we show the ratios for several values of \(\nu_{i}\) with the reference frequency \(\nu_{j}=30\,\mathrm{MHz}\). As one can see from the table, the dark-age consistency ratios in the \(\Lambda\)CDM model are determined to better than one percent accuracy regardless of the values of the cosmological parameters. Therefore, if some observation indicates a deviation of \(R_{\nu_{i}/\nu_{j}}\) from the prediction of the \(\Lambda\)CDM model, it suggests a model beyond the standard one.
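Eqs. (7)-(9) translate directly into code. In the minimal sketch below, `tb_of_nu` is any callable returning \(T_{b}(\nu)\) in mK (for instance, interpolated from a recfast-based computation); the function names are our own.

```python
import math

def C_amp(omega_b, omega_m, Yp):
    """Eq. (7): parameter combination controlling the amplitude of T_b."""
    return omega_b**2 * (1.0 - Yp)**2 / math.sqrt(omega_m)

def rescale_tb(tb, theta, theta_ref):
    """Eq. (8): rescale T_b(nu; theta) to the reference parameters theta_ref.
    Each theta is a tuple (omega_b, omega_m, Yp)."""
    return tb * C_amp(*theta_ref) / C_amp(*theta)

def consistency_ratio(tb_of_nu, nu_i, nu_j=30.0):
    """Eq. (9): the dark-age consistency ratio R_{nu_i/nu_j}."""
    return tb_of_nu(nu_i) / tb_of_nu(nu_j)
```

A measured ratio can then be compared directly against the \(\Lambda\)CDM values tabulated in Table 1 below.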
In particular, the consistency ratio proposed here would be very useful even in the early stage of the lunar missions mentioned in the introduction, where data for only some limited frequency bands may be available. Even in such a case, the consistency ratio needs measurements of \(T_{b}\) at just two separate frequency bands. A detailed discussion of the expected constraints on the consistency ratio in future missions will be given in a separate work [52]. _Testing cosmology with the consistency ratio --_ The consistency ratio defined in Eq. (9) should be useful to probe cosmological models since, as discussed above, it takes a definite constant value to a high accuracy during the dark ages in the \(\Lambda\)CDM model, as shown in Table 1. A possible deviation from the \(\Lambda\)CDM value can arise by violating one or more of the following assumptions during the dark ages: (i) the Universe is matter-dominated; (ii) Lyman-\(\alpha\) sources are negligible; (iii) matter and photons are coupled via the Compton scattering; (iv) the radiation field is determined by the CMB. An example of the violation of (i) is the so-called early dark energy (EDE) scenario, where a component behaving like dark energy exists at some early times, much before the current accelerating Universe. EDE has been attracting attention on several occasions, such as a possible solution to the Hubble tension [53; 54] (for the current status of the tension, see, e.g., [55; 56]). EDE may also be able to address the so-called Helium anomaly [57], where the primordial Helium abundance measured by EMPRESS [58] may suggest a non-standard cosmological scenario. Actually, EDE has also been considered to explain the EDGES signal [59], in which case EDE can become a non-negligible component, or even a dominant one, during the dark ages. To describe the energy density of EDE, we can consider the following functional form adopted in [59]: \[\rho_{\mathrm{EDE}}=C_{\mathrm{EDE}}\frac{1+a_{c}^{p}}{a^{p}+a_{c}^{p}}\,, \tag{10}\] where \(a_{c}\) is the scale factor at which the behavior of the EDE energy density changes from \(\rho_{\mathrm{EDE}}=\mathrm{const.}\) to \(\rho_{\mathrm{EDE}}\propto a^{-p}\). \(C_{\mathrm{EDE}}\) can be fixed by giving the fraction of EDE at \(a_{c}\), which is defined as \[f_{\mathrm{EDE}}=\left.\frac{\rho_{\mathrm{EDE}}(z)}{\rho_{\mathrm{tot}}(z)}\right|_{z=z_{c}}=\frac{\rho_{\mathrm{EDE}}(z_{c})}{\rho_{r,m,\Lambda}(z_{c})+\rho_{\mathrm{EDE}}(z_{c})}\,, \tag{11}\] where \(\rho_{r,m,\Lambda}(z)\) is the sum of the energy densities of radiation, matter and the cosmological constant (a short numerical sketch of Eqs. (10) and (11) is given below). In Fig. 3, we show \(T_{b}\) in the EDE model with \((f_{\mathrm{EDE}},z_{c},p)=(0.8,150,6)\) and \((0.8,300,4)\) as examples. The assumptions (ii) and/or (iii) can be violated, for instance, in models where dark matter (DM) annihilates or decays, since DM annihilation/decay can produce photons in the energy range of Lyman-\(\alpha\) and give an extra heating source for the gas temperature \(T_{k}\). Indeed, there have been many works regarding the effects of DM annihilation/decay on the 21cm signal; in particular, see [25; 26; 27] for its implications for the 21cm signal during the dark ages. In Fig. 3, we show \(T_{b}\) in models with light DM decay for masses of \(3\,\mathrm{MeV}\) and \(10\,\mathrm{MeV}\) for illustration, which are calculated in the same manner as in [26].
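The EDE parametrization of Eqs. (10) and (11) above is simple to evaluate numerically. Below is a minimal Python sketch of our own; the input `rho_other_zc` (the density \(\rho_{r,m,\Lambda}\) evaluated at \(z_{c}\)) is assumed to come from the background cosmology.

```python
def rho_ede(a, a_c, p, C_ede):
    """Eq. (10): EDE energy density, constant for a << a_c and ~ a^(-p) after."""
    return C_ede * (1.0 + a_c**p) / (a**p + a_c**p)

def c_ede_from_fraction(f_ede, rho_other_zc, a_c, p):
    """Invert Eq. (11): fix C_EDE so that the EDE fraction at a_c equals f_ede."""
    rho_ede_zc = f_ede / (1.0 - f_ede) * rho_other_zc   # from f = x / (other + x)
    return rho_ede_zc * 2.0 * a_c**p / (1.0 + a_c**p)   # rho_ede(a_c) = C (1 + a_c^p) / (2 a_c^p)
```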
\begin{table} \begin{tabular}{|c|c|c|} \hline \(\nu_{i}\) & \(R_{\nu_{i}/30}\) & \(T_{b}(\nu_{i})\,[\mathrm{mK}]\) \\ \hline \(40\) & \(0.3873\pm 0.0029\) (\(0.76\%\)) & \(-7.923\pm 1.0107\) (\(12.76\%\)) \\ \hline \(35\) & \(0.6401\pm 0.0016\) (\(0.24\%\)) & \(-13.10\pm 1.7301\) (\(13.20\%\)) \\ \hline \(25\) & \(1.4454\pm 0.0023\) (\(0.16\%\)) & \(-29.59\pm 3.9133\) (\(13.23\%\)) \\ \hline \(20\) & \(1.8487\pm 0.0126\) (\(0.68\%\)) & \(-37.82\pm 4.8074\) (\(12.71\%\)) \\ \hline \end{tabular} \end{table} Table 1: Ratio \(R_{\nu_{i}/30}\) for several cases of \(\nu_{i}\) in the \(\Lambda\)CDM model. The uncertainty refers to the variation when the cosmological parameters \((\Omega_{b}h^{2},\Omega_{m}h^{2},Y_{p})\) are varied within the \(5\sigma\) range. For comparison, the ranges of \(T_{b}\) for the \(5\sigma\) variation of the cosmological parameters are also tabulated. The details of the calculations and cases with some other scenarios will be given in a separate paper [52]. Other examples of the violation of assumption (iii) include models with baryon-dark matter interaction [60; 61; 23], primordial magnetic fields [62; 63; 64; 65], and so on, which have also been discussed in the context of the EDGES signal. The assumption (iv) can be affected by an extra radio background, which has been discussed as a possible explanation of the EDGES signal [66; 67; 68; 69; 70]. Such an extra radio source is also suggested by ARCADE2 [71] and LWA1 [72], whose results motivate the following parametrization for the radiation temperature [69]: \[T_{\gamma}=T_{\rm CMB,0}(1+z)\left[1+A_{R}\left(\frac{\nu}{\nu_{\rm ref}}\right)^{\beta}\right]\,. \tag{12}\] Here \(A_{R}\) is the relative size of the extra source to the CMB temperature at the reference frequency \(\nu_{\rm ref}\), and \(\beta\) describes the frequency dependence of the radiation. We should note that the functional form should depend on the generation mechanism, and it may have a cutoff at some frequency. However, we adopt the form (12) for illustration purposes (a numerical sketch follows at the end of this passage). In [69], it has been suggested that the case with \(A_{R}=5.7\), \(\nu_{\rm ref}=78\,{\rm MHz}\) and \(\beta=-2.6\) could explain the EDGES signal at \(z=17\), which, however, would significantly distort \(T_{b}\) during the dark ages. In Fig. 3, \(T_{b}\) for the cases with \(A_{R}=0.05\) and \(0.005\) for \(\nu_{\rm ref}=78\,{\rm MHz}\) and \(\beta=-2.6\) is shown. Most models mentioned above can be tested by using the consistency ratio introduced in Eq. (9). In Fig. 4, the predictions of \(R_{40/30}\) and \(R_{20/30}\) for the \(\Lambda\)CDM model and some other example models, such as EDE, excess radio background and DM decay, are shown. As discussed above, the \(\Lambda\)CDM model predicts certain definite values for the ratios with a very small uncertainty regardless of the values of the cosmological parameters, and its prediction is represented by a single point in the \(R_{40/30}\)-\(R_{20/30}\) plane. In the other models, the ratios deviate from those of \(\Lambda\)CDM, from which one can clearly see that the new observable \(R_{40/30}\)-\(R_{20/30}\) should be useful to probe models beyond the standard model. Notice that, when model parameters are varied, the predictions for \(R_{\nu_{i}/\nu_{j}}\) also change.
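As flagged above, the excess-radio-background parametrization of Eq. (12) is straightforward to evaluate; a minimal Python sketch of our own follows, with the Fig. 3 parameters as defaults (\(A_{R}=0\) recovers the pure CMB).

```python
def t_gamma_radio(z, nu_mhz, A_R=0.05, nu_ref_mhz=78.0, beta=-2.6, T_cmb0=2.725):
    """Eq. (12): radiation temperature in K with an excess radio background.
    Frequencies in MHz; T_cmb0 is the present-day CMB temperature."""
    return T_cmb0 * (1.0 + z) * (1.0 + A_R * (nu_mhz / nu_ref_mhz) ** beta)
```

Because \(\beta<0\), the excess grows toward lower frequencies, which is why even a small \(A_{R}\) visibly shifts the dark-age consistency ratios in Fig. 4.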
_Conclusion --_ We proposed a new observable for the 21cm global signal during the dark ages, _the dark-age consistency ratio_, which is motivated by the fact that the shape of \(T_{b}\) as a function of the frequency is almost independent of the cosmological parameters in the \(\Lambda\)CDM model. Since it takes a certain definite value in \(\Lambda\)CDM, it can be used as a consistency check of the model. If a deviation from the \(\Lambda\)CDM value is observed, it would signal a model beyond the standard scenario. The new observable only needs measurements at a few separate frequency bands, and hence fruitful information on cosmology can be derived from the dark age 21cm global signal even at the early stage of the lunar or satellite missions expected in the foreseeable future. ###### Acknowledgements. This work was supported by JSPS KAKENHI 19K03874 (TT), 23K17691 (TT), 19H01891 (DY), 22K03627 (DY), 21J00416 (SY), 22KJ3092 (SY) and MEXT KAKENHI 23H04515 (TT). SY is supported by JSPS Research Fellowships for Young Scientists. This research was also supported by the grant of the OML Project by the National Institutes of Natural Sciences (NINS program No. OML022303). Figure 3: Plots of \(T_{b}\) for several models: EDE with \((f_{\rm EDE},z_{c},p)=(0.8,150,6)\) and \((0.8,300,4)\) (blue solid and dashed), dark matter decay with \(m_{\rm DM}=3\,{\rm MeV}\) and \(10\,{\rm MeV}\) (orange solid and dashed), and excess radio background with \(A_{R}=0.05\) and \(0.005\) (purple solid and dashed). The cosmological parameters are fixed as \(\Omega_{b}h^{2}=0.02237,\Omega_{c}h^{2}=0.12\) and \(Y_{p}=0.2436\). For reference, \(T_{b}\) for the \(\Lambda\)CDM case is shown with the cosmological parameters varied within the \(5\sigma\) ranges. Figure 4: Predictions for \(R_{40/30}\) and \(R_{20/30}\) of several example models. The red point corresponds to the prediction of the \(\Lambda\)CDM model. Blue, orange and purple points are those for models with EDE (\(p=4,z_{c}=150\)), excess radio background (\(\nu_{\rm ref}=78\,{\rm MHz},\beta=-2.6\)), and DM decay, respectively. Model parameters are depicted in the figure.
2309.09451
Fitchean Ignorance and First-order Ignorance: A Neighborhood Look
In a seminal work~\cite{Fine:2018}, Fine classifies several forms of ignorance, among which are Fitchean ignorance, first-order ignorance, Rumsfeld ignorance, and second-order ignorance. It is shown that there is an interesting relationship among some of them, which includes that in ${\bf S4}$, all higher-order ignorance reduces to second-order ignorance. This is thought of as a bad consequence by some researchers. It is then natural to ask how to avoid this consequence. We deal with this issue in a much more general framework. In detail, we treat the forms of Fitchean ignorance and first-order ignorance as primitive modalities and study them as first-class citizens under neighborhood semantics, in which Rumsfeld ignorance and second-order ignorance are definable. The main contributions include model-theoretical results such as expressivity and frame definability, and axiomatizations. Last but not least, by updating the neighborhood models via the intersection semantics, we extend the results to the dynamic case of public announcements, which gives us some applications to successful formulas.
Jie Fan
2023-09-18T03:10:14Z
http://arxiv.org/abs/2309.09451v2
# Fitchean Ignorance and First-order Ignorance: A Neighborhood Look ###### Abstract In a seminal work [15], Fine classifies several forms of ignorance, among which are Fitchean ignorance, first-order ignorance, Rumsfeld ignorance, and second-order ignorance. It is shown that there is an interesting relationship among some of them, which includes that in **S4**, all higher-order ignorance reduces to second-order ignorance. This is thought of as a bad consequence by some researchers. It is then natural to ask how to avoid this consequence. We deal with this issue in a much more general framework. In detail, we treat the forms of Fitchean ignorance and first-order ignorance as primitive modalities and study them as first-class citizens under neighborhood semantics, in which Rumsfeld ignorance and second-order ignorance are definable. The main contributions include model-theoretical results such as expressivity and frame definability, and axiomatizations. Last but not least, by updating the neighborhood models via the intersection semantics, we extend the results to the dynamic case of public announcements, which gives us some applications to successful formulas. Keywords: Fitchean ignorance, first-order ignorance, contingency, accident, unknown truths, expressivity, frame definability, axiomatizations, intersection semantics, successful formulas ## 1 Introduction Ignorance has been a hotly discussed theme in epistemology and many other fields since Socrates, who professed ignorance in e.g. the _Apology_ [1]. Just as there has been no consensus on the definition of knowledge, there has been no consensus on the definition of ignorance. Instead, there have been at least three views in the literature: the standard view, the new view, and the logical view.1 The standard view thinks that ignorance is merely the negation of propositional knowledge, the new view thinks that ignorance is the lack of true belief,2 whereas the logical view thinks that ignorance means neither knowing nor knowing not [13, 14, 27, 33, 34, 35].3 Footnote 2: For the discussion on the standard and new view, see [20] and references therein. Footnote 3: To the best of our knowledge, the first to evidently investigate ignorance from the logical view is [34] — also see its extended journal version [35], which though includes an _unsound_ transitive axiomatization, as shown in [14, pp. 102–103]. Recently there has been a flurry of research on ignorance. Various forms of ignorance have been proposed in the literature, such as pluralistic ignorance [26, 2, 30], circumscriptive ignorance [18], chronological ignorance [32], factive ignorance [19], relative ignorance [16], and disjunctive ignorance [10]. In a seminal paper [15], instead of discussing the definition of ignorance, Fine classifies several forms of ignorance, among which are 'ignorance of (the fact that)' (also called 'Fitchean ignorance' there), 'first-order ignorance (whether)', 'Rumsfeld ignorance' and 'second-order ignorance'. One is _ignorant of_ (the fact that) \(\varphi\), if \(\varphi\) is the case but one does not know it. One is _(first-order) ignorant whether \(\varphi\)_, if one neither knows \(\varphi\) nor knows its negation. One is _Rumsfeld ignorant of \(\varphi\)_, if one is ignorant of the fact that one is ignorant whether \(\varphi\). One is _second-order ignorant whether \(\varphi\)_, if one is ignorant whether one is ignorant whether \(\varphi\). As Fine [15] shows, there is an interesting relationship among some of the forms.
For instance, within the context of the system **S4**, second-order ignorance implies first-order ignorance; second-order ignorance implies Rumsfeld ignorance, and vice versa; one does not know one is Rumsfeld ignorant; one does not know one is second-order ignorant. However, all these results are based on the context of **S4**. It is then natural to ask what relationship there is among these forms in other contexts, based on the following reasons: firstly, although knowledge is usually based on **S4** (for instance in [17]), ignorance is not -- it is argued on the new view that ignorance is _not_ not-knowing (e.g. [29]); secondly, in the first explicitly logical studies on ignorance [34, 35], the semantic condition is arbitrary, without any restriction; moreover, in **S4**, all higher-order ignorance reduces to second-order ignorance -- this is called the _black hole_ of ignorance in [15] and a _quite problematic phenomenon_ in [3, p. 1060]. One may easily check that the latter two forms are definable in terms of the former two. It is the former two forms that are our focus here.4 It is important to distinguish these two forms. For instance, Fitchean ignorance satisfies the so-called _Factivity Principle_ (that is, if an agent is ignorant of \(\varphi\) then \(\varphi\) is true), but first-order ignorance does not.5 Moreover, since the operators of the two forms and their duals are not normal, the logic of Fitchean ignorance and first-order ignorance is not normal. As is well known, neighborhood semantics has been a standard semantic tool for non-normal modal logics since its introduction in 1970 [28, 24, 31, 4]. In the current paper, we will investigate the logical properties of the two forms of ignorance and their relationship under neighborhood semantics. As we will show, there is an interesting relationship among first-order ignorance, second-order ignorance, and Rumsfeld ignorance. For example, under any condition, Rumsfeld ignorance implies first-order ignorance, and second-order ignorance plus first-order ignorance implies Rumsfeld ignorance, whereas under the condition \((c)\), Rumsfeld ignorance implies second-order ignorance, and thus Rumsfeld ignorance amounts to second-order ignorance plus first-order ignorance. However, similar to the case of relational semantics [7], the situation may become quite involved if we study the two notions in a unified framework under neighborhood semantics. For instance, we are confronted with a difficulty in axiomatizing the bimodal logic, since we have only one neighborhood function with which to interpret two modal operators uniformly, which makes it hard to find suitable interaction axioms. The remainder of the paper is organized as follows. After briefly reviewing the syntax and the neighborhood semantics of the bimodal logic of Fitchean ignorance and first-order ignorance and also some related logics (Sec. 2), we compare the relative expressivity (Sec. 3) and investigate the frame definability of the bimodal logic (Sec. 4). We axiomatize the bimodal logic over various classes of neighborhood frames (Sec. 5). By updating the neighborhood models via the intersection semantics, we find suitable reduction axioms and thus reduce the public announcement operators to the bimodal logic, which gives us good applications to successful formulas (Sec. 6), where, as we shall show, any combination of \(p\), \(\neg p\), \(\neg\bullet p\), and \(\neg\nabla p\) via conjunction (or via disjunction) is successful under the intersection semantics.
Finally, we conclude with some future work in Sec. 7. ## 2 Syntax and Neighborhood Semantics This section introduces the languages and their neighborhood semantics involved in this paper. Fix a nonempty set \(\mathbf{P}\) of propositional variables, and let \(p\in\mathbf{P}\). In what follows, \(\mathcal{L}(\square)\) is the language of standard epistemic logic, \(\mathcal{L}(\nabla)\) is the language of the logic of (first-order) ignorance, \(\mathcal{L}(\bullet)\) is the language of the logic of Fitchean ignorance6, and \(\mathcal{L}(\nabla,\bullet)\) is the language of the bimodal logic of Fitchean ignorance and first-order ignorance. We will mainly focus on \(\mathcal{L}(\nabla,\bullet)\). For the sake of simplicity, we only exhibit the single-agent languages, but all our results also apply to multi-agent cases. Footnote 6: \(\mathcal{L}(\bullet)\) is also called 'the logic of essence and accident' or 'the logic of unknown truths', see e.g. [23, 33]. **Definition 1** (Languages).: \[\begin{array}{lclclcl}\mathcal{L}(\square)&\varphi&::=&p\mid\neg\varphi \mid\varphi\land\varphi\mid\square\varphi\\ \mathcal{L}(\nabla)&\varphi&::=&p\mid\neg\varphi\mid\varphi\land\varphi\mid \nabla\varphi\\ \mathcal{L}(\bullet)&\varphi&::=&p\mid\neg\varphi\mid\varphi\land\varphi\mid \bullet\varphi\\ \mathcal{L}(\nabla,\bullet)&\varphi&::=&p\mid\neg\varphi\mid\varphi\land\varphi \mid\nabla\varphi\mid\bullet\varphi\end{array}\] \(\square\varphi\) is read "one knows that \(\varphi\)", \(\nabla\varphi\) is read "one is _(first-order) ignorant whether \(\varphi\)_", and \(\bullet\varphi\) is read "one is _ignorant of_ (the fact that) \(\varphi\)", or "\(\varphi\) is an unknown truth". In the metaphysical setting, \(\nabla\varphi\) and \(\bullet\varphi\) are read, respectively, "it is contingent that \(\varphi\)" and "it is accidental that \(\varphi\)". Among other connectives, \(\lozenge\varphi\), \(\Delta\varphi\), and \(\circ\varphi\) abbreviate, respectively, \(\neg\square\neg\varphi\), \(\neg\nabla\varphi\), and \(\neg\bullet\varphi\), read "it is epistemically possible that \(\varphi\)", "one knows whether \(\varphi\)", and "one is non-ignorant of \(\varphi\)". Note that the forms of 'Rumsfeld ignorance (of \(\varphi\))' and 'second-order ignorance (whether \(\varphi\))' can be defined as, respectively, \(\bullet\nabla\varphi\) and \(\nabla\nabla\varphi\). The above languages are interpreted over neighborhood models. **Definition 2** (Neighborhood structures).: A _(neighborhood) model_ is a triple \(\mathcal{M}=\langle S,N,V\rangle\), where \(S\) is a nonempty set of states (also called 'points' or 'possible worlds'), \(N\) is a neighborhood function from \(S\) to \(\mathcal{P}(\mathcal{P}(S))\), and \(V\) is a valuation function. A _(neighborhood) frame_ is a model without a valuation; in this case, we say that the model is based on the frame. A _pointed model_ is a pair of a model with a point in it. Given an \(s\in S\), an element of \(N(s)\) is called a _neighborhood of \(s\)_. The following list of neighborhood properties comes from [12, Def. 3]. **Definition 3** (Neighborhood properties).: Let \(\mathcal{F}=\langle S,N\rangle\) be a frame, and \(\mathcal{M}\) be a model based on \(\mathcal{F}\). Let \(s\in S\) and \(X,Y\subseteq S\). We define various neighborhood properties as follows. * \((n)\): \(N(s)\) _contains the unit_, if \(S\in N(s)\). * \((r)\): \(N(s)\) _contains its core_, if \(\bigcap N(s)\in N(s)\).
* \((i)\): \(N(s)\) _is closed under intersections_, if \(X,Y\in N(s)\) implies \(X\cap Y\in N(s)\). * \((s)\): \(N(s)\) is _supplemented_, or _closed under supersets_, if \(X\in N(s)\) and \(X\subseteq Y\subseteq S\) implies \(Y\in N(s)\). * \((c)\): \(N(s)\) is _closed under complements_, if \(X\in N(s)\) implies \(S\backslash X\in N(s)\).7 Footnote 7: The property \((c)\) provides a new perspective for \(\mathcal{L}(\nabla)\), see [6] for details. * \((d)\): \(X\in N(s)\) implies \(S\backslash X\notin N(s)\). * \((t)\): \(X\in N(s)\) implies \(s\in X\). * \((b)\): \(s\in X\) implies \(\{u\in S\mid S\backslash X\notin N(u)\}\in N(s)\). * \((4)\): \(X\in N(s)\) implies \(\{u\in S\mid X\in N(u)\}\in N(s)\). * \((5)\): \(X\notin N(s)\) implies \(\{u\in S\mid X\notin N(u)\}\in N(s)\). The function \(N\) possesses such a property, if for all \(s\in S\), \(N(s)\) has the property. \(\mathcal{F}\) (and \(\mathcal{M}\)) has a property, if \(N\) does. In particular, we say that \(\mathcal{F}\) (and \(\mathcal{M}\)) is _monotone_, if \(N\) has \((s)\). \(\mathcal{F}\) (and \(\mathcal{M}\)) is a _quasi-filter_, if \(N\) has \((i)\) and \((s)\); \(\mathcal{F}\) (and \(\mathcal{M}\)) is a _filter_, if \(N\) also has \((n)\). Also, in what follows, we will use \(\mathbb{C}_{n}\) to denote the class of \((n)\)-models, and similarly for \(\mathbb{C}_{r}\), etc. We use \(\mathbb{C}_{\text{all}}\) for the class of all neighborhood models. **Definition 4** (Semantics).: Let \(\mathcal{M}=\langle S,N,V\rangle\) be a model. Given a pointed model \((\mathcal{M},s)\), the truth condition of formulas is defined recursively as follows: \[\begin{array}{|lcl|}\hline\mathcal{M},s\vDash p&\Longleftrightarrow&s\in V(p) \\ \mathcal{M},s\vDash\neg\varphi&\Longleftrightarrow&\mathcal{M},s\nvDash \varphi\\ \mathcal{M},s\vDash\varphi\land\psi&\Longleftrightarrow&\mathcal{M},s\vDash \varphi\text{ and }\mathcal{M},s\vDash\psi\\ \mathcal{M},s\vDash\Box\varphi&\Longleftrightarrow&\varphi^{\mathcal{M}} \in N(s)\\ \mathcal{M},s\vDash\nabla\varphi&\Longleftrightarrow&\varphi^{\mathcal{M}} \notin N(s)\text{ and }S\backslash\varphi^{\mathcal{M}}\notin N(s)\\ \mathcal{M},s\vDash\bullet\varphi&\Longleftrightarrow&\mathcal{M},s\vDash \varphi\text{ and }\varphi^{\mathcal{M}}\notin N(s)\\ \hline\end{array}\] where \(\varphi^{\mathcal{M}}\) denotes the _truth set_ of \(\varphi\) in \(\mathcal{M}\), in symbols, \(\varphi^{\mathcal{M}}=\{s\in S\mid\mathcal{M},s\vDash\varphi\}\); given a set \(X\subseteq S\), \(S\backslash X\) denotes the complement of \(X\) with respect to \(S\). We say that \(\varphi\) is _true_ in \((\mathcal{M},s)\), if \(\mathcal{M},s\vDash\varphi\); we say that \(\varphi\) is _valid_ on a model \(\mathcal{M}\), notation: \(\mathcal{M}\vDash\varphi\), if for all \(s\) in \(\mathcal{M}\), we have \(\mathcal{M},s\vDash\varphi\); we say that \(\varphi\) is _valid_ on a frame \(\mathcal{F}\), notation: \(\mathcal{F}\vDash\varphi\), if for all \(\mathcal{M}\) based on \(\mathcal{F}\), we have \(\mathcal{M}\vDash\varphi\); we say that \(\varphi\) is _valid_ over a class \(\mathbb{F}\) of frames, notation: \(\mathbb{F}\vDash\varphi\), if for all \(\mathcal{F}\) in \(\mathbb{F}\), we have \(\mathcal{F}\vDash\varphi\); we say that \(\varphi\) is _satisfiable_ over the class \(\mathbb{F}\), if \(\mathbb{F}\nvDash\neg\varphi\). Similar notions apply to sets of formulas.
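Definition 4 is directly machine-checkable on finite models, which is convenient for verifying the small countermodels used later in the paper. The following is a minimal Python sketch of our own (not from the paper): a model is a triple of a frozenset of worlds, a map \(N\) from worlds to sets of neighborhoods, and a valuation; formulas are nested tuples over string atoms.

```python
def truth_set(M, phi):
    """Extension of phi in the finite neighborhood model M = (worlds, N, V),
    following Definition 4. Formulas: atoms (strings), ('not', f),
    ('and', f, g), ('box', f), ('nabla', f), ('bullet', f)."""
    worlds, N, V = M
    if isinstance(phi, str):                      # propositional variable
        return frozenset(V.get(phi, ()))
    op = phi[0]
    if op == 'not':
        return worlds - truth_set(M, phi[1])
    if op == 'and':
        return truth_set(M, phi[1]) & truth_set(M, phi[2])
    ext = truth_set(M, phi[1])
    comp = worlds - ext
    if op == 'box':                               # Box: ext is a neighborhood of s
        return frozenset(s for s in worlds if ext in N[s])
    if op == 'nabla':                             # Nabla: neither ext nor its complement
        return frozenset(s for s in worlds if ext not in N[s] and comp not in N[s])
    if op == 'bullet':                            # Bullet: phi true at s, ext not a neighborhood
        return frozenset(s for s in worlds if s in ext and ext not in N[s])
    raise ValueError(f'unknown operator {op!r}')

fs = frozenset
worlds = fs({0, 1})
N = {0: {fs(), fs({0, 1})}, 1: {fs({0}), fs({1})}}  # both closed under complements, i.e. (c)
V = {'p': fs({0})}
M = (worlds, N, V)
assert truth_set(M, ('bullet', 'p')) == truth_set(M, ('and', 'p', ('nabla', 'p'))) == fs({0})
# Rumsfeld ignorance is ('bullet', ('nabla', 'p')); second-order is ('nabla', ('nabla', 'p')).
```

The assert instantiates, on a two-world \((c)\)-model, the validity \(\bullet\varphi\leftrightarrow(\varphi\wedge\nabla\varphi)\) established for \((c)\)-models in Prop. 10 below.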
For the sake of reference, we also list the semantics of the aforementioned defined modalities as follows: \[\begin{array}{lcl}\mathcal{M},s\vDash\Diamond\varphi&\Longleftrightarrow&S \backslash\varphi^{\mathcal{M}}\notin N(s)\\ \mathcal{M},s\vDash\Delta\varphi&\Longleftrightarrow&\varphi^{\mathcal{M}} \in N(s)\text{ or }S\backslash\varphi^{\mathcal{M}}\in N(s)\\ \mathcal{M},s\vDash\circ\varphi&\Longleftrightarrow&\mathcal{M},s\vDash \varphi\text{ implies }\varphi^{\mathcal{M}}\in N(s).\end{array}\] ## 3 Expressivity In this section, we compare the relative expressivity of \(\mathcal{L}(\nabla,\bullet)\) and the other languages introduced before, over various classes of neighborhood models. Some expressivity results over the class of relational models have been obtained in [7] and [9]. To make our presentation self-contained, we introduce some necessary technical terms. **Definition 5**.: Let \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) be two languages that are interpreted on the same class of models \(\mathbb{C}\), where \(\mathbb{C}\) ranges over classes of models which are models for \(\mathcal{L}_{1}\) and for \(\mathcal{L}_{2}\). * \(\mathcal{L}_{2}\) is _at least as expressive as \(\mathcal{L}_{1}\) over \(\mathbb{C}\)_, notation: \(\mathcal{L}_{1}\preceq\mathcal{L}_{2}[\mathbb{C}]\), if for all \(\varphi\in\mathcal{L}_{1}\), there exists \(\psi\in\mathcal{L}_{2}\) such that for all \(\mathcal{M}\in\mathbb{C}\) and all \(s\) in \(\mathcal{M}\), we have that \(\mathcal{M},s\vDash\varphi\) iff \(\mathcal{M},s\vDash\psi\). * \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) are _equally expressive over \(\mathbb{C}\)_, notation: \(\mathcal{L}_{1}\equiv\mathcal{L}_{2}[\mathbb{C}]\), if \(\mathcal{L}_{1}\preceq\mathcal{L}_{2}[\mathbb{C}]\) and \(\mathcal{L}_{2}\preceq\mathcal{L}_{1}[\mathbb{C}]\). * \(\mathcal{L}_{1}\) is _less expressive than \(\mathcal{L}_{2}\) over \(\mathbb{C}\)_, notation: \(\mathcal{L}_{1}\prec\mathcal{L}_{2}[\mathbb{C}]\), if \(\mathcal{L}_{1}\preceq\mathcal{L}_{2}[\mathbb{C}]\) but \(\mathcal{L}_{2}\not\preceq\mathcal{L}_{1}[\mathbb{C}]\). * \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) are _incomparable in expressivity over \(\mathbb{C}\)_, notation: \(\mathcal{L}_{1}\asymp\mathcal{L}_{2}[\mathbb{C}]\), if \(\mathcal{L}_{1}\not\preceq\mathcal{L}_{2}[\mathbb{C}]\) and \(\mathcal{L}_{2}\not\preceq\mathcal{L}_{1}[\mathbb{C}]\). It turns out that over the class of \((c)\)-models and the class of \((t)\)-models, \(\mathcal{L}(\nabla)\) is at least as expressive as \(\mathcal{L}(\bullet)\) (Prop. 10 and Prop. 11), whereas \(\mathcal{L}(\nabla)\) is _not_ at least as expressive as \(\mathcal{L}(\bullet)\) over the class of models possessing either of the other eight neighborhood properties (Prop. 6-Prop. 8). **Proposition 6**.: \(\mathcal{L}(\bullet)\not\preceq\mathcal{L}(\nabla)[\mathbb{C}]\)_, where \(\mathbb{C}\in\{\mathbb{C}_{\text{all}},\mathbb{C}_{r},\mathbb{C}_{i},\mathbb{C}_{s},\mathbb{C}_{d}\}\)._ Proof.: Consider the following models, which come from [12, Prop. 2]. An arrow from a state \(x\) to a set \(X\) means that \(X\) is a neighborhood of \(x\) (idem for other arrows). It has been shown in [12, Prop. 2] that both \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) satisfy \((r)\), \((i)\), \((s)\) and \((d)\), and that \((\mathcal{M},s)\) and \((\mathcal{M}^{\prime},s^{\prime})\) cannot be distinguished by \(\mathcal{L}(\nabla)\). However, both pointed models can be distinguished by an \(\mathcal{L}(\bullet)\)-formula.
To see this, note that \(p^{\mathcal{M}}=\{s\}\) and \(\{s\}\notin N(s)\), and thus \(\mathcal{M},s\vDash\bullet p\), whereas \(\mathcal{M}^{\prime},s^{\prime}\nvDash\bullet p\), as \(p^{\mathcal{M}^{\prime}}=\{s^{\prime},t^{\prime}\}\in N^{\prime}(s^{\prime})\). **Proposition 7**.: \(\mathcal{L}(\bullet)\not\preceq\mathcal{L}(\nabla)[\mathbb{C}]\)_, where \(\mathbb{C}\in\{\mathbb{C}_{n},\mathbb{C}_{b}\}\)._ Proof.: Consider the following models, which come from [12, Prop. 3]: It has been shown in [12, Prop. 3] that both \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) satisfy \((n)\) and \((b)\), and that \((\mathcal{M},s)\) and \((\mathcal{M}^{\prime},s^{\prime})\) cannot be distinguished by \(\mathcal{L}(\nabla)\). However, both pointed models can be distinguished by an \(\mathcal{L}(\bullet)\)-formula. To see this, note that \(p^{\mathcal{M}}=\{s\}\) and \(\{s\}\notin N(s)\), and thus \(\mathcal{M},s\vDash\bullet p\), whereas \(\mathcal{M}^{\prime},s^{\prime}\nvDash\bullet p\), as \(p^{\mathcal{M}^{\prime}}=\{s^{\prime},t^{\prime}\}\in N^{\prime}(s^{\prime})\). **Proposition 8**.: \(\mathcal{L}(\bullet)\not\preceq\mathcal{L}(\nabla)[\mathbb{C}]\)_, where \(\mathbb{C}\in\{\mathbb{C}_{4},\mathbb{C}_{5}\}\)._ Proof.: Consider the following models, which are a revision of the figures in [12, Prop. 4]: Firstly, \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) satisfy \((4)\) and \((5)\). In what follows we only show the claim for \(\mathcal{M}\); the proof for the case of \(\mathcal{M}^{\prime}\) is analogous. * For \((4)\): Suppose that \(X\in N(s)\). Then \(X=\emptyset\) or \(X=\{s\}\). Notice that \(\{u\mid X\in N(u)\}=\{s\}\in N(s)\). Similarly, we can demonstrate that \((4)\) holds for \(N(t)\). * For \((5)\): Assume that \(X\notin N(s)\). Then \(X=\{t\}\) or \(X=\{s,t\}\). Notice that \(\{u\mid X\notin N(u)\}=\{s\}\in N(s)\). A similar argument goes for \(N(t)\). Secondly, \((\mathcal{M},s)\) and \((\mathcal{M}^{\prime},s^{\prime})\) cannot be distinguished by \(\mathcal{L}(\nabla)\); that is to say, for all \(\varphi\in\mathcal{L}(\nabla)\), we have that \(\mathcal{M},s\vDash\varphi\) iff \(\mathcal{M}^{\prime},s^{\prime}\vDash\varphi\). The proof goes by induction on \(\varphi\), where the only nontrivial case is \(\nabla\varphi\).
By semantics, we have the following equivalences: \[\begin{array}{ll}&\mathcal{M},s\vDash\nabla\varphi\\ \Longleftrightarrow&\varphi^{\mathcal{M}}\notin N(s)\text{ and }(\neg\varphi)^{ \mathcal{M}}\notin N(s)\\ \Longleftrightarrow&\varphi^{\mathcal{M}}\notin\{\emptyset,\{s\}\}\text{ and }(\neg\varphi)^{\mathcal{M}}\notin\{\emptyset,\{s\}\}\\ \Longleftrightarrow&\varphi^{\mathcal{M}}\neq\emptyset\text{ and }\varphi^{\mathcal{M}}\neq\{s\}\text{ and }(\neg\varphi)^{\mathcal{M}}\neq\emptyset\text{ and }(\neg\varphi)^{\mathcal{M}}\neq\{s\}\\ \Longleftrightarrow&\varphi^{\mathcal{M}}\neq\emptyset\text{ and }\varphi^{\mathcal{M}}\neq\{s\}\text{ and }\varphi^{\mathcal{M}}\neq\{s,t\}\text{ and }\varphi^{\mathcal{M}}\neq\{t\}\\ \Longleftrightarrow&\text{false}\end{array}\] \[\begin{array}{ll}&\mathcal{M}^{\prime},s^{\prime}\vDash\nabla\varphi\\ \Longleftrightarrow&\varphi^{\mathcal{M}^{\prime}}\notin N^{\prime}(s^{ \prime})\text{ and }(\neg\varphi)^{\mathcal{M}^{\prime}}\notin N^{\prime}(s^{\prime})\\ \Longleftrightarrow&\varphi^{\mathcal{M}^{\prime}}\notin\{\{s^{\prime},t^{ \prime}\},\{s^{\prime}\}\}\text{ and }(\neg\varphi)^{\mathcal{M}^{\prime}}\notin\{\{s^{\prime},t^{\prime}\},\{s^{ \prime}\}\}\\ \Longleftrightarrow&\varphi^{\mathcal{M}^{\prime}}\neq\{s^{\prime},t^{ \prime}\}\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\{s^{\prime}\}\text{ and }(\neg\varphi)^{\mathcal{M}^{\prime}}\neq\{s^{\prime},t^{\prime}\}\text{ and }(\neg\varphi)^{\mathcal{M}^{\prime}}\neq\{s^{\prime}\}\\ \Longleftrightarrow&\text{false}\end{array}\] In either case, the penultimate line states that \(\varphi\) cannot be interpreted on the related model: its denotation would have to differ from _every_ subset of the two-element domain, which is impossible. We conclude that \(\mathcal{M},s\vDash\nabla\varphi\) iff \(\mathcal{M}^{\prime},s^{\prime}\vDash\nabla\varphi\) (both sides being false). Finally, we show that \((\mathcal{M},s)\) and \((\mathcal{M}^{\prime},s^{\prime})\) can be distinguished by \(\mathcal{L}(\bullet)\). To see this, note that \((\neg p)^{\mathcal{M}}=\{s,t\}\notin N(s)\), and thus \(\mathcal{M},s\vDash\bullet\neg p\). However, since \((\neg p)^{\mathcal{M}^{\prime}}=\{s^{\prime},t^{\prime}\}\in N^{\prime}(s^{ \prime})\), we have \(\mathcal{M}^{\prime},s^{\prime}\nvDash\bullet\neg p\). **Remark 9**.: The reader may ask whether the figure in [12, Prop. 4] (as below) applies to the above proposition. The answer is negative. This is because the pointed models \((\mathcal{M},s)\) and \((\mathcal{M}^{\prime},s^{\prime})\) in this figure cannot be distinguished by \(\mathcal{L}(\bullet)\) either. To see this, note that \(\mathcal{M},s\vDash\bullet\varphi\) iff \(\mathcal{M},s\vDash\varphi\) and \(\varphi^{\mathcal{M}}\notin N(s)\), which by the construction of \(N(s)\) implies that \(s\in\varphi^{\mathcal{M}}\) and \(\varphi^{\mathcal{M}}\neq\{s\}\) and \(\varphi^{\mathcal{M}}\neq\{s,t\}\), which is impossible. It then follows that \(\mathcal{M},s\nvDash\bullet\varphi\). A similar argument can show that \(\mathcal{M}^{\prime},s^{\prime}\nvDash\bullet\varphi\). Therefore, \(\mathcal{M},s\vDash\bullet\varphi\) iff \(\mathcal{M}^{\prime},s^{\prime}\vDash\bullet\varphi\). **Proposition 10**.: \(\mathcal{L}(\bullet)\preceq\mathcal{L}(\nabla)[\mathbb{C}_{c}]\)_._ Proof.: It suffices to show that \(\bullet\varphi\leftrightarrow(\varphi\wedge\nabla\varphi)\) is valid over the class of \((c)\)-models. Let \(\mathcal{M}=\langle S,N,V\rangle\) be a \((c)\)-model and \(s\in S\).
Suppose that \(\mathcal{M},s\vDash\bullet\varphi\); it remains only to prove that \(\mathcal{M},s\vDash\varphi\wedge\nabla\varphi\). By supposition, we have \(\mathcal{M},s\vDash\varphi\) and \(\varphi^{\mathcal{M}}\notin N(s)\). We also have \(S\backslash\varphi^{\mathcal{M}}\notin N(s)\): otherwise, by \((c)\), \(S\backslash(S\backslash\varphi^{\mathcal{M}})\in N(s)\), that is, \(\varphi^{\mathcal{M}}\in N(s)\): a contradiction. Thus \(\mathcal{M},s\vDash\nabla\varphi\), and therefore \(\mathcal{M},s\vDash\varphi\wedge\nabla\varphi\). The converse is clear from the semantics. **Proposition 11**.: \(\mathcal{L}(\bullet)\preceq\mathcal{L}(\nabla)[\mathbb{C}_{t}]\)_._ Proof.: It suffices to show that \(\bullet\varphi\leftrightarrow(\varphi\wedge\nabla\varphi)\) is valid over the class of \((t)\)-models. The proof is almost the same as that in Prop. 10, except that \(S\backslash\varphi^{\mathcal{M}}\notin N(s)\) (that is, \((\neg\varphi)^{\mathcal{M}}\notin N(s)\)) is obtained from \(\mathcal{M},s\vDash\varphi\) and the property \((t)\). Conversely, on the class of \((c)\)-models and the class of \((t)\)-models, \(\mathcal{L}(\bullet)\) is at least as expressive as \(\mathcal{L}(\nabla)\) (Prop. 15 and Prop. 16), whereas on the class of models possessing either of the other eight neighborhood properties, \(\mathcal{L}(\bullet)\) is _not_ at least as expressive as \(\mathcal{L}(\nabla)\) (Prop. 12-Prop. 14). As a corollary, on the class of \((c)\)-models and the class of \((t)\)-models, \(\mathcal{L}(\nabla)\), \(\mathcal{L}(\bullet)\), and \(\mathcal{L}(\nabla,\bullet)\) are equally expressive, whereas over the class of models possessing either of the eight neighborhood properties in question, \(\mathcal{L}(\nabla)\) and \(\mathcal{L}(\bullet)\) are both less expressive than \(\mathcal{L}(\nabla,\bullet)\) (Coro. 17). **Proposition 12**.: \(\mathcal{L}(\nabla)\not\preceq\mathcal{L}(\bullet)[\mathbb{C}]\)_, where \(\mathbb{C}\in\{\mathbb{C}_{\mathrm{all}},\mathbb{C}_{n},\mathbb{C}_{r},\mathbb{C}_{i},\mathbb{C}_{s},\mathbb{C}_{d},\mathbb{C}_{b}\}\)._ Proof.: Consider the following models: It is straightforward to check that both \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) satisfy \((n)\), \((r)\), \((i)\), \((s)\), and \((d)\). In what follows, we show that \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) both have the property \((b)\). * For \(\mathcal{M}\): suppose that \(s\in X\). Then \(X=\{s\}\) or \(X=\{s,t\}\). This implies that \(\{u\mid S\backslash X\notin N(u)\}=\{s,t\}\in N(s)\). Similarly, we can show that \((b)\) holds for \(N(t)\). * For \(\mathcal{M}^{\prime}\): assume that \(s^{\prime}\in X\). Then \(X=\{s^{\prime}\}\) or \(X=\{s^{\prime},t^{\prime}\}\). If \(X=\{s^{\prime}\}\), then \(\{u\mid S^{\prime}\backslash X\notin N^{\prime}(u)\}=\{t^{\prime}\}\in N^{\prime}(s^{\prime})\); if \(X=\{s^{\prime},t^{\prime}\}\), then \(\{u\mid S^{\prime}\backslash X\notin N^{\prime}(u)\}=\{s^{\prime},t^{\prime}\}\in N^{\prime}(s^{\prime})\). Now assume that \(t^{\prime}\in X\). Then \(X=\{t^{\prime}\}\) or \(X=\{s^{\prime},t^{\prime}\}\). If \(X=\{t^{\prime}\}\), then \(\{u\mid S^{\prime}\backslash X\notin N^{\prime}(u)\}=\{s^{\prime},t^{\prime}\}\in N^{\prime}(t^{\prime})\); if \(X=\{s^{\prime},t^{\prime}\}\), we can also show that \(\{u\mid S^{\prime}\backslash X\notin N^{\prime}(u)\}=\{s^{\prime},t^{\prime}\}\in N^{\prime}(t^{\prime})\). Moreover, \((\mathcal{M},s)\) and \((\mathcal{M}^{\prime},s^{\prime})\) cannot be distinguished by \(\mathcal{L}(\bullet)\).
Here we use the notion of \(\bullet\)-morphisms introduced in [11, Def. 4.1].8 Define a function \(f:S\to S^{\prime}\) such that \(f(s)=s^{\prime}\) and \(f(t)=t^{\prime}\). We prove that \(f\) is a \(\bullet\)-morphism from \(\mathcal{M}\) to \(\mathcal{M}^{\prime}\). The condition (Var) follows directly from the valuations. For the condition (\(\bullet\)-Mor), we first prove that it holds for \(s\): assume that \(s\in f^{-1}[X^{\prime}]\) and \(f^{-1}[X^{\prime}]\notin N(s)\), then it must be that \(X^{\prime}=\{s^{\prime}\}\). Then we have \(f(s)=s^{\prime}\in X^{\prime}\) and \(X^{\prime}\notin N^{\prime}(f(s))\). The converse is similar. In a similar way, we can show that (\(\bullet\)-Mor) also holds for \(t\). Then by [11, Prop. 4.1] (see also fn. 8), we have \(\mathcal{M},s\vDash\varphi\) iff \(\mathcal{M}^{\prime},s^{\prime}\vDash\varphi\) for all \(\varphi\in\mathcal{L}(\bullet)\). Footnote 8: Recall that the notion of \(\bullet\)-morphisms is defined as follows. Let \(\mathcal{M}=\langle S,N,V\rangle\) and \(\mathcal{M}^{\prime}=\langle S^{\prime},N^{\prime},V^{\prime}\rangle\) be neighborhood models. A function \(f:S\to S^{\prime}\) is a \(\bullet\)-morphism from \(\mathcal{M}\) to \(\mathcal{M}^{\prime}\), if for all \(s\in S\), (Var) \(s\in V(p)\) iff \(f(s)\in V^{\prime}(p)\) for all \(p\in\textbf{P}\), (\(\bullet\)-Mor) for all \(X^{\prime}\subseteq S^{\prime}\), \([s\in f^{-1}[X^{\prime}]\) and \(f^{-1}[X^{\prime}]\notin N(s)]\Longleftrightarrow[f(s)\in X^{\prime}\) and \(X^{\prime}\notin N^{\prime}(f(s))]\). It is then demonstrated in [11, Prop. 4.1] that the formulas of \(\mathcal{L}(\bullet)\) are invariant under \(\bullet\)-morphisms. In detail, let \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) be neighborhood models, and let \(f\) be a \(\bullet\)-morphism from \(\mathcal{M}\) to \(\mathcal{M}^{\prime}\). Then for all \(s\in S\), for all \(\varphi\in\mathcal{L}(\bullet)\), we have that \(\mathcal{M},s\vDash\varphi\) iff \(\mathcal{M}^{\prime},f(s)\vDash\varphi\). However, these pointed models can be distinguished by \(\mathcal{L}(\nabla)\). This is because \(\mathcal{M},s\vDash\nabla p\) (as \(p^{\mathcal{M}}=\{t\}\notin N(s)\) and \((\neg p)^{\mathcal{M}}=\{s\}\notin N(s)\)) and \(\mathcal{M}^{\prime},s^{\prime}\nvDash\nabla p\) (as \(p^{\mathcal{M}^{\prime}}=\{t^{\prime}\}\in N^{\prime}(s^{\prime})\)). **Proposition 13**.: \(\mathcal{L}(\nabla)\not\preceq\mathcal{L}(\bullet)[\mathbb{C}_{4}]\)_._ Proof.: Consider the following models: Firstly, both \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) have \((4)\). * For \(\mathcal{M}\): Suppose that \(X\in N(s)\). Then \(X=\{s,t\}\), and so \(\{u\mid X\in N(u)\}=\{s,t\}\in N(s)\). Now assume that \(X\in N(t)\). Then \(X=\{t\}\) or \(X=\{s,t\}\). If \(X=\{t\}\), then \(\{u\mid X\in N(u)\}=\{t\}\in N(t)\); if \(X=\{s,t\}\), then \(\{u\mid X\in N(u)\}=\{s,t\}\in N(t)\). * For \(\mathcal{M}^{\prime}\): Suppose that \(X\in N^{\prime}(s^{\prime})\). Then \(X=\{t^{\prime}\}\) or \(X=\{s^{\prime},t^{\prime}\}\). Either case implies that \(\{u\mid X\in N^{\prime}(u)\}=\{s^{\prime},t^{\prime}\}\in N^{\prime}(s^{\prime})\). Now assume that \(X\in N^{\prime}(t^{\prime})\). Then \(X=\{t^{\prime}\}\) or \(X=\{s^{\prime},t^{\prime}\}\). Again, either case implies that \(\{u\mid X\in N^{\prime}(u)\}=\{s^{\prime},t^{\prime}\}\in N^{\prime}(t^{\prime})\). Secondly, similar to the proof of the corresponding part in Prop.
12, we can show that \((\mathcal{M},s)\) and \((\mathcal{M}^{\prime},s^{\prime})\) cannot be distinguished by \(\mathcal{L}(\bullet)\). It remains only to show that \((\mathcal{M},s)\) and \((\mathcal{M}^{\prime},s^{\prime})\) can be distinguished by \(\mathcal{L}(\nabla)\). The proof for this is analogous to that in Prop. 12. **Proposition 14**.: \(\mathcal{L}(\nabla)\not\preceq\mathcal{L}(\bullet)[\mathbb{C}_{5}]\)_._ Proof.: Consider the following models: Firstly, both \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) possess the property \((5)\). Since for all \(X\subseteq S=\{s,t\}\), \(X\in N(t)\), the property \((5)\) is possessed vacuously by \(N(t)\). Similarly, \((5)\) is also possessed vacuously by \(N^{\prime}(t^{\prime})\). It remains only to show that both \(N(s)\) and \(N^{\prime}(s^{\prime})\) have \((5)\). * For \(N(s)\): suppose that \(X\notin N(s)\), then \(X=\emptyset\) or \(X=\{s,t\}\). Either case implies that \(\{u\in S\mid X\notin N(u)\}=\{s\}\in N(s)\). * For \(N^{\prime}(s^{\prime})\): assume that \(X\notin N^{\prime}(s^{\prime})\), then \(X=\{s^{\prime},t^{\prime}\}\). It follows that \(\{u\in S^{\prime}\mid X\notin N^{\prime}(u)\}=\{s^{\prime}\}\in N^{\prime}(s^{\prime})\). Secondly, \((\mathcal{M},s)\) and \((\mathcal{M}^{\prime},s^{\prime})\) cannot be distinguished by \(\mathcal{L}(\bullet)\). Again, this can be shown as in the corresponding part of Prop. 12. Finally, \((\mathcal{M},s)\) and \((\mathcal{M}^{\prime},s^{\prime})\) can be distinguished by \(\mathcal{L}(\nabla)\). On one hand, \(p^{\mathcal{M}}=\{s,t\}\notin N(s)\) and \(S\backslash p^{\mathcal{M}}=\emptyset\notin N(s)\), thus \(\mathcal{M},s\vDash\nabla p\). On the other hand, \(S^{\prime}\backslash p^{\mathcal{M}^{\prime}}=\emptyset\in N^{\prime}(s^{\prime})\), thus \(\mathcal{M}^{\prime},s^{\prime}\nvDash\nabla p\). **Proposition 15**.: \(\mathcal{L}(\nabla)\preceq\mathcal{L}(\bullet)[\mathbb{C}_{c}]\)_._ Proof.: We claim that over the class of \((c)\)-models, \(\vDash\nabla\varphi\leftrightarrow\bullet\varphi\vee\bullet\neg\varphi\). First, we show that \(\vDash\nabla\varphi\rightarrow\bullet\varphi\vee\bullet\neg\varphi\). Let \(\mathcal{M}=\langle S,N,V\rangle\) be any model and \(s\in S\). Suppose that \(\mathcal{M},s\vDash\nabla\varphi\). Then \(\varphi^{\mathcal{M}}\notin N(s)\) and \(S\backslash\varphi^{\mathcal{M}}\notin N(s)\), that is, \((\neg\varphi)^{\mathcal{M}}\notin N(s)\). We have either \(\mathcal{M},s\vDash\varphi\) or \(\mathcal{M},s\vDash\neg\varphi\). If \(\mathcal{M},s\vDash\varphi\), since \(\varphi^{\mathcal{M}}\notin N(s)\), we infer that \(\mathcal{M},s\vDash\bullet\varphi\); if \(\mathcal{M},s\vDash\neg\varphi\), since \((\neg\varphi)^{\mathcal{M}}\notin N(s)\), we derive that \(\mathcal{M},s\vDash\bullet\neg\varphi\). Therefore, \(\mathcal{M},s\vDash\bullet\varphi\vee\bullet\neg\varphi\). Since \((\mathcal{M},s)\) is arbitrary, this establishes the validity of \(\nabla\varphi\rightarrow\bullet\varphi\vee\bullet\neg\varphi\). Conversely, we prove that over the class of \((c)\)-models, \(\vDash\bullet\varphi\vee\bullet\neg\varphi\rightarrow\nabla\varphi\). Let \(\mathcal{M}=\langle S,N,V\rangle\) be a \((c)\)-model and \(s\in S\). Assume that \(\mathcal{M},s\vDash\bullet\varphi\vee\bullet\neg\varphi\). Then \(\mathcal{M},s\vDash\bullet\varphi\) or \(\mathcal{M},s\vDash\bullet\neg\varphi\). If \(\mathcal{M},s\vDash\bullet\varphi\), then \(\varphi^{\mathcal{M}}\notin N(s)\).
By \((c)\), we have \(S\backslash\varphi^{\mathcal{M}}\notin N(s)\), so \(\mathcal{M},s\vDash\nabla\varphi\). If \(\mathcal{M},s\vDash\bullet\neg\varphi\), then \((\neg\varphi)^{\mathcal{M}}\notin N(s)\), namely \(S\backslash\varphi^{\mathcal{M}}\notin N(s)\). By \((c)\) again, we obtain \(\varphi^{\mathcal{M}}\notin N(s)\), and thus \(\mathcal{M},s\vDash\nabla\varphi\). Therefore, in either case, \(\mathcal{M},s\vDash\nabla\varphi\). **Proposition 16**.: \(\mathcal{L}(\nabla)\preceq\mathcal{L}(\bullet)[\mathbb{C}_{t}]\)_._ Proof.: We claim that over the class of \((t)\)-models, \(\vDash\nabla\varphi\leftrightarrow\bullet\varphi\vee\bullet\neg\varphi\). The proof is almost the same as that in Prop. 15, except that in the proof of the validity of \(\bullet\varphi\vee\bullet\neg\varphi\rightarrow\nabla\varphi\), \(\mathcal{M},s\vDash\nabla\varphi\) is obtained as follows: if \(\mathcal{M},s\vDash\bullet\varphi\), then \(\mathcal{M},s\vDash\varphi\) and \(\varphi^{\mathcal{M}}\notin N(s)\), thus \(\mathcal{M},s\nvDash\neg\varphi\), namely \(s\notin(\neg\varphi)^{\mathcal{M}}\), and then by \((t)\), we infer that \((\neg\varphi)^{\mathcal{M}}\notin N(s)\), namely \(S\backslash\varphi^{\mathcal{M}}\notin N(s)\), so \(\mathcal{M},s\vDash\nabla\varphi\); similarly, we can show that if \(\mathcal{M},s\vDash\bullet\neg\varphi\) then \(\mathcal{M},s\vDash\nabla\varphi\). With the above results in mind, we have the following result, which extends the expressivity results over Kripke models in [7]. **Corollary 17**.: Where \(\mathbb{C}\in\{\mathbb{C}_{\text{all}},\mathbb{C}_{n},\mathbb{C}_{r},\mathbb{C}_{i},\mathbb{C}_{s},\mathbb{C}_{d},\mathbb{C}_{b},\mathbb{C}_{4},\mathbb{C}_{5}\}\), \(\mathcal{L}(\nabla)\asymp\mathcal{L}(\bullet)[\mathbb{C}]\), and \(\mathcal{L}\prec\mathcal{L}(\nabla,\bullet)[\mathbb{C}]\), where \(\mathcal{L}\in\{\mathcal{L}(\nabla),\mathcal{L}(\bullet)\}\). Where \(\mathbb{C}\in\{\mathbb{C}_{c},\mathbb{C}_{t}\}\), \(\mathcal{L}_{1}\equiv\mathcal{L}_{2}[\mathbb{C}]\), where \(\mathcal{L}_{1},\mathcal{L}_{2}\in\{\mathcal{L}(\nabla),\mathcal{L}(\bullet),\mathcal{L}(\nabla,\bullet)\}\). Moreover, over the class of \((c)\)-models and the class of \((t)\)-models, \(\mathcal{L}(\nabla,\bullet)\) and \(\mathcal{L}(\square)\) are equally expressive (Prop. 20), whereas over each of the eight classes \(\mathbb{C}_{\text{all}},\mathbb{C}_{n},\mathbb{C}_{r},\mathbb{C}_{i},\mathbb{C}_{s},\mathbb{C}_{b},\mathbb{C}_{4},\mathbb{C}_{5}\) (thus excepting \(\mathbb{C}_{d}\)), \(\mathcal{L}(\nabla,\bullet)\) is less expressive than \(\mathcal{L}(\square)\) (Prop. 18 and Prop. 19). **Proposition 18**.: \(\mathcal{L}(\nabla,\bullet)\prec\mathcal{L}(\square)[\mathbb{C}]\)_, where \(\mathbb{C}\in\{\mathbb{C}_{\text{all}},\mathbb{C}_{r},\mathbb{C}_{i},\mathbb{C}_{4},\mathbb{C}_{5}\}\)._ Proof.: Use Remark 9 and [12, Prop. 4]. **Proposition 19**.: \(\mathcal{L}(\nabla,\bullet)\prec\mathcal{L}(\square)[\mathbb{C}]\)_, where \(\mathbb{C}\in\{\mathbb{C}_{n},\mathbb{C}_{s},\mathbb{C}_{b}\}\)._ Proof.: Consider the following models \(\mathcal{M}=\langle S,N,V\rangle\) and \(\mathcal{M}^{\prime}=\langle S^{\prime},N^{\prime},V^{\prime}\rangle\), where \(S=\{s,t\}\) and \(S^{\prime}=\{s^{\prime},t^{\prime}\}\).
(The diagrams of \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) are omitted here; as the argument below shows, \(N(s)\) consists of all four subsets of \(S\), \(N^{\prime}(s^{\prime})=\{\{s^{\prime},t^{\prime}\},\{s^{\prime}\}\}\), and \(V(p)=V^{\prime}(p)=\emptyset\).)
Next, we show \((\mathcal{M},s)\) and \((\mathcal{M}^{\prime},s^{\prime})\) cannot be distinguished by \(\mathcal{L}(\nabla,\bullet)\). That is, for all \(\varphi\in\mathcal{L}(\nabla,\bullet)\), we have \(\mathcal{M},s\vDash\varphi\) iff \(\mathcal{M}^{\prime},s^{\prime}\vDash\varphi\). The proof proceeds by induction on \(\varphi\), where the nontrivial cases are \(\nabla\varphi\) and \(\bullet\varphi\). The proof for the case \(\bullet\varphi\) is shown as in Remark 9. For the case \(\nabla\varphi\), we have the following equivalences. \[\begin{array}{ll}&\mathcal{M},s\vDash\nabla\varphi\\ \Longleftrightarrow&\varphi^{\mathcal{M}}\notin N(s)\text{ and }S\backslash\varphi^{\mathcal{M}}\notin N(s)\\ \Longleftrightarrow&\varphi^{\mathcal{M}}\neq\emptyset\text{ and }\varphi^{\mathcal{M}}\neq\{s,t\}\text{ and }\varphi^{\mathcal{M}}\neq\{s\}\text{ and }\varphi^{\mathcal{M}}\neq\{t\}\\ \Longleftrightarrow&\text{false}\end{array}\] \[\begin{array}{ll}&\mathcal{M}^{\prime},s^{\prime}\vDash\nabla\varphi\\ \Longleftrightarrow&\varphi^{\mathcal{M}^{\prime}}\notin N^{\prime}(s^{\prime})\text{ and }S^{\prime}\backslash\varphi^{\mathcal{M}^{\prime}}\notin N^{\prime}(s^{\prime})\\ \Longleftrightarrow&\varphi^{\mathcal{M}^{\prime}}\notin\{\{s^{\prime},t^{\prime}\},\{s^{\prime}\}\}\text{ and }S^{\prime}\backslash\varphi^{\mathcal{M}^{\prime}}\notin\{\{s^{\prime},t^{\prime}\},\{s^{\prime}\}\}\\ \Longleftrightarrow&\varphi^{\mathcal{M}^{\prime}}\neq\{s^{\prime},t^{\prime}\}\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\{s^{\prime}\}\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\emptyset\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\{t^{\prime}\}\\ \Longleftrightarrow&\text{false}\end{array}\] In either case, the penultimate line of the equivalences says that the denotation of \(\varphi\) is none of the possible subsets of the domain, which is impossible. We therefore conclude that \(\mathcal{M},s\vDash\nabla\varphi\) iff \(\mathcal{M}^{\prime},s^{\prime}\vDash\nabla\varphi\). Finally, \((\mathcal{M},s)\) and \((\mathcal{M}^{\prime},s^{\prime})\) can be distinguished by \(\mathcal{L}(\square)\). To see this, note that \(p^{\mathcal{M}}=\emptyset\in N(s)\), thus \(\mathcal{M},s\vDash\square p\); however, \(p^{\mathcal{M}^{\prime}}=\emptyset\notin N^{\prime}(s^{\prime})\), and thus \(\mathcal{M}^{\prime},s^{\prime}\nvDash\square p\). **Proposition 20**.: \(\mathcal{L}(\nabla,\bullet)\equiv\mathcal{L}(\square)[\mathbb{C}]\), where \(\mathbb{C}\in\{\mathbb{C}_{c},\mathbb{C}_{t}\}\). Proof.: Straightforward from \(\mathcal{L}(\nabla,\bullet)\equiv\mathcal{L}(\nabla)[\mathbb{C}]\) (see Coro. 17) and \(\mathcal{L}(\nabla)\equiv\mathcal{L}(\square)[\mathbb{C}]\)[12, Prop. 5, Prop. 6], where \(\mathbb{C}\in\{\mathbb{C}_{c},\mathbb{C}_{t}\}\). We do not know whether \(\mathcal{L}(\nabla,\bullet)\) is less expressive than \(\mathcal{L}(\square)\) over the class of \((d)\)-models. We conjecture the answer is positive. We leave it for future work. We summarize the results in this section as follows. \[\begin{array}{ll}\mathcal{L}(\nabla)\asymp\mathcal{L}(\bullet)[\mathbb{C}],\text{ where }\mathbb{C}\in\{\mathbb{C}_{\text{all}},\mathbb{C}_{n},\mathbb{C}_{r},\mathbb{C}_{i},\mathbb{C}_{s},\mathbb{C}_{d},\mathbb{C}_{b},\mathbb{C}_{4},\mathbb{C}_{5}\}&\text{(Coro. 17)}\\ \mathcal{L}(\nabla)\equiv\mathcal{L}(\bullet)[\mathbb{C}],\text{ where }\mathbb{C}\in\{\mathbb{C}_{c},\mathbb{C}_{t}\}&\text{(Coro. 17)}\\ \mathcal{L}(\nabla)\prec\mathcal{L}(\nabla,\bullet)[\mathbb{C}],\text{ where }\mathbb{C}\in\{\mathbb{C}_{\text{all}},\mathbb{C}_{n},\mathbb{C}_{r},\mathbb{C}_{i},\mathbb{C}_{s},\mathbb{C}_{d},\mathbb{C}_{b},\mathbb{C}_{4},\mathbb{C}_{5}\}&\text{(Coro. 17)}\\ \mathcal{L}(\bullet)\prec\mathcal{L}(\nabla,\bullet)[\mathbb{C}],\text{ where }\mathbb{C}\in\{\mathbb{C}_{\text{all}},\mathbb{C}_{n},\mathbb{C}_{r},\mathbb{C}_{i},\mathbb{C}_{s},\mathbb{C}_{d},\mathbb{C}_{b},\mathbb{C}_{4},\mathbb{C}_{5}\}&\text{(Coro. 17)}\\ \mathcal{L}(\nabla)\equiv\mathcal{L}(\nabla,\bullet)[\mathbb{C}],\text{ where }\mathbb{C}\in\{\mathbb{C}_{c},\mathbb{C}_{t}\}&\text{(Coro. 17)}\\ \mathcal{L}(\bullet)\equiv\mathcal{L}(\nabla,\bullet)[\mathbb{C}],\text{ where }\mathbb{C}\in\{\mathbb{C}_{c},\mathbb{C}_{t}\}&\text{(Coro. 17)}\\ \mathcal{L}(\nabla,\bullet)\prec\mathcal{L}(\square)[\mathbb{C}],\text{ where }\mathbb{C}\in\{\mathbb{C}_{\text{all}},\mathbb{C}_{n},\mathbb{C}_{r},\mathbb{C}_{i},\mathbb{C}_{s},\mathbb{C}_{b},\mathbb{C}_{4},\mathbb{C}_{5}\}&\text{(Props. 18, 19)}\\ \mathcal{L}(\nabla,\bullet)\equiv\mathcal{L}(\square)[\mathbb{C}],\text{ where }\mathbb{C}\in\{\mathbb{C}_{c},\mathbb{C}_{t}\}&\text{(Prop. 20)}\end{array}\] ## 4 Frame Definability We have shown in the previous section that \(\mathcal{L}(\nabla,\bullet)\) is more expressive than \(\mathcal{L}(\nabla)\) and \(\mathcal{L}(\bullet)\) (at the level of models). It may then be natural to ask whether a similar situation holds at the level of frames. Recall that it is shown in [12, Prop. 7] that all frame properties in Def. 3, in particular \((n)\), are undefinable in \(\mathcal{L}(\nabla)\). In what follows, we shall show that all frame properties in question except for \((n)\) are undefinable in \(\mathcal{L}(\nabla,\bullet)\); thus \(\mathcal{L}(\nabla,\bullet)\) is also more expressive than \(\mathcal{L}(\nabla)\) and \(\mathcal{L}(\bullet)\) at the level of frames. First, we need some related notions. **Definition 21**.: Let \(\Gamma\) be a set of \(\mathcal{L}(\nabla,\bullet)\)-formulas, and \(P\) a neighborhood property. We say that \(\Gamma\) defines \(P\), if for all frames \(\mathcal{F}\), \(\mathcal{F}\) has \(P\) if and only if \(\mathcal{F}\vDash\Gamma\). If \(\Gamma\) is a singleton, say \(\{\varphi\}\), we will write \(\mathcal{F}\vDash\varphi\) rather than \(\mathcal{F}\vDash\{\varphi\}\). We say that \(P\) is definable in \(\mathcal{L}(\nabla,\bullet)\), if there exists a set of \(\mathcal{L}(\nabla,\bullet)\)-formulas that defines it. **Proposition 22**.: The frame property \((n)\) is definable in \(\mathcal{L}(\nabla,\bullet)\). Proof.: [11] has shown that \((n)\) is definable in \(\mathcal{L}(\bullet)\), by \(\circ\top\). Therefore, \((n)\) is also definable in \(\mathcal{L}(\nabla,\bullet)\), by \(\circ\top\). **Proposition 23**.: The frame properties \((r)\), \((i)\), \((c)\), \((d)\), \((t)\) and \((b)\) are undefinable in \(\mathcal{L}(\nabla,\bullet)\). Proof.: Consider the following frames \(\mathcal{F}_{1}=\langle S_{1},N_{1}\rangle\), \(\mathcal{F}_{2}=\langle S_{2},N_{2}\rangle\), and \(\mathcal{F}_{3}=\langle S_{3},N_{3}\rangle\)9: Footnote 9: These come from [12, Prop. 7].
(The diagrams of \(\mathcal{F}_{1}\), \(\mathcal{F}_{2}\) and \(\mathcal{F}_{3}\) are omitted here; they can be found in [12, Prop. 7].) It has been observed in [12, Prop. 7] that \(\mathcal{F}_{1}\) satisfies \((d)\) and \((t)\) but \(\mathcal{F}_{2}\) does not. Also, it is straightforward to check that \(\mathcal{F}_{2}\) satisfies \((c)\) but \(\mathcal{F}_{1}\) does not. Moreover, \(\mathcal{F}_{2}\) satisfies \((r)\), \((i)\) and \((b)\), whereas \(\mathcal{F}_{3}\) does not. To see that \(\mathcal{F}_{3}\) does not satisfy \((b)\), note that \(s_{3}\in\{s_{3}\}\) but \(\{u\in S_{3}\mid\{t_{3}\}\notin N_{3}(u)\}=\emptyset\notin N_{3}(s_{3})\). In what follows, we show that for all \(\varphi\in\mathcal{L}(\nabla,\bullet)\), \(\mathcal{F}_{1}\vDash\varphi\) iff \(\mathcal{F}_{2}\vDash\varphi\) iff \(\mathcal{F}_{3}\vDash\varphi\). Suppose that \(\mathcal{F}_{1}\nvDash\varphi\). Then there exists \(\mathcal{M}_{1}=\langle\mathcal{F}_{1},V_{1}\rangle\) such that \(\mathcal{M}_{1},s_{1}\nvDash\varphi\). Define a valuation \(V_{2}\) on \(\mathcal{F}_{2}\) as \(s_{2}\in V_{2}(p)\) iff \(s_{1}\in V_{1}(p)\) for all \(p\in\mathbf{P}\). By induction on \(\varphi\), we show that \((*)\): \(\mathcal{M}_{1},s_{1}\vDash\varphi\) iff \(\mathcal{M}_{2},s_{2}\vDash\varphi\), where \(\mathcal{M}_{2}=\langle\mathcal{F}_{2},V_{2}\rangle\). The nontrivial cases are \(\nabla\varphi\) and \(\bullet\varphi\). The case \(\nabla\varphi\) can be shown as in [12, Prop. 7]. For the case \(\bullet\varphi\), notice that \(\mathcal{M}_{1},s_{1}\vDash\bullet\varphi\) iff \((\mathcal{M}_{1},s_{1}\vDash\varphi\) and \(\varphi^{\mathcal{M}_{1}}\notin N_{1}(s_{1}))\) iff \((\mathcal{M}_{1},s_{1}\vDash\varphi\) and \(\varphi^{\mathcal{M}_{1}}\neq\{s_{1}\})\), where the last conjunction is contradictory; thus \(\mathcal{M}_{1},s_{1}\nvDash\bullet\varphi\). A similar argument gives us \(\mathcal{M}_{2},s_{2}\nvDash\bullet\varphi\). We have thus proved \((*)\). This entails that \(\mathcal{M}_{2},s_{2}\nvDash\varphi\), and thus \(\mathcal{F}_{2}\nvDash\varphi\). The converse is similar. Therefore, \(\mathcal{F}_{1}\vDash\varphi\) iff \(\mathcal{F}_{2}\vDash\varphi\). It remains only to show that \(\mathcal{F}_{2}\vDash\varphi\) iff \(\mathcal{F}_{3}\vDash\varphi\). Assume that \(\mathcal{F}_{2}\nvDash\varphi\). Then there exists \(\mathcal{M}_{2}=\langle\mathcal{F}_{2},V_{2}\rangle\) such that \(\mathcal{M}_{2},s_{2}\nvDash\varphi\). Define a valuation \(V_{3}\) on \(\mathcal{F}_{3}\) such that \(s_{3}\in V_{3}(p)\) iff \(s_{2}\in V_{2}(p)\) for all \(p\in\mathbf{P}\). By induction on \(\varphi\in\mathcal{L}(\nabla,\bullet)\), we show that \((**)\): \(\mathcal{M}_{2},s_{2}\vDash\varphi\) iff \(\mathcal{M}_{3},s_{3}\vDash\varphi\), where \(\mathcal{M}_{3}=\langle\mathcal{F}_{3},V_{3}\rangle\). The nontrivial cases are \(\nabla\varphi\) and \(\bullet\varphi\). Again, the case \(\nabla\varphi\) can be shown as in [12, Prop. 7]. For the case \(\bullet\varphi\), just note that \(\mathcal{M}_{3},s_{3}\vDash\bullet\varphi\) iff \((\mathcal{M}_{3},s_{3}\vDash\varphi\) and \(\varphi^{\mathcal{M}_{3}}\notin N_{3}(s_{3}))\) iff \((\mathcal{M}_{3},s_{3}\vDash\varphi\) and \(\varphi^{\mathcal{M}_{3}}\neq\{s_{3}\}\) and \(\varphi^{\mathcal{M}_{3}}\neq\{t_{3}\}\) and \(\varphi^{\mathcal{M}_{3}}\neq\{s_{3},t_{3}\})\) iff false. Thus \((**)\) holds. This implies that \(\mathcal{M}_{3},s_{3}\nvDash\varphi\), and then \(\mathcal{F}_{3}\nvDash\varphi\). The converse is analogous.
Therefore, \(\mathcal{F}_{2}\vDash\varphi\) iff \(\mathcal{F}_{3}\vDash\varphi\). If \((r)\) were to be defined by a set of \(\mathcal{L}(\nabla,\bullet)\)-formulas, say \(\Sigma\), then as \(\mathcal{F}_{2}\) satisfies \((r)\), we have \(\mathcal{F}_{2}\vDash\Sigma\). Then we should also have \(\mathcal{F}_{3}\vDash\Sigma\), which means that \(\mathcal{F}_{3}\) has \((r)\): a contradiction. Therefore, \((r)\) is undefinable in \(\mathcal{L}(\nabla,\bullet)\). Similarly, we can show that the other frame properties in question are undefinable in \(\mathcal{L}(\nabla,\bullet)\). **Proposition 24**.: The frame properties \((s)\) and \((4)\) are undefinable in \(\mathcal{L}(\nabla,\bullet)\). Proof.: Consider the following frames \(\mathcal{F}=\langle S,N\rangle\) and \(\mathcal{F}^{\prime}=\langle S^{\prime},N^{\prime}\rangle\), where \(S=\{s,t\}\) and \(S^{\prime}=\{s^{\prime},t^{\prime}\}\). (The diagrams are omitted here; as the computations below show, \(N(s)=N(t)=\{\{s\},\{s,t\}\}\), \(N^{\prime}(s^{\prime})=\{\{s^{\prime}\},\{s^{\prime},t^{\prime}\}\}\), and \(N^{\prime}(t^{\prime})=\{\emptyset,\{s^{\prime},t^{\prime}\}\}\).) Firstly, one may easily see that \(\mathcal{F}\) has \((s)\). Also, \(\mathcal{F}\) has \((4)\). Suppose that \(X\in N(s)\), to show that \(\{u\in S\mid X\in N(u)\}\in N(s)\). By supposition, \(X=\{s\}\) or \(X=\{s,t\}\). Either case implies that \(\{u\in S\mid X\in N(u)\}=\{s,t\}\in N(s)\). Thus \(N(s)\) has \((4)\). A similar argument applies to showing that \(N(t)\) has \((4)\). Secondly, \(\mathcal{F}^{\prime}\) does not have \((s)\), since \(\emptyset\in N^{\prime}(t^{\prime})\) and \(\emptyset\subseteq\{t^{\prime}\}\) but \(\{t^{\prime}\}\notin N^{\prime}(t^{\prime})\). Moreover, \(\mathcal{F}^{\prime}\) does not have \((4)\). This is because, for instance, \(\emptyset\in N^{\prime}(t^{\prime})\) but \(\{u\in S^{\prime}\mid\emptyset\in N^{\prime}(u)\}=\{t^{\prime}\}\notin N^{\prime}(t^{\prime})\). Thirdly, for all \(\psi\in\mathcal{L}(\nabla,\bullet)\), we have that \(\mathcal{F}\vDash\psi\) iff \(\mathcal{F}^{\prime}\vDash\psi\). Suppose that \(\mathcal{F}\nvDash\psi\). Then there exist \(\mathcal{M}=\langle\mathcal{F},V\rangle\) and \(x\in S\) such that \(\mathcal{M},x\nvDash\psi\). Define \(V^{\prime}\) to be a valuation on \(\mathcal{F}^{\prime}\) such that \(s\in V(p)\) iff \(s^{\prime}\in V^{\prime}(p)\), and \(t\in V(p)\) iff \(t^{\prime}\in V^{\prime}(p)\). In what follows, we show \((*)\): for all \(\varphi\in\mathcal{L}(\nabla,\bullet)\), \(\mathcal{M},s\vDash\varphi\) iff \(\mathcal{M}^{\prime},s^{\prime}\vDash\varphi\), and \(\mathcal{M},t\vDash\varphi\) iff \(\mathcal{M}^{\prime},t^{\prime}\vDash\varphi\), where \(\mathcal{M}^{\prime}=\langle\mathcal{F}^{\prime},V^{\prime}\rangle\). We proceed by induction on \(\varphi\), where the nontrivial cases are \(\nabla\varphi\) and \(\bullet\varphi\). For the case \(\nabla\varphi\), we have the following equivalences.
\[\begin{array}{ll}&\mathcal{M},s\vDash\nabla\varphi\\ \Longleftrightarrow&\varphi^{\mathcal{M}}\notin N(s)\text{ and }S\backslash\varphi^{\mathcal{M}}\notin N(s)\\ \Longleftrightarrow&\varphi^{\mathcal{M}}\notin\{\{s\},\{s,t\}\}\text{ and }S\backslash\varphi^{\mathcal{M}}\notin\{\{s\},\{s,t\}\}\\ \Longleftrightarrow&\varphi^{\mathcal{M}}\neq\{s\}\text{ and }\varphi^{\mathcal{M}}\neq\{s,t\}\text{ and }\varphi^{\mathcal{M}}\neq\{t\}\text{ and }\varphi^{\mathcal{M}}\neq\emptyset\end{array}\] \[\begin{array}{ll}&\mathcal{M}^{\prime},s^{\prime}\vDash\nabla\varphi\\ \Longleftrightarrow&\varphi^{\mathcal{M}^{\prime}}\notin N^{\prime}(s^{\prime})\text{ and }S^{\prime}\backslash\varphi^{\mathcal{M}^{\prime}}\notin N^{\prime}(s^{\prime})\\ \Longleftrightarrow&\varphi^{\mathcal{M}^{\prime}}\notin\{\{s^{\prime}\},\{s^{\prime},t^{\prime}\}\}\text{ and }S^{\prime}\backslash\varphi^{\mathcal{M}^{\prime}}\notin\{\{s^{\prime}\},\{s^{\prime},t^{\prime}\}\}\\ \Longleftrightarrow&\varphi^{\mathcal{M}^{\prime}}\neq\{s^{\prime}\}\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\{s^{\prime},t^{\prime}\}\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\{t^{\prime}\}\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\emptyset\end{array}\] In each case, the last line of the above equivalences states that \(\varphi\) cannot be interpreted on the related model, which is impossible. Thus \(\mathcal{M},s\nvDash\nabla\varphi\) and \(\mathcal{M}^{\prime},s^{\prime}\nvDash\nabla\varphi\). Analogously, we can show that \(\mathcal{M},t\nvDash\nabla\varphi\) and \(\mathcal{M}^{\prime},t^{\prime}\nvDash\nabla\varphi\). For the case \(\bullet\varphi\), we have the following equivalences. \[\begin{array}{ll}&\mathcal{M},s\vDash\bullet\varphi\\ \Longleftrightarrow&\mathcal{M},s\vDash\varphi\text{ and }\varphi^{\mathcal{M}}\notin N(s)\\ \Longleftrightarrow&\mathcal{M},s\vDash\varphi\text{ and }\varphi^{\mathcal{M}}\neq\{s\}\text{ and }\varphi^{\mathcal{M}}\neq\{s,t\}\\ \Longleftrightarrow&\text{false}\end{array}\] \[\begin{array}{ll}&\mathcal{M}^{\prime},s^{\prime}\vDash\bullet\varphi\\ \Longleftrightarrow&\mathcal{M}^{\prime},s^{\prime}\vDash\varphi\text{ and }\varphi^{\mathcal{M}^{\prime}}\notin N^{\prime}(s^{\prime})\\ \Longleftrightarrow&\mathcal{M}^{\prime},s^{\prime}\vDash\varphi\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\{s^{\prime}\}\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\{s^{\prime},t^{\prime}\}\\ \Longleftrightarrow&\text{false}\end{array}\] This shows that \(\mathcal{M},s\vDash\bullet\varphi\) iff \(\mathcal{M}^{\prime},s^{\prime}\vDash\bullet\varphi\). Therefore, \(\mathcal{M},s\vDash\varphi\) iff \(\mathcal{M}^{\prime},s^{\prime}\vDash\varphi\) for all \(\varphi\in\mathcal{L}(\nabla,\bullet)\).
\[\begin{array}{ll}&\mathcal{M},t\vDash\bullet\varphi\\ \Longleftrightarrow&\mathcal{M},t\vDash\varphi\text{ and }\varphi^{\mathcal{M}}\notin N(t)\\ \Longleftrightarrow&\mathcal{M},t\vDash\varphi\text{ and }\varphi^{\mathcal{M}}\neq\{s\}\text{ and }\varphi^{\mathcal{M}}\neq\{s,t\}\\ \Longleftrightarrow&\mathcal{M},t\vDash\varphi\text{ and }\mathcal{M},s\nvDash\varphi\\ \Longleftrightarrow&\mathcal{M}^{\prime},t^{\prime}\vDash\varphi\text{ and }\mathcal{M}^{\prime},s^{\prime}\nvDash\varphi\\ \Longleftrightarrow&\mathcal{M}^{\prime},t^{\prime}\vDash\varphi\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\{s^{\prime},t^{\prime}\}\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\emptyset\\ \Longleftrightarrow&\mathcal{M}^{\prime},t^{\prime}\vDash\varphi\text{ and }\varphi^{\mathcal{M}^{\prime}}\notin N^{\prime}(t^{\prime})\\ \Longleftrightarrow&\mathcal{M}^{\prime},t^{\prime}\vDash\bullet\varphi\end{array}\] Here the fourth equivalence holds by the induction hypothesis. This gives us that \(\mathcal{M},t\vDash\varphi\) iff \(\mathcal{M}^{\prime},t^{\prime}\vDash\varphi\) for all \(\varphi\in\mathcal{L}(\nabla,\bullet)\). We have now completed the proof of \((*)\). If \((s)\) were to be defined by a set of \(\mathcal{L}(\nabla,\bullet)\)-formulas, say \(\Gamma\), then as \(\mathcal{F}\) has \((s)\), we would have \(\mathcal{F}\vDash\Gamma\), thus we should also have \(\mathcal{F}^{\prime}\vDash\Gamma\), that is, \(\mathcal{F}^{\prime}\) has \((s)\): a contradiction. Therefore, \((s)\) is not definable in \(\mathcal{L}(\nabla,\bullet)\). Similarly, we can obtain the undefinability of \((4)\) in \(\mathcal{L}(\nabla,\bullet)\). **Proposition 25**.: The frame property \((5)\) is undefinable in \(\mathcal{L}(\nabla,\bullet)\). Proof.: Consider the following frames \(\mathcal{F}=\langle S,N\rangle\) and \(\mathcal{F}^{\prime}=\langle S^{\prime},N^{\prime}\rangle\), where \(S=\{s,t\}\) and \(S^{\prime}=\{s^{\prime},t^{\prime}\}\). (The diagrams are omitted here; as the computations below show, \(N(s)=N(t)=\{\{t\},\{s,t\}\}\), \(N^{\prime}(s^{\prime})=\{\{t^{\prime}\},\{s^{\prime},t^{\prime}\}\}\), and \(N^{\prime}(t^{\prime})\) consists of all four subsets of \(S^{\prime}\).) Firstly, \(\mathcal{F}\) has \((5)\). Suppose that \(X\notin N(s)\), to prove that \(\{u\in S\mid X\notin N(u)\}\in N(s)\). By supposition, \(X=\emptyset\) or \(X=\{s\}\). Either case implies that \(\{u\in S\mid X\notin N(u)\}=\{s,t\}\in N(s)\). Thus \(N(s)\) has \((5)\). A similar argument shows that \(N(t)\) has \((5)\). Secondly, \(\mathcal{F}^{\prime}\) does not have \((5)\). For instance, \(\emptyset\notin N^{\prime}(s^{\prime})\) and \(\{u\in S^{\prime}\mid\emptyset\notin N^{\prime}(u)\}=\{s^{\prime}\}\notin N^{\prime}(s^{\prime})\). Thirdly, for all \(\psi\in\mathcal{L}(\nabla,\bullet)\), we have that \(\mathcal{F}\vDash\psi\) iff \(\mathcal{F}^{\prime}\vDash\psi\). Suppose that \(\mathcal{F}\nvDash\psi\). Then there exist \(\mathcal{M}=\langle\mathcal{F},V\rangle\) and \(x\in S\) such that \(\mathcal{M},x\nvDash\psi\). Define \(V^{\prime}\) to be a valuation on \(\mathcal{F}^{\prime}\) such that \(s\in V(p)\) iff \(s^{\prime}\in V^{\prime}(p)\), and \(t\in V(p)\) iff \(t^{\prime}\in V^{\prime}(p)\). In what follows, we show \((**)\): for all \(\varphi\in\mathcal{L}(\nabla,\bullet)\), \(\mathcal{M},s\vDash\varphi\) iff \(\mathcal{M}^{\prime},s^{\prime}\vDash\varphi\), and \(\mathcal{M},t\vDash\varphi\) iff \(\mathcal{M}^{\prime},t^{\prime}\vDash\varphi\), where \(\mathcal{M}^{\prime}=\langle\mathcal{F}^{\prime},V^{\prime}\rangle\). We proceed by induction on \(\varphi\), where the nontrivial cases are \(\nabla\varphi\) and \(\bullet\varphi\). For the case \(\nabla\varphi\), we have the following equivalences.
\[\begin{array}{ll}&\mathcal{M},s\vDash\nabla\varphi\\ \Longleftrightarrow&\varphi^{\mathcal{M}}\notin N(s)\text{ and }S\backslash \varphi^{\mathcal{M}}\notin N(s)\\ \Longleftrightarrow&\varphi^{\mathcal{M}}\notin\{\{t\},\{s,t\}\}\text{ and }S\backslash\varphi^{\mathcal{M}}\notin\{\{t\},\{s,t\}\}\\ \Longleftrightarrow&\varphi^{\mathcal{M}}\neq\{t\}\text{ and }\varphi^{\mathcal{M}}\neq\{s,t\}\text{ and }\varphi^{\mathcal{M}}\neq\{s\}\text{ and }\varphi^{\mathcal{M}}\neq\emptyset\\ \end{array}\] \[\begin{array}{ll}&\mathcal{M}^{\prime},s^{\prime}\vDash\nabla\varphi\\ \Longleftrightarrow&\varphi^{\mathcal{M}^{\prime}}\notin N^{\prime}(s^{ \prime})\text{ and }S^{\prime}\backslash\varphi^{\mathcal{M}^{\prime}}\notin N^{\prime}(s^{ \prime})\\ \Longleftrightarrow&\varphi^{\mathcal{M}^{\prime}}\notin\{\{t^{\prime}\},\{s^ {\prime},t^{\prime}\}\}\text{ and }S^{\prime}\backslash\varphi^{\mathcal{M}^{\prime}}\notin\{\{t^{\prime}\},\{s^ {\prime},t^{\prime}\}\}\\ \Longleftrightarrow&\varphi^{\mathcal{M}^{\prime}}\neq\{t^{\prime}\}\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\{s^{\prime},t^{\prime}\}\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\{s^{\prime}\}\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\emptyset\\ \end{array}\] In each case, the last line of the above proofs states that \(\varphi\) cannot be interpreted on the related models, which is impossible. Thus \(\mathcal{M},s\nvDash\nabla\varphi\) and \(\mathcal{M}^{\prime},s^{\prime}\nvDash\nabla\varphi\). Analogously, we can show that \(\mathcal{M},t\nvDash\nabla\varphi\) and \(\mathcal{M}^{\prime},t^{\prime}\nvDash\nabla\varphi\). For the case \(\bullet\varphi\), we have the following equivalences. \[\begin{array}{ll}&\mathcal{M},t\vDash\bullet\varphi\\ \Longleftrightarrow&\mathcal{M},t\vDash\varphi\text{ and }\varphi^{\mathcal{M}} \notin N(t)\\ \Longleftrightarrow&\mathcal{M},t\vDash\varphi\text{ and }\varphi^{\mathcal{M}} \neq\{t\}\text{ and }\varphi^{\mathcal{M}}\neq\{s,t\}\\ \Longleftrightarrow&\text{false}\\ \end{array}\] \[\begin{array}{ll}&\mathcal{M}^{\prime},t^{\prime}\vDash\bullet\varphi\\ \Longleftrightarrow&\mathcal{M}^{\prime},t^{\prime}\vDash\varphi\text{ and }\varphi^{\mathcal{M}^{\prime}}\notin N^{\prime}(t^{\prime})\\ \Longleftrightarrow&\mathcal{M}^{\prime},t^{\prime}\vDash\varphi\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\emptyset\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\{s^{\prime}\}\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\{t^{\prime}\}\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\{s^{\prime},t^{\prime}\}\\ \Longleftrightarrow&\text{false}\\ \end{array}\] Thus \(\mathcal{M},t\vDash\bullet\varphi\) iff \(\mathcal{M}^{\prime},t^{\prime}\vDash\bullet\varphi\). Therefore, \(\mathcal{M},t\vDash\varphi\) iff \(\mathcal{M}^{\prime},t^{\prime}\vDash\varphi\) for all \(\varphi\in\mathcal{L}(\nabla,\bullet)\). 
\[\begin{array}{ll}&\mathcal{M},s\vDash\bullet\varphi\\ \Longleftrightarrow&\mathcal{M},s\vDash\varphi\text{ and }\varphi^{\mathcal{M}}\notin N(s)\\ \Longleftrightarrow&\mathcal{M},s\vDash\varphi\text{ and }\varphi^{\mathcal{M}}\neq\{t\}\text{ and }\varphi^{\mathcal{M}}\neq\{s,t\}\\ \Longleftrightarrow&\mathcal{M},s\vDash\varphi\text{ and }\mathcal{M},t\nvDash\varphi\\ \Longleftrightarrow&\mathcal{M}^{\prime},s^{\prime}\vDash\varphi\text{ and }\mathcal{M}^{\prime},t^{\prime}\nvDash\varphi\\ \Longleftrightarrow&\mathcal{M}^{\prime},s^{\prime}\vDash\varphi\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\{t^{\prime}\}\text{ and }\varphi^{\mathcal{M}^{\prime}}\neq\{s^{\prime},t^{\prime}\}\\ \Longleftrightarrow&\mathcal{M}^{\prime},s^{\prime}\vDash\varphi\text{ and }\varphi^{\mathcal{M}^{\prime}}\notin N^{\prime}(s^{\prime})\\ \Longleftrightarrow&\mathcal{M}^{\prime},s^{\prime}\vDash\bullet\varphi\end{array}\] Here the fourth equivalence holds by the induction hypothesis. Therefore, \(\mathcal{M},s\vDash\varphi\) iff \(\mathcal{M}^{\prime},s^{\prime}\vDash\varphi\) for all \(\varphi\in\mathcal{L}(\nabla,\bullet)\). This completes the proof of \((\ast\ast)\). If \((5)\) were to be defined by a set of \(\mathcal{L}(\nabla,\bullet)\)-formulas, say \(\Gamma\), then as \(\mathcal{F}\) has \((5)\), we would have \(\mathcal{F}\vDash\Gamma\), thus we should also have \(\mathcal{F}^{\prime}\vDash\Gamma\), that is, \(\mathcal{F}^{\prime}\) has \((5)\): a contradiction. Therefore, \((5)\) is not definable in \(\mathcal{L}(\nabla,\bullet)\). ## 5 Axiomatizations In this section, we axiomatize \(\mathcal{L}(\nabla,\bullet)\) over various classes of neighborhood frames. ### Classical logic #### 5.1.1 Proof system and Soundness **Definition 26**.: The classical logic of \(\mathcal{L}(\nabla,\bullet)\), denoted \(\mathbf{E}^{\nabla\bullet}\), consists of the following axioms and inference rules: \[\begin{array}{ll}\text{TAUT}&\text{all instances of tautologies}\\ \text{E1}&\nabla\varphi\leftrightarrow\nabla\neg\varphi\\ \text{E2}&\bullet\varphi\rightarrow\varphi\\ \text{E3}&\nabla\varphi\rightarrow\bullet\varphi\vee\bullet\neg\varphi\\ \text{MP}&\dfrac{\varphi,\varphi\rightarrow\psi}{\psi}\\ \text{RE}\nabla&\dfrac{\varphi\leftrightarrow\psi}{\nabla\varphi\leftrightarrow\nabla\psi}\\ \text{RE}\bullet&\dfrac{\varphi\leftrightarrow\psi}{\bullet\varphi\leftrightarrow\bullet\psi}\end{array}\] Intuitively, E1 says that one is (first-order) ignorant whether a proposition holds if and only if one is ignorant whether its negation holds; E2 says that one is (Fitchean) ignorant of the fact that \(\varphi\) only if it is the case that \(\varphi\); E3 describes the relationship between Fitchean ignorance and first-order ignorance: if one is ignorant whether \(\varphi\), then either one is ignorant of the fact that \(\varphi\) or one is ignorant of the fact that \(\varphi\) is not the case; RE\(\nabla\) and RE\(\bullet\) concern the replacement of equivalences for first-order ignorance and Fitchean ignorance, respectively. It is straightforward by axiom E2 that \(\bullet\nabla\varphi\rightarrow\nabla\varphi\) is provable in \(\mathbf{E}^{\nabla\bullet}\), which says that under any condition, Rumsfeld ignorance implies first-order ignorance. The following result states how to derive Fitchean ignorance: if one is _ignorant whether_ a _true_ proposition holds, then one is _ignorant of_ that proposition. It will be used in several places below (for instance, Lemma 31, Lemma 34, and Prop. 40).
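Before turning to that result, the neighborhood semantics of \(\nabla\) and \(\bullet\) that these axioms are meant to capture can be made concrete with a small executable sketch. The following Python snippet is our own illustration (the two-state model, the tuple encoding of formulas, and all function names are assumptions made for this example, not part of the original development): it computes the extensions of \(\nabla\varphi\) and \(\bullet\varphi\) on a finite neighborhood model and sanity-checks instances of E2 and E3 there.

```python
# A finite neighborhood model (hypothetical example data).
S = {'s', 't'}
N = {'s': {frozenset({'s'})}, 't': {frozenset({'s', 't'})}}
V = {'p': {'s'}}

def extension(phi):
    """Denotation of a formula encoded as nested tuples:
    ('p',), ('not', phi), ('nabla', phi), ('bullet', phi)."""
    op = phi[0]
    if op == 'p':
        return frozenset(V['p'])
    if op == 'not':
        return frozenset(S - extension(phi[1]))
    if op == 'nabla':   # u |= nabla(phi) iff phi^M not in N(u) and S\phi^M not in N(u)
        X = extension(phi[1])
        return frozenset(u for u in S
                         if X not in N[u] and frozenset(S - X) not in N[u])
    if op == 'bullet':  # u |= bullet(phi) iff u in phi^M and phi^M not in N(u)
        X = extension(phi[1])
        return frozenset(u for u in X if X not in N[u])
    raise ValueError(op)

p = ('p',)
# E2: bullet(p) -> p, i.e. the extension of bullet(p) sits inside that of p.
assert extension(('bullet', p)) <= extension(p)
# E3: nabla(p) -> bullet(p) or bullet(not p).
assert extension(('nabla', p)) <= (extension(('bullet', p))
                                   | extension(('bullet', ('not', p))))
print('E2 and E3 check out on this model')
```

Since E2 and E3 are valid over all neighborhood models (for E3, see the proof of Prop. 15), both assertions would succeed for any choice of \(S\), \(N\) and \(V\) plugged in here.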
**Proposition 27**.: \(\vdash\nabla\varphi\wedge\varphi\rightarrow\bullet\varphi\)_. Equivalently, \(\vdash\circ\varphi\wedge\varphi\rightarrow\Delta\varphi\)._ Proof.: We have the following proof sequence: \[\begin{array}{lll}(i)&\nabla\varphi\rightarrow\bullet\varphi\vee\bullet\neg\varphi&\text{E3}\\ (ii)&\bullet\neg\varphi\rightarrow\neg\varphi&\text{E2}\\ (iii)&\nabla\varphi\rightarrow\bullet\varphi\vee\neg\varphi&(i),(ii)\\ (iv)&\nabla\varphi\wedge\varphi\rightarrow\bullet\varphi&(iii)\end{array}\] As a corollary (taking \(\varphi\) to be \(\nabla\varphi\)), we have \(\vdash\nabla\nabla\varphi\wedge\nabla\varphi\rightarrow\bullet\nabla\varphi\). This means, in the terms of Fine [15], that second-order ignorance plus first-order ignorance implies Rumsfeld ignorance. On one hand, this is not noticed in Fine [15]; on the other hand, this plus the transitivity entails that second-order ignorance implies Rumsfeld ignorance, a result in the paper in question. The following result indicates how to derive a proposition from Fitchean ignorance. **Proposition 28**.: \(\vdash\bullet(\circ\varphi\vee\psi\rightarrow\varphi)\rightarrow\varphi\) Proof.: We have the following proof sequence. \[\begin{array}{lll}(1)&\bullet(\circ\varphi\vee\psi\rightarrow\varphi)\rightarrow(\circ\varphi\vee\psi\rightarrow\varphi)&\text{E2}\\ (2)&(\circ\varphi\vee\psi\rightarrow\varphi)\rightarrow(\circ\varphi\rightarrow\varphi)&\text{TAUT}\\ (3)&\bullet(\circ\varphi\vee\psi\rightarrow\varphi)\rightarrow(\circ\varphi\rightarrow\varphi)&(1),(2)\\ (4)&\bullet\varphi\rightarrow\varphi&\text{E2}\\ (5)&\bullet(\circ\varphi\vee\psi\rightarrow\varphi)\rightarrow\varphi&(3),(4)\end{array}\] **Proposition 29**.: \(\mathbf{E}^{\nabla\bullet}\) is sound with respect to the class of all (neighborhood) frames. Proof.: It suffices to show the validity of axiom E3. This has been shown in the proof of Prop. 15. #### 5.1.2 Completeness **Definition 30**.: The canonical model for \(\mathbf{E}^{\nabla\bullet}\) is \(\mathcal{M}^{c}=\langle S^{c},N^{c},V^{c}\rangle\), where * \(S^{c}=\{s\mid s\) is a maximal consistent set for \(\mathbf{E}^{\nabla\bullet}\}\), * \(N^{c}(s)=\{|\varphi|\mid\circ\varphi\wedge\Delta\varphi\in s\}\), * \(V^{c}(p)=\{s\in S^{c}\mid p\in s\}\). **Lemma 31**.: For all \(\varphi\in\mathcal{L}(\nabla,\bullet)\), for all \(s\in S^{c}\), we have \[\mathcal{M}^{c},s\vDash\varphi\iff\varphi\in s.\] That is, \(|\varphi|=\varphi^{\mathcal{M}^{c}}\). Proof.: By induction on \(\varphi\). The nontrivial cases are \(\nabla\varphi\) and \(\bullet\varphi\). For case \(\nabla\varphi\): First, suppose that \(\nabla\varphi\in s\), to show that \(\mathcal{M}^{c},s\vDash\nabla\varphi\). By supposition, we have \(\Delta\varphi\notin s\). Then by definition of \(N^{c}\), we infer that \(|\varphi|\notin N^{c}(s)\). By supposition again and axiom E1, we have \(\nabla\neg\varphi\in s\), and thus \(\Delta\neg\varphi\notin s\), and hence \(|\neg\varphi|\notin N^{c}(s)\), that is, \(S^{c}\backslash|\varphi|\notin N^{c}(s)\). By induction hypothesis, we have \(\varphi^{\mathcal{M}^{c}}\notin N^{c}(s)\) and \(S^{c}\backslash\varphi^{\mathcal{M}^{c}}\notin N^{c}(s)\). Therefore, \(\mathcal{M}^{c},s\vDash\nabla\varphi\). Conversely, assume that \(\nabla\varphi\notin s\) (that is, \(\nabla\neg\varphi\notin s\)), to show that \(\mathcal{M}^{c},s\nvDash\nabla\varphi\). By assumption, \(\Delta\varphi\in s\) and \(\Delta\neg\varphi\in s\).
Since \(s\in S^{c}\), we have either \(\varphi\in s\) or \(\neg\varphi\in s\). If \(\varphi\in s\), then by axiom E2, \(\circ\neg\varphi\in s\); together with \(\Delta\neg\varphi\in s\), this gives \(|\neg\varphi|\in N^{c}(s)\), viz. \(S^{c}\backslash|\varphi|\in N^{c}(s)\), which by induction hypothesis implies that \(S^{c}\backslash\varphi^{\mathcal{M}^{c}}\in N^{c}(s)\). If \(\neg\varphi\in s\), then again by axiom E2, \(\circ\varphi\in s\); together with \(\Delta\varphi\in s\), this gives \(|\varphi|\in N^{c}(s)\), which by induction hypothesis entails that \(\varphi^{\mathcal{M}^{c}}\in N^{c}(s)\). We have now shown that either \(\varphi^{\mathcal{M}^{c}}\in N^{c}(s)\) or \(S^{c}\backslash\varphi^{\mathcal{M}^{c}}\in N^{c}(s)\), and we therefore conclude that \(\mathcal{M}^{c},s\nvDash\nabla\varphi\). For case \(\bullet\varphi\): First, suppose that \(\bullet\varphi\in s\), to show that \(\mathcal{M}^{c},s\vDash\bullet\varphi\). By supposition and axiom E2, we have \(\varphi\in s\). By induction hypothesis, \(\mathcal{M}^{c},s\vDash\varphi\). By supposition and definition of \(N^{c}\), we infer that \(|\varphi|\notin N^{c}(s)\), which by induction means that \(\varphi^{\mathcal{M}^{c}}\notin N^{c}(s)\). Therefore, \(\mathcal{M}^{c},s\vDash\bullet\varphi\). Conversely, assume that \(\bullet\varphi\notin s\), to demonstrate that \(\mathcal{M}^{c},s\nvDash\bullet\varphi\). By assumption, \(\circ\varphi\in s\). If \(\mathcal{M}^{c},s\nvDash\varphi\), it is obvious that \(\mathcal{M}^{c},s\nvDash\bullet\varphi\). Otherwise, by induction hypothesis, we have \(\varphi\in s\), then \(\circ\varphi\wedge\varphi\in s\). By Prop. 27, \(\Delta\varphi\in s\), and thus \(|\varphi|\in N^{c}(s)\); by induction we obtain \(\varphi^{\mathcal{M}^{c}}\in N^{c}(s)\), and therefore we have also \(\mathcal{M}^{c},s\nvDash\bullet\varphi\). It is then a standard exercise to show the following. **Theorem 32**.: \(\mathbf{E}^{\nabla\bullet}\) is sound and strongly complete with respect to the class of all neighborhood frames. ### Extensions #### 5.2.1 \(\mathbf{E}^{\nabla\bullet}_{\mathbf{c}}\) Define \(\mathbf{E}^{\nabla\bullet}_{\mathbf{c}}\) to be the smallest extension of \(\mathbf{E}^{\nabla\bullet}\) with the following axiom, denoted E4: \[\bullet\varphi\rightarrow\nabla\varphi.\] Intuitively, E4 says that Fitchean ignorance implies first-order ignorance. From E4 we can easily prove \(\Delta\varphi\rightarrow\circ\varphi\). This turns the canonical model for \(\mathbf{E}^{\nabla\bullet}\) (Def. 30) into the following simpler one. **Definition 33**.: The canonical model for \(\mathbf{E}^{\nabla\bullet}_{\mathbf{c}}\) is a triple \(\mathcal{M}^{c}=\langle S^{c},N^{c},V^{c}\rangle\), where * \(S^{c}=\{s\mid s\text{ is a maximal consistent set for }\mathbf{E}_{\mathbf{c}}^{\nabla\bullet}\}\) * \(N^{c}(s)=\{|\varphi|\mid\Delta\varphi\in s\}\) * \(V^{c}(p)=\{s\in S^{c}\mid p\in s\}\). **Lemma 34**.: For all \(\varphi\in\mathcal{L}(\nabla,\bullet)\), for all \(s\in S^{c}\), we have \[\mathcal{M}^{c},s\vDash\varphi\iff\varphi\in s.\] That is, \(|\varphi|=\varphi^{\mathcal{M}^{c}}\). Proof.: By induction on \(\varphi\). The nontrivial cases are \(\nabla\varphi\) and \(\bullet\varphi\). The case \(\nabla\varphi\) has been shown in [12, Lemma 1]. It suffices to show the case \(\bullet\varphi\). Suppose that \(\bullet\varphi\in s\). Then by axiom E2, we have \(\varphi\in s\); by axiom E4, we derive that \(\nabla\varphi\in s\), and thus \(\Delta\varphi\notin s\). It follows that \(|\varphi|\notin N^{c}(s)\).
By induction hypothesis, \(\mathcal{M}^{c},s\vDash\varphi\) and \(\varphi^{\mathcal{M}^{c}}\notin N^{c}(s)\). Therefore, \(\mathcal{M}^{c},s\vDash\bullet\varphi\). Conversely, assume that \(\bullet\varphi\notin s\), to show that \(\mathcal{M}^{c},s\not\vDash\bullet\varphi\), which by induction hypothesis amounts to showing that \(\varphi\notin s\) or \(|\varphi|\in N^{c}(s)\). For this, suppose that \(\varphi\in s\); this plus the assumption implies that \(\circ\varphi\wedge\varphi\in s\). Then by Prop. 27, \(\Delta\varphi\in s\), and therefore \(|\varphi|\in N^{c}(s)\). **Proposition 35**.: \(\mathcal{M}^{c}\) possesses the property \((c)\). Proof.: Refer to [12, Thm. 2]. Now it is a standard exercise to show the following. **Theorem 36**.: \(\mathbf{E}_{\mathbf{c}}^{\nabla\bullet}\) is sound and strongly complete with respect to the class of \((c)\)-frames. In the neighborhood context \((c)\), there is some relationship between Rumsfeld ignorance, second-order ignorance and first-order ignorance. The following is immediate from the axiom E4. **Proposition 37**.: \(\bullet\nabla\varphi\rightarrow\nabla\nabla\varphi\) is provable in \(\mathbf{E}_{\mathbf{c}}^{\nabla\bullet}\). This says that under the condition \((c)\), Rumsfeld ignorance implies second-order ignorance. Combined with an instance of the axiom E2 (\(\bullet\nabla\varphi\rightarrow\nabla\varphi\)) and \(\vdash\nabla\nabla\varphi\wedge\nabla\varphi\rightarrow\bullet\nabla\varphi\) (see the remark after Prop. 27), it follows that within the neighborhood context \((c)\), Rumsfeld ignorance amounts to second-order ignorance plus first-order ignorance, and thus Rumsfeld ignorance is definable in terms of first-order ignorance. #### 5.2.2 \(\mathbf{E}\mathbf{N}^{\nabla\bullet}\) Let \(\mathbf{E}\mathbf{N}^{\nabla\bullet}=\mathbf{E}^{\nabla\bullet}+\circ\top\). From \(\circ\top\) and Prop. 27 it follows that \(\Delta\top\) is derivable in \(\mathbf{E}\mathbf{N}^{\nabla\bullet}\). **Theorem 38**.: \(\mathbf{E}\mathbf{N}^{\nabla\bullet}\) is sound and strongly complete with respect to the class of all \((n)\)-frames. Proof.: For soundness, by Prop. 29, it remains only to show the validity of \(\circ\top\) over the class of \((n)\)-frames. The validity of \(\circ\top\) can be found in Prop. 22. For completeness, define the canonical model \(\mathcal{M}^{c}\) w.r.t. \(\mathbf{EN}^{\nabla\bullet}\) as in Def. 30. By Thm. 32, it suffices to show that \(\mathcal{M}^{c}\) possesses \((n)\). By the construction of \(\mathbf{EN}^{\nabla\bullet}\), for all \(s\in S^{c}\), we have \(\circ\top\wedge\Delta\top\in s\), and thus \(|\top|\in N^{c}(s)\), that is, \(S^{c}\in N^{c}(s)\), as desired. #### 5.2.3 Monotone logic Let \(\mathbf{M}^{\nabla\bullet}\) be the extension of \(\mathbf{E}^{\nabla\bullet}\) plus the following extra axioms: \[\begin{array}{ll}\text{M1}&\nabla(\varphi\vee\psi)\wedge\nabla(\neg\varphi\vee\chi)\to\nabla\varphi\\ \text{M2}&\bullet(\varphi\vee\psi)\wedge\bullet(\neg\varphi\vee\chi)\to\nabla\varphi\\ \text{M3}&\bullet(\varphi\vee\psi)\wedge\nabla(\neg\varphi\vee\chi)\to\nabla\varphi\\ \text{M4}&\circ\varphi\wedge\varphi\to\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\psi)\end{array}\] Prop. 39 and Prop. 40 tell us how to derive \(\Delta\top\) and \(\circ\top\) in \(\mathbf{M}^{\nabla\bullet}\), respectively. They will be used in Section 6. **Proposition 39**.: \(\Delta\varphi\to\Delta\top\) is provable in \(\mathbf{M}^{\nabla\bullet}\). Proof.: We have the following proof sequence in \(\mathbf{M}^{\nabla\bullet}\).
\[\begin{array}{lll}(1)&\nabla(\varphi\vee\top)\wedge\nabla(\neg\varphi\vee\top)\to\nabla\varphi&\text{M1}\\ (2)&\top\leftrightarrow\varphi\vee\top&\text{TAUT}\\ (3)&\nabla\top\leftrightarrow\nabla(\varphi\vee\top)&(2),\text{RE}\nabla\\ (4)&\top\leftrightarrow\neg\varphi\vee\top&\text{TAUT}\\ (5)&\nabla\top\leftrightarrow\nabla(\neg\varphi\vee\top)&(4),\text{RE}\nabla\\ (6)&\nabla\top\to\nabla\varphi&(1),(3),(5)\\ (7)&\Delta\varphi\to\Delta\top&(6),\text{Def. }\Delta\end{array}\] In the above proof, \(\nabla\top\to\nabla\varphi\) says that if one is ignorant whether \(\top\) holds, then one is ignorant whether anything holds. **Proposition 40**.: \(\varphi\wedge\circ\varphi\to\circ\top\) is provable in \(\mathbf{M}^{\nabla\bullet}\). Proof.: We have the following proof sequence in \(\mathbf{M}^{\nabla\bullet}\). \[\begin{array}{lll}(1)&(\varphi\vee\top)\leftrightarrow\top&\text{TAUT}\\ (2)&\bullet(\varphi\vee\top)\leftrightarrow\bullet\top&(1),\text{RE}\bullet\\ (3)&(\neg\varphi\vee\top)\leftrightarrow\top&\text{TAUT}\\ (4)&\bullet(\neg\varphi\vee\top)\leftrightarrow\bullet\top&(3),\text{RE}\bullet\\ (5)&\bullet(\varphi\vee\top)\wedge\bullet(\neg\varphi\vee\top)\to\nabla\varphi&\text{M2}\\ (6)&\bullet\top\to\nabla\varphi&(2),(4),(5)\\ (7)&\Delta\varphi\to\circ\top&(6),\text{Def. }\Delta,\text{Def. }\circ\\ (8)&\varphi\wedge\circ\varphi\to\Delta\varphi&\text{Prop. 27}\\ (9)&\varphi\wedge\circ\varphi\to\circ\top&(7),(8)\end{array}\] **Proposition 41**.: \(\mathbf{M}^{\nabla\bullet}\) is sound with respect to the class of all \((s)\)-frames. Proof.: By soundness of \(\mathbf{E}^{\nabla\bullet}\) (Prop. 29), it suffices to show the validity of the extra axioms. Let \(\mathcal{M}=\langle S,N,V\rangle\) be an arbitrary \((s)\)-model and \(s\in S\). For M1: suppose that \(\mathcal{M},s\vDash\nabla(\varphi\vee\psi)\wedge\nabla(\neg\varphi\vee\chi)\). Then \((\varphi\vee\psi)^{\mathcal{M}}\notin N(s)\) and \((\neg\varphi\vee\chi)^{\mathcal{M}}\notin N(s)\), that is, \(\varphi^{\mathcal{M}}\cup\psi^{\mathcal{M}}\notin N(s)\) and \((\neg\varphi)^{\mathcal{M}}\cup\chi^{\mathcal{M}}\notin N(s)\). Since \(\varphi^{\mathcal{M}}\subseteq\varphi^{\mathcal{M}}\cup\psi^{\mathcal{M}}\) and \(N(s)\) is closed under supersets, we must have \(\varphi^{\mathcal{M}}\notin N(s)\). Similarly, we can show that \((\neg\varphi)^{\mathcal{M}}\notin N(s)\). Therefore, \(\mathcal{M},s\vDash\nabla\varphi\), as desired. Similarly, we can show the validity of M2 and M3. For M4: assume that \(\mathcal{M},s\vDash\circ\varphi\wedge\varphi\). Then \(\varphi^{\mathcal{M}}\in N(s)\). Since \(\varphi^{\mathcal{M}}\subseteq\varphi^{\mathcal{M}}\cup\psi^{\mathcal{M}}=(\varphi\vee\psi)^{\mathcal{M}}\), by the property \((s)\), we have \((\varphi\vee\psi)^{\mathcal{M}}\in N(s)\), and therefore \(\mathcal{M},s\vDash\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\psi)\). **Definition 42**.: Let \(\Lambda\) be an extension of \(\mathbf{M}^{\nabla\bullet}\). A triple \(\mathcal{M}^{\Lambda}=\langle S^{\Lambda},N^{\Lambda},V^{\Lambda}\rangle\) is a _canonical neighborhood model_ for \(\Lambda\) if * \(S^{\Lambda}=\{s\mid s\text{ is a maximal consistent set for }\Lambda\}\), * \(|\varphi|\in N^{\Lambda}(s)\) iff \(\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\psi)\in s\) for all \(\psi\), * \(V^{\Lambda}(p)=|p|=\{s\in S^{\Lambda}\mid p\in s\}\). We need to show that \(N^{\Lambda}\) is well defined. **Proposition 43**.: Let \(s\in S^{\Lambda}\) be as defined in Def. 42.
If \(|\varphi|=|\varphi^{\prime}|\), then \(\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\psi)\in s\) for all \(\psi\) iff \(\Delta(\varphi^{\prime}\vee\psi)\wedge\circ(\varphi^{\prime}\vee\psi)\in s\) for all \(\psi\). Proof.: Suppose that \(|\varphi|=|\varphi^{\prime}|\). Then \(\vdash\varphi\leftrightarrow\varphi^{\prime}\), and thus \(\vdash\varphi\vee\psi\leftrightarrow\varphi^{\prime}\vee\psi\). By \(\mathrm{RE}\nabla\), \(\mathrm{RE}\bullet\), Def. \(\Delta\) and Def. \(\circ\), we infer that \(\vdash\Delta(\varphi\vee\psi)\leftrightarrow\Delta(\varphi^{\prime}\vee\psi)\) and \(\vdash\circ(\varphi\vee\psi)\leftrightarrow\circ(\varphi^{\prime}\vee\psi)\), and hence \(\vdash\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\psi)\leftrightarrow\Delta(\varphi^{\prime}\vee\psi)\wedge\circ(\varphi^{\prime}\vee\psi)\). Therefore, \(\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\psi)\in s\) for all \(\psi\) iff \(\Delta(\varphi^{\prime}\vee\psi)\wedge\circ(\varphi^{\prime}\vee\psi)\in s\) for all \(\psi\). **Lemma 44**.: Let \(\mathcal{M}^{\Lambda}=\langle S^{\Lambda},N^{\Lambda},V^{\Lambda}\rangle\) be an arbitrary canonical neighborhood model for any system \(\Lambda\) extending \(\mathbf{M}^{\nabla\bullet}\). Then for all \(s\in S^{\Lambda}\), for all \(\varphi\in\mathcal{L}(\nabla,\bullet)\), we have \[\mathcal{M}^{\Lambda},s\vDash\varphi\iff\varphi\in s.\] That is, \(\varphi^{\mathcal{M}^{\Lambda}}=|\varphi|\). Proof.: By induction on \(\varphi\). The nontrivial cases are \(\nabla\varphi\) and \(\bullet\varphi\). For case \(\nabla\varphi\): Suppose that \(\mathcal{M}^{\Lambda},s\nvDash\nabla\varphi\), to show that \(\nabla\varphi\notin s\). By supposition and induction hypothesis, \(|\varphi|\in N^{\Lambda}(s)\) or \(S^{\Lambda}\backslash|\varphi|\in N^{\Lambda}(s)\) (that is, \(|\neg\varphi|\in N^{\Lambda}(s)\)). If \(|\varphi|\in N^{\Lambda}(s)\), then \(\Delta(\varphi\vee\psi)\in s\) for all \(\psi\). By letting \(\psi=\bot\), we get \(\Delta\varphi\in s\), and thus \(\nabla\varphi\notin s\). If \(|\neg\varphi|\in N^{\Lambda}(s)\), with a similar argument we can show that \(\Delta\neg\varphi\in s\), that is, \(\Delta\varphi\in s\), and we also have \(\nabla\varphi\notin s\). Conversely, assume that \(\mathcal{M}^{\Lambda},s\vDash\nabla\varphi\), to prove that \(\nabla\varphi\in s\). By assumption and induction hypothesis, \(|\varphi|\notin N^{\Lambda}(s)\) and \(S^{\Lambda}\backslash|\varphi|\notin N^{\Lambda}(s)\), that is, \(|\neg\varphi|\notin N^{\Lambda}(s)\). Then \(\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\psi)\notin s\) for some \(\psi\), and \(\Delta(\neg\varphi\vee\chi)\wedge\circ(\neg\varphi\vee\chi)\notin s\) for some \(\chi\). We consider the following cases. * \(\Delta(\varphi\vee\psi)\notin s\) and \(\Delta(\neg\varphi\vee\chi)\notin s\). That is, \(\nabla(\varphi\vee\psi)\in s\) and \(\nabla(\neg\varphi\vee\chi)\in s\). Then by axiom M1, we infer that \(\nabla\varphi\in s\). * \(\Delta(\varphi\vee\psi)\notin s\) and \(\circ(\neg\varphi\vee\chi)\notin s\). That is, \(\nabla(\varphi\vee\psi)\in s\) and \(\bullet(\neg\varphi\vee\chi)\in s\). By axiom M3, \(\nabla\neg\varphi\in s\), that is, \(\nabla\varphi\in s\). * \(\circ(\varphi\vee\psi)\notin s\) and \(\Delta(\neg\varphi\vee\chi)\notin s\). That is, \(\bullet(\varphi\vee\psi)\in s\) and \(\nabla(\neg\varphi\vee\chi)\in s\). Then by axiom M3, we derive that \(\nabla\varphi\in s\). * \(\circ(\varphi\vee\psi)\notin s\) and \(\circ(\neg\varphi\vee\chi)\notin s\).
That is, \(\bullet(\varphi\vee\psi)\in s\) and \(\bullet(\neg\varphi\vee\chi)\in s\). By axiom M2, we obtain that \(\nabla\varphi\in s\). Either case implies that \(\nabla\varphi\in s\), as desired. For case \(\bullet\varphi\): Suppose that \(\bullet\varphi\in s\), to show that \(\mathcal{M}^{\Lambda},s\vDash\bullet\varphi\). By supposition and axiom E2, we obtain \(\varphi\in s\), which by induction hypothesis means that \(\mathcal{M}^{\Lambda},s\vDash\varphi\). We have also \(|\varphi|\notin N^{\Lambda}(s)\): otherwise, by definition of \(N^{\Lambda}\), we should have \(\circ(\varphi\vee\psi)\in s\) for all \(\psi\), which then implies that \(\circ\varphi\in s\) (by letting \(\psi=\bot\)), a contradiction. Then by induction hypothesis, \(\varphi^{\mathcal{M}^{\Lambda}}\notin N^{\Lambda}(s)\). Therefore, \(\mathcal{M}^{\Lambda},s\vDash\bullet\varphi\). Conversely, assume that \(\bullet\varphi\notin s\) (that is, \(\circ\varphi\in s\)), to prove that \(\mathcal{M}^{\Lambda},s\nvDash\bullet\varphi\). For this, suppose that \(\mathcal{M}^{\Lambda},s\vDash\varphi\); by induction hypothesis, we have \(\varphi\in s\), and then \(\circ\varphi\wedge\varphi\in s\). By axiom M4, \(\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\psi)\in s\) for all \(\psi\). By definition of \(N^{\Lambda}\), we derive that \(|\varphi|\in N^{\Lambda}(s)\). Then by induction hypothesis again, we conclude that \(\varphi^{\mathcal{M}^{\Lambda}}\in N^{\Lambda}(s)\). Therefore, \(\mathcal{M}^{\Lambda},s\nvDash\bullet\varphi\), as desired. Given an extension \(\Lambda\) of \(\mathbf{M}^{\nabla\bullet}\), the minimal canonical neighborhood model for \(\Lambda\), denoted \(\mathcal{M}^{\Lambda}_{0}=\langle S^{\Lambda},N^{\Lambda}_{0},V^{\Lambda}\rangle\), is defined such that \(N^{\Lambda}_{0}(s)=\{|\varphi|\mid\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\psi)\in s\text{ for all }\psi\}\). Note that \(\mathcal{M}^{\Lambda}_{0}\) is not necessarily supplemented. Therefore, we define a notion of supplementation, which comes from [4]. **Definition 45**.: Let \(\mathcal{M}=\langle S,N,V\rangle\) be a neighborhood model. The _supplementation_ of \(\mathcal{M}\), denoted \(\mathcal{M}^{+}\), is a triple \(\langle S,N^{+},V\rangle\), in which for every \(s\in S\), \(N^{+}(s)\) is the superset closure of \(N(s)\); namely, for each \(s\in S\), \[N^{+}(s)=\{X\subseteq S\mid Y\subseteq X\text{ for some }Y\in N(s)\}.\] One may easily show that \(\mathcal{M}^{+}\) is supplemented, that is, \(\mathcal{M}^{+}\) possesses \((s)\). Also, \(N(s)\subseteq N^{+}(s)\). Moreover, the properties of being closed under intersections and containing the unit are closed under the supplementation. **Proposition 46**.: Let \(\mathcal{M}=\langle S,N,V\rangle\) be a neighborhood model and \(\mathcal{M}^{+}\) be its supplementation. If \(\mathcal{M}\) possesses \((i)\), then so does \(\mathcal{M}^{+}\); if \(\mathcal{M}\) possesses \((n)\), then so does \(\mathcal{M}^{+}\). In what follows, we will use \((\mathcal{M}^{\Lambda}_{0})^{+}\) to denote the supplementation of \(\mathcal{M}^{\Lambda}_{0}\), namely \((\mathcal{M}^{\Lambda}_{0})^{+}=\langle S^{\Lambda},(N^{\Lambda}_{0})^{+},V^{\Lambda}\rangle\), where \(\Lambda\) extends \(\mathbf{M}^{\nabla\bullet}\). By the definition of supplementation, \((\mathcal{M}^{\Lambda}_{0})^{+}\) is an \((s)\)-model.
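The supplementation of Def. 45 is easy to compute on finite models. The following Python sketch is our own illustration (the helper names and the toy input are assumptions for this example): it builds \(N^{+}(s)\) as the superset closure of \(N(s)\).

```python
from itertools import chain, combinations

def powerset(S):
    """All subsets of S, as frozensets."""
    xs = list(S)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def supplementation(S, N):
    """Superset closure of Def. 45: X is in N+(s) iff Y <= X for some Y in N(s)."""
    return {s: {X for X in powerset(S) if any(Y <= X for Y in N[s])} for s in N}

S = {'s', 't'}
N = {'s': {frozenset({'s'})}, 't': set()}
Nplus = supplementation(S, N)
assert Nplus['s'] == {frozenset({'s'}), frozenset({'s', 't'})}
assert Nplus['t'] == set()   # an empty collection of neighborhoods stays empty
```

On this toy input, \(N^{+}(s)\) picks up the missing superset \(\{s,t\}\) of \(\{s\}\), in line with the remarks that \(\mathcal{M}^{+}\) possesses \((s)\) and that \(N(s)\subseteq N^{+}(s)\).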
To show the completeness of \(\mathbf{M}^{\nabla\bullet}\) over the class of \((s)\)-frames, by Lemma 44, it remains only to show that \((\mathcal{M}^{\Lambda}_{0})^{+}\) is a canonical neighborhood model for \(\Lambda\). **Lemma 47**.: Let \(\Lambda\) extend \(\mathbf{M}^{\nabla\bullet}\). For every \(s\in S^{\Lambda}\), we have \[|\varphi|\in(N_{0}^{\Lambda})^{+}(s)\iff\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\psi)\in s\text{ for all }\psi.\] Proof.: Right-to-Left: Immediate by the definition of \(N_{0}^{\Lambda}\) and the fact that \(N_{0}^{\Lambda}(s)\subseteq(N_{0}^{\Lambda})^{+}(s)\). Left-to-Right: Suppose that \(|\varphi|\in(N_{0}^{\Lambda})^{+}(s)\), to prove that \(\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\psi)\in s\text{ for all }\psi\). By supposition, \(X\subseteq|\varphi|\) for some \(X\in N_{0}^{\Lambda}(s)\). Then there must be a \(\chi\) such that \(X=|\chi|\), and thus \(\Delta(\chi\vee\psi)\wedge\circ(\chi\vee\psi)\in s\) for all \(\psi\), and hence \(\Delta(\chi\vee\varphi\vee\psi)\wedge\circ(\chi\vee\varphi\vee\psi)\in s\). From \(|\chi|\subseteq|\varphi|\), it follows that \(\vdash\chi\rightarrow\varphi\), and then \(\vdash\chi\vee\varphi\vee\psi\leftrightarrow\varphi\vee\psi\), and thus \(\vdash\Delta(\chi\vee\varphi\vee\psi)\leftrightarrow\Delta(\varphi\vee\psi)\) and \(\vdash\circ(\chi\vee\varphi\vee\psi)\leftrightarrow\circ(\varphi\vee\psi)\) by \(\mathrm{RE}\nabla\), \(\mathrm{RE}\bullet\), Def. \(\Delta\) and Def. \(\circ\). Therefore, \(\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\psi)\in s\) for all \(\psi\). Based on the previous analysis, we have the following. **Theorem 48**.: \(\mathbf{M}^{\nabla\bullet}\) is sound and strongly complete with respect to the class of all \((s)\)-frames. We conclude this part with some results which will be used in Section 6. The following result states that if one is ignorant of the fact that either \(\varphi\) holds or one is ignorant whether \(\varphi\) holds, then one is either ignorant of the fact that \(\varphi\) or ignorant whether \(\varphi\) holds. **Proposition 49**.: \(\bullet(\varphi\vee\nabla\varphi)\rightarrow(\bullet\varphi\vee\nabla\varphi)\) is provable in \(\mathbf{M}^{\nabla\bullet}\).10 Footnote 10: In fact, we can show a stronger result: \(\bullet(\varphi\vee\nabla\psi)\rightarrow\bullet\varphi\vee\nabla\psi\) is provable in \(\mathbf{M}^{\nabla\bullet}\). But we do not need such a strong result below. Proof.: By Thm. 48, it suffices to show the formula is valid over the class of \((s)\)-frames. Let \(\mathcal{M}=\langle S,N,V\rangle\) be an \((s)\)-model and \(s\in S\). Suppose, for reductio, that \(\mathcal{M},s\vDash\bullet(\varphi\vee\nabla\varphi)\) and \(\mathcal{M},s\nvDash\bullet\varphi\vee\nabla\varphi\). From the former, it follows that \(\mathcal{M},s\vDash\varphi\vee\nabla\varphi\) and \((\varphi\vee\nabla\varphi)^{\mathcal{M}}\notin N(s)\); from the latter, it follows that \(\mathcal{M},s\nvDash\bullet\varphi\) and \(\mathcal{M},s\nvDash\nabla\varphi\). This implies that \(\mathcal{M},s\vDash\varphi\), which plus \(\mathcal{M},s\nvDash\bullet\varphi\) gives us \(\varphi^{\mathcal{M}}\in N(s)\). Since \(\varphi^{\mathcal{M}}\subseteq(\varphi\vee\nabla\varphi)^{\mathcal{M}}\), by \((s)\), we conclude that \((\varphi\vee\nabla\varphi)^{\mathcal{M}}\in N(s)\): a contradiction, as desired.
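As a quick sanity check that the property \((s)\) is doing real work in Prop. 49, the following computation (our own illustration, with a hypothetical model not taken from the text) exhibits a non-supplemented model on which \(\bullet(p\vee\nabla p)\rightarrow(\bullet p\vee\nabla p)\) fails.

```python
# A two-state model violating (s): N(s) contains {s} but not its superset
# {s, t}; N(t) is empty. (Hypothetical example data.)
S = {'s', 't'}
N = {'s': {frozenset({'s'})}, 't': set()}
P = frozenset({'s'})  # the denotation of p

def nabla(X):
    """States ignorant whether X: neither X nor its complement is a neighborhood."""
    return frozenset(u for u in S if X not in N[u] and frozenset(S - X) not in N[u])

def bullet(X):
    """States Fitchean-ignorant of X: X holds there, but X is not a neighborhood."""
    return frozenset(u for u in X if X not in N[u])

lhs = bullet(P | nabla(P))   # extension of: bullet(p or nabla p)
rhs = bullet(P) | nabla(P)   # extension of: bullet(p) or nabla(p)
print(lhs - rhs)             # frozenset({'s'}): the implication fails at s
```

The failure occurs at \(s\): there \(p^{\mathcal{M}}=\{s\}\in N(s)\), so both \(\bullet p\) and \(\nabla p\) fail, while \((p\vee\nabla p)^{\mathcal{M}}=\{s,t\}\) is not a neighborhood of \(s\) precisely because \(N(s)\) is not closed under supersets.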
The following result says that if one is ignorant of the fact that either non-ignorance of \(\varphi\) or non-ignorance whether \(\varphi\) holds implies that \(\varphi\), then one is ignorant of the fact that \(\varphi\). **Proposition 50**.: \(\bullet(\circ\varphi\vee\Delta\varphi\rightarrow\varphi)\rightarrow\bullet\varphi\) is provable in \(\mathbf{M}^{\nabla\bullet}\). Proof.: By Thm. 48, it remains only to prove that the formula is valid over the class of \((s)\)-frames. Let \(\mathcal{M}=\langle S,N,V\rangle\) be an \((s)\)-model and \(s\in S\). Assume, for reductio, that \(\mathcal{M},s\vDash\bullet(\circ\varphi\vee\Delta\varphi\rightarrow\varphi)\) and \(\mathcal{M},s\nvDash\bullet\varphi\). The former implies \(\mathcal{M},s\vDash\circ\varphi\vee\Delta\varphi\rightarrow\varphi\) and \((\circ\varphi\vee\Delta\varphi\rightarrow\varphi)^{\mathcal{M}}\notin N(s)\); the latter entails that \(\mathcal{M},s\vDash\circ\varphi\). Then \(\mathcal{M},s\vDash\varphi\), and thus \(\varphi^{\mathcal{M}}\in N(s)\). One may easily verify that \(\varphi^{\mathcal{M}}\subseteq(\circ\varphi\vee\Delta\varphi\rightarrow\varphi)^{\mathcal{M}}\). Then by \((s)\), we conclude that \((\circ\varphi\vee\Delta\varphi\rightarrow\varphi)^{\mathcal{M}}\in N(s)\): a contradiction. #### 5.2.4 Regular logic Define \(\mathbf{R}^{\nabla\bullet}:=\mathbf{M}^{\nabla\bullet}+\text{R1}+\text{R2}\), where \[\begin{array}{ll}\text{R1}&\Delta\varphi\wedge\Delta\psi\to\Delta(\varphi\wedge\psi)\\ \text{R2}&\circ\varphi\wedge\circ\psi\to\circ(\varphi\wedge\psi)\end{array}\] **Proposition 51**.: \(\mathbf{R}^{\nabla\bullet}\) is sound with respect to the class of quasi-filters. Proof.: By soundness of \(\mathbf{M}^{\nabla\bullet}\), it remains to prove the validity of R1 and R2. The validity of R1 has been shown in [8, Prop. 3(iv)], and the validity of R2 has been shown in [11, Thm. 5.2]. **Proposition 52**.: Let \(\Lambda\) extend \(\mathbf{R}^{\nabla\bullet}\). Then the minimal canonical model \(\mathcal{M}_{0}^{\Lambda}\) has the property \((i)\). As a corollary, its supplementation is a quasi-filter. Proof.: Suppose that \(X,Y\in N_{0}^{\Lambda}(s)\), to show that \(X\cap Y\in N_{0}^{\Lambda}(s)\). By supposition, there exist \(\varphi\) and \(\chi\) such that \(X=|\varphi|\) and \(Y=|\chi|\), and then \(\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\psi)\in s\) for all \(\psi\), and \(\Delta(\chi\vee\psi)\wedge\circ(\chi\vee\psi)\in s\) for all \(\psi\). By axioms R1 and R2, we can obtain that \(\Delta((\varphi\wedge\chi)\vee\psi)\wedge\circ((\varphi\wedge\chi)\vee\psi)\in s\) for all \(\psi\), which implies that \(|\varphi\wedge\chi|\in N_{0}^{\Lambda}(s)\), that is, \(X\cap Y\in N_{0}^{\Lambda}(s)\). **Theorem 53**.: \(\mathbf{R}^{\nabla\bullet}\) is sound and strongly complete with respect to the class of quasi-filters. #### 5.2.5 \(\mathbf{K}^{\nabla\bullet}\) Define \(\mathbf{K}^{\nabla\bullet}:=\mathbf{R}^{\nabla\bullet}+\circ\top\). Again, as in the case of \(\mathbf{EN}^{\nabla\bullet}\) (Sec. 5.2.2), \(\Delta\top\) is derivable from \(\circ\top\) and Prop. 27. This suggests that the inference rule R1 in [7, Def. 12] is actually dispensable. (Fact 13 therein is derivable from axiom A1 and axiom A6. Then by R2, we have \(\vdash\varphi\) implies \(\vdash\circ\varphi\wedge\varphi\), and then \(\vdash\Delta\varphi\). Thus we derive R1 there.) **Theorem 54**.: \(\mathbf{K}^{\nabla\bullet}\) is sound and strongly complete with respect to the class of filters. Proof.: For soundness, by Prop.
51, it suffices to show the validity of \(\circ\top\) over the class of filters. This follows immediately from Prop. 22. For completeness, define \(\mathcal{M}_{0}^{\Lambda}\) as before w.r.t. \(\mathbf{K}^{\nabla\bullet}\). By Prop. 52 and Prop. 46, it remains only to show that \(N_{0}^{\Lambda}(s)\) possesses \((n)\). By \(\circ\top\) and the derivable formula \(\Delta\top\), we have \(\vdash\circ\top\) and \(\vdash\Delta\top\). Then by \(\vdash(\top\vee\psi)\leftrightarrow\top\), \(\mathrm{RE}\nabla\), \(\mathrm{RE}\bullet\), Def. \(\Delta\) and Def. \(\circ\), we infer that for all \(s\in S^{\Lambda}\), \(\Delta(\top\vee\psi)\wedge\circ(\top\vee\psi)\in s\) for all \(\psi\), and thus \(|\top|\in N_{0}^{\Lambda}(s)\), that is, \(S\in N_{0}^{\Lambda}(s)\), as desired. Inspired by the definition of \(N^{\Lambda}\), one may define the canonical relation for the extensions of \(\mathbf{K}^{\nabla\bullet}\) as follows: \(sR^{N}t\) iff for all \(\varphi\), if \(\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\psi)\in s\) for all \(\psi\), then \(\varphi\in t\). Recall that the original definition of canonical relation given in [7, Def. 18] is as follows: \(sR^{K}t\) iff there exists \(\delta\) such that \((a)\)\(\bullet\delta\in s\), and \((b)\) for all \(\varphi\), if \(\Delta\varphi\wedge\circ(\neg\delta\to\varphi)\in s\), then \(\varphi\in t\). One may ask what the relationship between \(R^{N}\) and \(R^{K}\) is. As we shall see, they are equal to each other. Before this, we need some preparation. **Proposition 55**.: \(\vdash\bullet\delta\wedge\Delta\varphi\wedge\circ(\neg\delta\to\varphi)\to\Delta( \varphi\vee\psi)\wedge\circ(\varphi\vee\chi)\) Proof.: By Thm. 54, it remains only to show that this formula is valid over the class of filters. Let \(\mathcal{M}=\langle S,N,V\rangle\) be a filter and \(s\in S\). Suppose that \(\mathcal{M},s\vDash\bullet\delta\wedge\Delta\varphi\wedge\circ(\neg\delta\to\varphi)\); we show \(\mathcal{M},s\vDash\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\chi)\). By \(\mathcal{M},s\vDash\bullet\delta\), we have \(\mathcal{M},s\vDash\delta\) and \(\delta^{\mathcal{M}}\notin N(s)\). By \(\mathcal{M},s\vDash\Delta\varphi\), we infer that \(\varphi^{\mathcal{M}}\in N(s)\) or \(S\backslash\varphi^{\mathcal{M}}\in N(s)\). Since \(\mathcal{M},s\vDash\delta\), we derive that \(\mathcal{M},s\vDash\neg\delta\to\varphi\). Then by \(\mathcal{M},s\vDash\circ(\neg\delta\to\varphi)\), we get \((\neg\delta\to\varphi)^{\mathcal{M}}\in N(s)\), that is, \(\delta^{\mathcal{M}}\cup\varphi^{\mathcal{M}}\in N(s)\). If \(S\backslash\varphi^{\mathcal{M}}\in N(s)\), then as \(N(s)\) has the property \((i)\), \((\delta^{\mathcal{M}}\cup\varphi^{\mathcal{M}})\cap(S\backslash\varphi^{ \mathcal{M}})\in N(s)\), viz. \(\delta^{\mathcal{M}}\cap(S\backslash\varphi^{\mathcal{M}})\in N(s)\). Since \(N(s)\) possesses the property \((s)\) and \(\delta^{\mathcal{M}}\cap(S\backslash\varphi^{\mathcal{M}})\subseteq\delta^{ \mathcal{M}}\), it follows that \(\delta^{\mathcal{M}}\in N(s)\): a contradiction. This entails that \(S\backslash\varphi^{\mathcal{M}}\notin N(s)\), and thus \(\varphi^{\mathcal{M}}\in N(s)\). Note that \(\varphi^{\mathcal{M}}\subseteq\varphi^{\mathcal{M}}\cup\psi^{\mathcal{M}}=( \varphi\vee\psi)^{\mathcal{M}}\) and \(\varphi^{\mathcal{M}}\subseteq\varphi^{\mathcal{M}}\cup\chi^{\mathcal{M}}=( \varphi\vee\chi)^{\mathcal{M}}\).
Using \((s)\) again, we conclude that \((\varphi\vee\psi)^{\mathcal{M}}\in N(s)\) and \((\varphi\vee\chi)^{\mathcal{M}}\in N(s)\), and therefore \(\mathcal{M},s\vDash\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\chi)\), as desired. **Proposition 56**.: Let \(\Lambda\) be an extension of \(\mathbf{K}^{\nabla\bullet}\). Then for all \(s,t\in S^{\Lambda}\), \(sR^{N}t\) iff \(sR^{K}t\). Proof.: Suppose that \(sR^{N}t\); we show that \(sR^{K}t\). By supposition, for all \(\varphi\), if \(\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\psi)\in s\) for all \(\psi\), then \(\varphi\in t\). Letting \(\varphi=\bot\), we can infer that \(\Delta\psi\wedge\circ\psi\notin s\) for some \(\psi\). If \(\Delta\psi\notin s\), that is, \(\nabla\psi\in s\), then by axiom E3, we derive that \(\bullet\psi\in s\) or \(\bullet\neg\psi\in s\). If \(\circ\psi\notin s\), then we have \(\bullet\psi\in s\). Either case implies that \(\bullet\delta\in s\) for some \(\delta\). Now suppose for any \(\varphi^{\prime}\) that \(\Delta\varphi^{\prime}\wedge\circ(\neg\delta\to\varphi^{\prime})\in s\). By Prop. 55, we infer that \(\Delta(\varphi^{\prime}\vee\chi)\wedge\circ(\varphi^{\prime}\vee\chi)\in s\) for all \(\chi\). Then by supposition again, we conclude that \(\varphi^{\prime}\in t\). Therefore, \(sR^{K}t\). Conversely, assume that \(sR^{K}t\); then there exists \(\delta\) such that \((a)\)\(\bullet\delta\in s\), and \((b)\) for all \(\varphi\), if \(\Delta\varphi\wedge\circ(\neg\delta\to\varphi)\in s\), then \(\varphi\in t\). It remains to prove that \(sR^{N}t\). For this, suppose for any \(\varphi\) that \(\Delta(\varphi\vee\psi)\wedge\circ(\varphi\vee\psi)\in s\) for all \(\psi\). By letting \(\psi=\bot\), we obtain that \(\Delta\varphi\in s\); by letting \(\psi=\delta\), we infer that \(\circ(\neg\delta\to\varphi)\in s\). Thus \(\Delta\varphi\wedge\circ(\neg\delta\to\varphi)\in s\). Then by \((b)\), we conclude that \(\varphi\in t\), and therefore \(sR^{N}t\), as desired. ## 6 Updating neighborhood models In this section, we extend the previous results to the dynamic case of public announcements. Syntactically, we add the constructor \([\psi]\varphi\) into the previous languages \(\mathcal{L}(\nabla)\), \(\mathcal{L}(\bullet)\) and \(\mathcal{L}(\nabla,\bullet)\), and denote the obtained extensions by \(\mathcal{L}(\nabla,[\cdot])\), \(\mathcal{L}(\bullet,[\cdot])\), \(\mathcal{L}(\nabla,\bullet,[\cdot])\), respectively. \([\psi]\varphi\) is read "after every truthful public announcement of \(\psi\), \(\varphi\) holds". Also, as usual, \(\langle\psi\rangle\varphi\) abbreviates \(\neg[\psi]\neg\varphi\). Semantically, we adopt the intersection semantics in the literature (e.g. [21, 37, 22]). In detail, given a monotone neighborhood model \(\mathcal{M}=\langle S,N,V\rangle\) and a state \(s\in S\), \[\mathcal{M},s\vDash[\psi]\varphi\quad\Longleftrightarrow\quad\mathcal{M},s \vDash\psi\text{ implies }\mathcal{M}^{\cap\psi},s\vDash\varphi\] where \(\mathcal{M}^{\cap\psi}\) is the intersection submodel \(\mathcal{M}^{\cap\psi^{\mathcal{M}}}\), and the notion of intersection submodels is defined as below. **Definition 57**.: [21, Def. 3] Let \(\mathcal{M}=\langle S,N,V\rangle\) be a monotone model, and \(X\) a nonempty subset of \(S\). Define _the intersection model_\(\mathcal{M}^{\cap X}=\langle X,N^{\cap X},V^{X}\rangle\) induced from \(X\) in the following. * for every \(s\in X\), \(N^{\cap X}(s)=\{Y\mid Y=P\cap X\text{ for some }P\in N(s)\}\), * \(V^{X}(p)=V(p)\cap X\). **Proposition 58**.: [21, Prop.
2] The neighborhood property \((s)\) is preserved under taking the intersection submodel. That is, if \(\mathcal{M}\) is a monotone neighborhood model with the domain \(S\), then for any nonempty subset \(X\) of \(S\), the intersection submodel \(\mathcal{M}^{\cap X}\) is also monotone. The following lists the reduction axioms of \(\mathcal{L}(\nabla,\bullet,[\cdot])\) and its sublanguages \(\mathcal{L}(\nabla,[\cdot])\) and \(\mathcal{L}(\bullet,[\cdot])\) under the intersection semantics. \[\begin{array}{ll}\text{AP}&[\psi]p\leftrightarrow(\psi\to p)\\ \text{AN}&[\psi]\neg\varphi\leftrightarrow(\psi\to\neg[\psi]\varphi)\\ \text{AC}&[\psi](\varphi\land\chi)\leftrightarrow([\psi]\varphi\land[\psi] \chi)\\ \text{AA}&[\psi][\chi]\varphi\leftrightarrow[\psi\land[\psi]\chi]\varphi\\ \text{A}\nabla&[\psi]\nabla\varphi\leftrightarrow(\psi\to\nabla[\psi]\varphi\land \nabla[\psi]\neg\varphi)\\ \text{A}\bullet&[\psi]\bullet\varphi\leftrightarrow(\psi\to\bullet[\psi] \varphi)\end{array}\] The following reduction axioms are derivable from the above reduction axioms. \[\begin{array}{ll}\text{A}\Delta&[\psi]\Delta\varphi\leftrightarrow(\psi\to \Delta[\psi]\varphi\lor\Delta[\psi]\neg\varphi)\\ \text{A}\circ&[\psi]\circ\varphi\leftrightarrow(\psi\to\circ[\psi]\varphi) \end{array}\] **Theorem 59**.: Let \(\Lambda\) be a system of \(\mathcal{L}(\nabla)\) (resp. \(\mathcal{L}(\bullet)\), \(\mathcal{L}(\nabla,\bullet)\)). If \(\Lambda\) is sound and strongly complete with respect to the class of monotone neighborhood frames, then so is \(\Lambda\) plus AP, AN, AC, AA and A\(\nabla\) (resp. plus AP, AN, AC, AA and A\(\bullet\), plus AP, AN, AC, AA, A\(\nabla\) and A\(\bullet\)) under the intersection semantics. Proof.: The validity of axioms AP, AN, AC, AA can be found in [21, Thm. 1], [22, Thm. 2, Thm. 3] and [37, Prop. 3.1]. The validity of A\(\bullet\) has been shown in [11], where the axiom is named A\(\bullet\)Int. The validity of A\(\nabla\) is shown as follows. Let \(\mathcal{M}=\langle S,N,V\rangle\) be an \((s)\)-model and \(s\in S\). To begin with, suppose that \(\mathcal{M},s\vDash[\psi]\nabla\varphi\) and \(\mathcal{M},s\vDash\psi\); we show that \(\mathcal{M},s\vDash\nabla[\psi]\varphi\land\nabla[\psi]\neg\varphi\). By supposition, we have \(\mathcal{M}^{\cap\psi},s\vDash\nabla\varphi\), which implies \(\varphi^{\mathcal{M}^{\cap\psi}}\notin N^{\cap\psi}(s)\) and \(\psi^{\mathcal{M}}\backslash\varphi^{\mathcal{M}^{\cap\psi}}\notin N^{\cap \psi}(s)\). We claim that \(\mathcal{M},s\vDash\nabla[\psi]\varphi\), that is, \(([\psi]\varphi)^{\mathcal{M}}\notin N(s)\) and \(S\backslash([\psi]\varphi)^{\mathcal{M}}\notin N(s)\). If \(([\psi]\varphi)^{\mathcal{M}}\in N(s)\), then \(([\psi]\varphi)^{\mathcal{M}}\cap\psi^{\mathcal{M}}\in N^{\cap\psi}(s)\). As \(([\psi]\varphi)^{\mathcal{M}}\cap\psi^{\mathcal{M}}\subseteq\varphi^{\mathcal{M }^{\cap\psi}}\), by \((s)\), we have \(\varphi^{\mathcal{M}^{\cap\psi}}\in N^{\cap\psi}(s)\): a contradiction. If \(S\backslash([\psi]\varphi)^{\mathcal{M}}\in N(s)\), then \((S\backslash([\psi]\varphi)^{\mathcal{M}})\cap\psi^{\mathcal{M}}\in N^{\cap \psi}(s)\). Note that \((S\backslash([\psi]\varphi)^{\mathcal{M}})\cap\psi^{\mathcal{M}}\subseteq \psi^{\mathcal{M}}\backslash\varphi^{\mathcal{M}^{\cap\psi}}\): for any \(x\in(S\backslash([\psi]\varphi)^{\mathcal{M}})\cap\psi^{\mathcal{M}}\), \(x\notin([\psi]\varphi)^{\mathcal{M}}\), thus \(x\in\psi^{\mathcal{M}}\) and \(x\notin\varphi^{\mathcal{M}^{\cap\psi}}\), and hence \(x\in\psi^{\mathcal{M}}\backslash\varphi^{\mathcal{M}^{\cap\psi}}\).
By \((s)\) again, \(\psi^{\mathcal{M}}\backslash\varphi^{\mathcal{M}^{\cap\psi}}\in N^{\cap\psi}(s)\): a contradiction again. We also claim that \(\mathcal{M},s\vDash\nabla[\psi]\neg\varphi\), that is, \(([\psi]\neg\varphi)^{\mathcal{M}}\notin N(s)\) and \(S\backslash([\psi]\neg\varphi)^{\mathcal{M}}\notin N(s)\). If \(([\psi]\neg\varphi)^{\mathcal{M}}\in N(s)\), then \(([\psi]\neg\varphi)^{\mathcal{M}}\cap\psi^{\mathcal{M}}\in N^{\cap\psi}(s)\). As \(([\psi]\neg\varphi)^{\mathcal{M}}\cap\psi^{\mathcal{M}}\subseteq\psi^{ \mathcal{M}}\backslash\varphi^{\mathcal{M}^{\cap\psi}}\), we infer by \((s)\) that \(\psi^{\mathcal{M}}\backslash\varphi^{\mathcal{M}^{\cap\psi}}\in N^{\cap\psi}(s)\): a contradiction. If \(S\backslash([\psi]\neg\varphi)^{\mathcal{M}}\in N(s)\), then \((S\backslash([\psi]\neg\varphi)^{\mathcal{M}})\cap\psi^{\mathcal{M}}\in N^{ \cap\psi}(s)\). Since \((S\backslash([\psi]\neg\varphi)^{\mathcal{M}})\cap\psi^{\mathcal{M}}\subseteq \varphi^{\mathcal{M}^{\cap\psi}}\), by \((s)\) again, we derive that \(\varphi^{\mathcal{M}^{\cap\psi}}\in N^{\cap\psi}(s)\): a contradiction again. Conversely, assume that \(\mathcal{M},s\vDash\psi\rightarrow\nabla[\psi]\varphi\wedge\nabla[\psi]\neg\varphi\); we prove that \(\mathcal{M},s\vDash[\psi]\nabla\varphi\). For this, we suppose that \(\mathcal{M},s\vDash\psi\); it remains only to show that \(\mathcal{M}^{\cap\psi},s\vDash\nabla\varphi\), that is, \(\varphi^{\mathcal{M}^{\cap\psi}}\notin N^{\cap\psi}(s)\) and \(\psi^{\mathcal{M}}\backslash\varphi^{\mathcal{M}^{\cap\psi}}\notin N^{\cap \psi}(s)\). By assumption and supposition, we obtain \(\mathcal{M},s\vDash\nabla[\psi]\varphi\wedge\nabla[\psi]\neg\varphi\). It follows that \(([\psi]\varphi)^{\mathcal{M}}\notin N(s)\) and \(S\backslash([\psi]\varphi)^{\mathcal{M}}\notin N(s)\), and \(([\psi]\neg\varphi)^{\mathcal{M}}\notin N(s)\) and \(S\backslash([\psi]\neg\varphi)^{\mathcal{M}}\notin N(s)\). We claim that \(\varphi^{\mathcal{M}^{\cap\psi}}\notin N^{\cap\psi}(s)\). Otherwise, that is, if \(\varphi^{\mathcal{M}^{\cap\psi}}\in N^{\cap\psi}(s)\), we have \(\varphi^{\mathcal{M}^{\cap\psi}}=P\cap\psi^{\mathcal{M}}\) for some \(P\in N(s)\). This implies that \(P\subseteq([\psi]\varphi)^{\mathcal{M}}\): for any \(x\in P\), we would have \(x\in([\psi]\varphi)^{\mathcal{M}}\), since if \(x\in\psi^{\mathcal{M}}\), then \(x\in P\cap\psi^{\mathcal{M}}=\varphi^{\mathcal{M}^{\cap\psi}}\). By \((s)\), \(([\psi]\varphi)^{\mathcal{M}}\in N(s)\): a contradiction. We also claim that \(\psi^{\mathcal{M}}\backslash\varphi^{\mathcal{M}^{\cap\psi}}\notin N^{\cap \psi}(s)\). Otherwise, that is, if \(\psi^{\mathcal{M}}\backslash\varphi^{\mathcal{M}^{\cap\psi}}\in N^{\cap\psi}(s)\), we infer that \(\psi^{\mathcal{M}}\backslash\varphi^{\mathcal{M}^{\cap\psi}}=P\cap\psi^{ \mathcal{M}}\) for some \(P\in N(s)\). It then follows that \(P\subseteq([\psi]\neg\varphi)^{\mathcal{M}}\): for any \(x\in P\), we have \(x\in([\psi]\neg\varphi)^{\mathcal{M}}\), since if \(x\in\psi^{\mathcal{M}}\), then \(x\in P\cap\psi^{\mathcal{M}}=\psi^{\mathcal{M}}\backslash\varphi^{\mathcal{M}^{ \cap\psi}}\), and so \(x\in(\neg\varphi)^{\mathcal{M}^{\cap\psi}}\). By \((s)\) again, \(([\psi]\neg\varphi)^{\mathcal{M}}\in N(s)\): a contradiction, as desired. For the sake of reference, we use \(\mathbf{M}^{\nabla\bullet[\cdot]}\) to denote the extension of \(\mathbf{M}^{\nabla\bullet}\) with all the above reduction axioms.
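Both Definition 57 and the reduction axioms are directly executable, which gives a cheap end-to-end consistency check of the intersection semantics. The sketch below (our own code and naming, reusing the model utilities from the earlier snippet) implements the intersection submodel, the announcement-eliminating rewriting induced by the reduction axioms, and an exhaustive verification of A\(\nabla\) on monotone two-state models.

```python
def intersection_submodel(model, X):
    """Intersection submodel M^{∩X} of Definition 57: restrict the domain
    to X, intersect every neighborhood with X, restrict the valuation.
    By Proposition 58, monotonicity is preserved under this operation."""
    S, N, V = model
    X = frozenset(X)
    assert X and X <= S, "X must be a nonempty subset of the domain"
    return (X,
            {s: {P & X for P in N[s]} for s in X},
            {q: ext & X for q, ext in V.items()})

def reduce_announcement(psi, f):
    """Rewrite [psi]f into the announcement-free language via AP, AN, AC,
    A∇ and A• (psi is assumed announcement-free; nested announcements
    would be handled innermost-first via AA, omitted here)."""
    op = f[0]
    if op == 'atom':    # AP
        return ('imp', psi, f)
    if op == 'not':     # AN
        return ('imp', psi, ('not', reduce_announcement(psi, f[1])))
    if op == 'and':     # AC
        return ('and', reduce_announcement(psi, f[1]),
                reduce_announcement(psi, f[2]))
    if op == 'or':      # disjunction clause, derivable (denoted AD later)
        return ('or', reduce_announcement(psi, f[1]),
                reduce_announcement(psi, f[2]))
    if op == 'nabla':   # A∇
        return ('imp', psi,
                ('and', ('nabla', reduce_announcement(psi, f[1])),
                 ('nabla', reduce_announcement(psi, ('not', f[1])))))
    if op == 'bullet':  # A•
        return ('imp', psi, ('bullet', reduce_announcement(psi, f[1])))
    raise ValueError(op)

# Exhaustive check of A∇ on monotone two-state models: the truth set of
# [psi]∇p computed by model update equals that of its reduction.
for psi in [p, ('nabla', p)]:
    for Ns in product(powerset(powerset(S)), repeat=2):
        N = {s: set(Ns[s]) for s in S}
        if not monotone(S, N):
            continue
        M = (S, N, V)
        X = extension(M, psi)
        lhs = frozenset(
            s for s in S
            if s not in X
            or s in extension(intersection_submodel(M, X), ('nabla', p)))
        assert lhs == extension(M, reduce_announcement(psi, ('nabla', p)))
```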
By dropping \(\mathrm{A}\bullet\) from \(\mathbf{M}^{\nabla\bullet[\cdot]}\), we obtain the system \(\mathbf{M}^{\nabla[\cdot]}\); by dropping \(\mathrm{A}\nabla\) from \(\mathbf{M}^{\nabla\bullet[\cdot]}\), we obtain the system \(\mathbf{M}^{\bullet[\cdot]}\). In what follows, we will focus on some successful formulas in our languages. A formula is said to be _successful_ if it still holds after being announced; in symbols, \(\vDash[\varphi]\varphi\). Recall that \(\neg\bullet p\) is shown to be successful under the relational semantics in [7, Prop. 39] and under the intersection semantics in [11, Prop. 6.5]. We will follow this line of research and say much more. As we shall show, any combination of \(p\), \(\neg p\), \(\neg\bullet p\), and \(\neg\nabla p\) via conjunction (or, via disjunction) is successful under the intersection semantics.11 Footnote 11: It is shown in [7, Prop. 38] that under Kripke semantics, \(\bullet p\) is self-refuting and \(\neg\bullet p\) is successful. To begin with, we show that, provably, any combination of \(p\), \(\neg p\), \(\neg\bullet p\), and \(\neg\nabla p\) via _conjunction_ is successful under the intersection semantics. **Proposition 60**.: \(p\) is successful under the intersection semantics. That is, \([p]p\) is provable in \(\mathbf{M}^{\bullet[\cdot]}\). Proof.: Straightforward by \(\mathrm{AP}\). **Proposition 61**.: \(\neg p\) is successful under the intersection semantics. That is, \([\neg p]\neg p\) is provable in \(\mathbf{M}^{\bullet[\cdot]}\). Proof.: Straightforward by \(\mathrm{AN}\) and \(\mathrm{AP}\). **Proposition 62**.: \(\neg\bullet p\) is successful under the intersection semantics. That is, \([\neg\bullet p]\neg\bullet p\) is provable in \(\mathbf{M}^{\bullet[\cdot]}\). Proof.: Refer to [11, Prop. 6.5]. **Proposition 63**.: \(\neg\nabla p\) is successful under the intersection semantics. That is, \([\neg\nabla p]\neg\nabla p\) is provable in \(\mathbf{M}^{\nabla[\cdot]}\). Proof.: We have the following proof sequence in \(\mathbf{M}^{\nabla[\cdot]}\). \[\begin{array}{lll}&[\neg\nabla p]\neg\nabla p&\\ \leftrightarrow&(\neg\nabla p\rightarrow\neg[\neg\nabla p]\nabla p)&\text{AN}\\ \leftrightarrow&(\neg\nabla p\rightarrow\neg(\neg\nabla p\rightarrow\nabla[\neg\nabla p]p\wedge\nabla[\neg\nabla p]\neg p))&\text{A}\nabla\\ \leftrightarrow&(\neg\nabla p\rightarrow\neg(\nabla[\neg\nabla p]p\wedge\nabla[\neg\nabla p]\neg p))&\text{TAUT}\\ \leftrightarrow&(\neg\nabla p\rightarrow\neg(\nabla(\neg\nabla p\to p)\wedge\nabla(\neg\nabla p\to\neg p)))&\text{AP},\text{AN},\text{RE}\nabla\\ \leftrightarrow&(\nabla(\neg\nabla p\to p)\wedge\nabla(\neg\nabla p\to\neg p)\to\nabla p)&\text{TAUT}\end{array}\] By axiom M1 and \(\mathrm{RE}\nabla\), the last formula is provable in \(\mathbf{M}^{\nabla}\) (exactly as in the proof of Prop. 80 below), and thus \([\neg\nabla p]\neg\nabla p\) is provable in \(\mathbf{M}^{\nabla[\cdot]}\). **Proposition 64**.: \(p\wedge\neg p\) is successful under the intersection semantics. Proof.: By AC, AN and AP, \([p\wedge\neg p](p\wedge\neg p)\) is equivalent to \((p\wedge\neg p\to p)\wedge(p\wedge\neg p\to\neg(p\wedge\neg p\to p))\), both conjuncts of which are tautologies. **Proposition 65**.: \(p\wedge\neg\bullet p\) is successful under the intersection semantics. That is, \([p\wedge\neg\bullet p](p\wedge\neg\bullet p)\) is provable in \(\mathbf{M}^{\bullet[\cdot]}\). Proof.: Similar to the \(\bullet\)-part of the proof of Prop. 72 below: by AC, AN, A\(\bullet\), AP and \(\mathrm{RE}\bullet\), the formula reduces to \(p\wedge\circ p\to\circ\top\), which is provable by Prop. 40. **Proposition 66**.: \(p\land\neg\nabla p\) is successful under the intersection semantics. That is, \([p\land\neg\nabla p](p\land\neg\nabla p)\) is provable in \(\mathbf{M}^{\nabla[\cdot]}\). Proof.: We have the following proof sequence in \(\mathbf{M}^{\nabla[\cdot]}\). \[\begin{array}{lll}&[p\land\neg\nabla p](p\land\neg\nabla p)&\\ \leftrightarrow&([p\land\neg\nabla p]p\land[p\land\neg\nabla p]\neg\nabla p)&\text{AC}\\ \leftrightarrow&(p\land\neg\nabla p\to p)\land(p\land\neg\nabla p\to\neg[p\land\neg\nabla p]\nabla p)&\text{AP},\text{AN}\\ \leftrightarrow&(p\land\neg\nabla p\to\neg(p\land\neg\nabla p\to\nabla[p\land\neg\nabla p]p\land\nabla[p\land\neg\nabla p]\neg p))&\text{TAUT},\text{A}\nabla\\ \leftrightarrow&(p\land\neg\nabla p\to\neg(\nabla[p\land\neg\nabla p]p\land\nabla[p\land\neg\nabla p]\neg p))&\text{TAUT}\\ \leftrightarrow&(p\land\neg\nabla p\to\neg(\nabla\top\land\nabla[p\land\neg\nabla p]\neg p))&\text{AP},\text{RE}\nabla\\ \leftrightarrow&(p\land\neg\nabla p\to\neg\nabla\top\lor\neg\nabla(p\land\neg\nabla p\to\neg p))&\text{TAUT},\text{AN},\text{AP},\text{RE}\nabla\\ \leftrightarrow&(p\land\Delta p\to\Delta\top\lor\Delta(p\land\Delta p\to\neg p))&\text{Def. }\Delta\end{array}\] By Prop.
39, \(\Delta p\to\Delta\top\) is provable in \(\mathbf{M}^{\nabla}\), so is the last formula in the above proof sequence, and thus \([p\land\neg\nabla p](p\land\neg\nabla p)\) is provable in \(\mathbf{M}^{\nabla[\cdot]}\). **Proposition 67**.: \(\neg p\land\neg\bullet p\) is successful under the intersection semantics. Proof.: By E2, \(\neg p\land\neg\bullet p\) is equivalent to \(\neg p\). And we already know from Prop. 61 that \(\neg p\) is successful. **Proposition 68**.: \(\neg p\land\neg\nabla p\) is successful under the intersection semantics. That is, \([\neg p\land\neg\nabla p](\neg p\land\neg\nabla p)\) is provable in \(\mathbf{M}^{\nabla[\cdot]}\). Proof.: We have the following proof sequence in \(\mathbf{M}^{\nabla[\cdot]}\). \[\begin{array}{lll}&[\neg p\land\neg\nabla p](\neg p\land\neg\nabla p)&\\ \leftrightarrow&[\neg p\land\neg\nabla p]\neg p\land[\neg p\land\neg\nabla p]\neg\nabla p&\text{AC}\\ \leftrightarrow&(\neg p\land\neg\nabla p\to\neg p)\land(\neg p\land\neg\nabla p\to\neg[\neg p\land\neg\nabla p]\nabla p)&\text{AN},\text{AP}\\ \leftrightarrow&(\neg p\land\neg\nabla p\to\neg[\neg p\land\neg\nabla p]\nabla p)&\text{TAUT}\\ \leftrightarrow&(\neg p\land\neg\nabla p\to\neg(\nabla[\neg p\land\neg\nabla p]p\wedge\nabla[\neg p\land\neg\nabla p]\neg p))&\text{A}\nabla,\text{TAUT}\\ \leftrightarrow&(\neg p\land\Delta p\to\Delta(\neg p\land\Delta p\to p)\lor\Delta\top)&\text{AP},\text{AN},\text{RE}\nabla,\text{Def. }\Delta\\ \leftrightarrow&\top&\text{Prop. 39}\end{array}\] Therefore, \([\neg p\land\neg\nabla p](\neg p\land\neg\nabla p)\) is provable in \(\mathbf{M}^{\nabla[\cdot]}\). **Proposition 69**.: \(\neg\bullet p\wedge\neg\nabla p\) is successful under the intersection semantics. That is, \([\neg\bullet p\wedge\neg\nabla p](\neg\bullet p\wedge\neg\nabla p)\) is provable in \(\mathbf{M}^{\nabla\bullet[\cdot]}\). Proof.:
We have the following proof sequence in \(\mathbf{M}^{\nabla\bullet[\cdot]}\). \[\begin{array}{lll}&[\neg\bullet p\wedge\neg\nabla p]\neg\bullet p&\\ \leftrightarrow&(\neg\bullet p\wedge\neg\nabla p\rightarrow\neg[\neg\bullet p\wedge\neg\nabla p]\bullet p)&\text{AN}\\ \leftrightarrow&(\neg\bullet p\wedge\neg\nabla p\rightarrow\neg(\neg\bullet p\wedge\neg\nabla p\rightarrow\bullet[\neg\bullet p\wedge\neg\nabla p]p))&\text{A}\bullet\\ \leftrightarrow&(\neg\bullet p\wedge\neg\nabla p\rightarrow\neg\bullet[\neg\bullet p\wedge\neg\nabla p]p)&\text{TAUT}\\ \leftrightarrow&(\neg\bullet p\wedge\neg\nabla p\rightarrow\neg\bullet(\neg\bullet p\wedge\neg\nabla p\to p))&\text{AP},\text{RE}\bullet\\ \leftrightarrow&(\bullet(\neg\bullet p\wedge\neg\nabla p\to p)\rightarrow(\bullet p\vee\nabla p))&\text{TAUT}\\ \leftrightarrow&(\bullet(\bullet p\vee\nabla p\lor p)\rightarrow(\bullet p\vee\nabla p))&\text{TAUT},\text{RE}\bullet\\ \leftrightarrow&(\bullet(p\vee\nabla p)\rightarrow(\bullet p\vee\nabla p))&\text{E2},\text{RE}\bullet\\ \leftrightarrow&\top&\text{Prop. 49}\end{array}\] \[\begin{array}{lll}&[\neg\bullet p\wedge\neg\nabla p]\neg\nabla p&\\ \leftrightarrow&(\neg\bullet p\wedge\neg\nabla p\rightarrow\neg[\neg\bullet p\wedge\neg\nabla p]\nabla p)&\text{AN}\\ \leftrightarrow&(\neg\bullet p\wedge\neg\nabla p\rightarrow\neg(\neg\bullet p\wedge\neg\nabla p\rightarrow\nabla[\neg\bullet p\wedge\neg\nabla p]p\wedge\nabla[\neg\bullet p\wedge\neg\nabla p]\neg p))&\text{A}\nabla\\ \leftrightarrow&(\neg\bullet p\wedge\neg\nabla p\rightarrow\neg(\nabla[\neg\bullet p\wedge\neg\nabla p]p\wedge\nabla[\neg\bullet p\wedge\neg\nabla p]\neg p))&\text{TAUT}\\ \leftrightarrow&(\neg\bullet p\wedge\neg\nabla p\rightarrow\neg(\nabla(\neg\bullet p\wedge\neg\nabla p\to p)\wedge\nabla(\neg\bullet p\wedge\neg\nabla p\rightarrow\neg p)))&\text{AP},\text{AN},\text{RE}\nabla\\ \leftrightarrow&(\nabla(p\vee\bullet p\vee\nabla p)\wedge\nabla(\neg p\vee\bullet p\vee\nabla p)\rightarrow(\nabla p\vee\bullet p))&\text{TAUT},\text{RE}\nabla\\ \leftrightarrow&\top&\text{M1}\end{array}\] Thus both \([\neg\bullet p\wedge\neg\nabla p]\neg\bullet p\) and \([\neg\bullet p\wedge\neg\nabla p]\neg\nabla p\) are provable in \(\mathbf{M}^{\nabla\bullet[\cdot]}\). Therefore, \([\neg\bullet p\wedge\neg\nabla p](\neg\bullet p\wedge\neg\nabla p)\) is provable in \(\mathbf{M}^{\nabla\bullet[\cdot]}\). Intuitively, \([\neg\bullet p\wedge\neg\nabla p](\neg\bullet p\wedge\neg\nabla p)\) says that after being told that one is neither ignorant whether nor ignorant of \(p\), one is still neither ignorant whether nor ignorant of \(p\). In short, one's non-ignorance whether and non-ignorance of a fact cannot be altered by being announced. The following two propositions can be shown as in Prop. 64. **Proposition 70**.: \(p\wedge\neg p\wedge\neg\bullet p\) is successful under the intersection semantics. **Proposition 71**.: \(p\wedge\neg p\wedge\neg\nabla p\) is successful under the intersection semantics. **Proposition 72**.: \(p\wedge\neg\bullet p\wedge\neg\nabla p\) is successful under the intersection semantics. That is, \([p\wedge\neg\bullet p\wedge\neg\nabla p](p\wedge\neg\bullet p\wedge\neg\nabla p)\) is provable in \(\mathbf{M}^{\nabla\bullet[\cdot]}\).
Proof.: By AC, \([p\wedge\neg\bullet p\wedge\neg\nabla p](p\wedge\neg\bullet p\wedge\neg\nabla p)\leftrightarrow([p\wedge\neg\bullet p\wedge\neg\nabla p]p\wedge[p\wedge\neg\bullet p\wedge\neg\nabla p]\neg\bullet p\wedge[p\wedge\neg\bullet p\wedge\neg\nabla p]\neg\nabla p)\). One may easily verify that \([p\wedge\neg\bullet p\wedge\neg\nabla p]p\) is provable in \(\mathbf{M}^{\nabla\bullet[\cdot]}\). It remains only to show that both \([p\wedge\neg\bullet p\wedge\neg\nabla p]\neg\bullet p\) and \([p\wedge\neg\bullet p\wedge\neg\nabla p]\neg\nabla p\) are provable in the system in question. We have the following proof sequence in \(\mathbf{M}^{\nabla\bullet[\cdot]}\). \[\begin{array}{lll}&[p\wedge\neg\bullet p\wedge\neg\nabla p]\neg\bullet p&\\ \leftrightarrow&(p\wedge\neg\bullet p\wedge\neg\nabla p\rightarrow\neg[p\wedge\neg\bullet p\wedge\neg\nabla p]\bullet p)&\text{AN}\\ \leftrightarrow&(p\wedge\neg\bullet p\wedge\neg\nabla p\rightarrow\neg\bullet[p\wedge\neg\bullet p\wedge\neg\nabla p]p)&\text{A}\bullet,\text{TAUT}\\ \leftrightarrow&(p\wedge\neg\bullet p\wedge\neg\nabla p\rightarrow\neg\bullet\top)&\text{AP},\text{RE}\bullet\\ \leftrightarrow&(p\wedge\circ p\wedge\Delta p\rightarrow\circ\top)&\text{Def. }\circ,\text{Def. }\Delta\end{array}\] By Prop. 40, \(p\wedge\circ p\to\circ\top\) is provable in \(\mathbf{M}^{\bullet}\), so is the last formula in the above proof sequence, and thus \([p\wedge\neg\bullet p\wedge\neg\nabla p]\neg\bullet p\) is provable in \(\mathbf{M}^{\nabla\bullet[\cdot]}\). Also, we have the following proof sequence in \(\mathbf{M}^{\nabla\bullet[\cdot]}\). \[\begin{array}{lll}&[p\wedge\neg\bullet p\wedge\neg\nabla p]\neg\nabla p&\\ \leftrightarrow&(p\wedge\neg\bullet p\wedge\neg\nabla p\to\neg[p\wedge\neg\bullet p\wedge\neg\nabla p]\nabla p)&\text{AN}\\ \leftrightarrow&(p\wedge\neg\bullet p\wedge\neg\nabla p\to\neg(\nabla[p\wedge\neg\bullet p\wedge\neg\nabla p]p\wedge\nabla[p\wedge\neg\bullet p\wedge\neg\nabla p]\neg p))&\text{A}\nabla,\text{TAUT}\\ \leftrightarrow&(p\wedge\circ p\wedge\Delta p\to\Delta[p\wedge\neg\bullet p\wedge\neg\nabla p]p\vee\Delta[p\wedge\neg\bullet p\wedge\neg\nabla p]\neg p)&\text{Def. }\circ,\text{Def. }\Delta\\ \leftrightarrow&(p\wedge\circ p\wedge\Delta p\to\Delta\top\vee\Delta[p\wedge\neg\bullet p\wedge\neg\nabla p]\neg p)&\text{AP},\text{RE}\nabla\end{array}\] By Prop. 39, \(\Delta p\to\Delta\top\) is provable in \(\mathbf{M}^{\nabla}\), thus the last formula in the above proof sequence is provable in \(\mathbf{M}^{\nabla\bullet}\). Therefore, \([p\wedge\neg\bullet p\wedge\neg\nabla p]\neg\nabla p\) is provable in \(\mathbf{M}^{\nabla\bullet[\cdot]}\). According to the previous analysis, \([p\wedge\neg\bullet p\wedge\neg\nabla p](p\wedge\neg\bullet p\wedge\neg\nabla p)\) is provable in \(\mathbf{M}^{\nabla\bullet[\cdot]}\). **Proposition 73**.: \(\neg p\wedge\neg\bullet p\wedge\neg\nabla p\) is successful under the intersection semantics. That is, \([\neg p\wedge\neg\bullet p\wedge\neg\nabla p](\neg p\wedge\neg\bullet p\wedge\neg\nabla p)\) is provable in \(\mathbf{M}^{\nabla\bullet[\cdot]}\). Proof.: By the proof of Prop. 67, \(\neg p\wedge\neg\bullet p\wedge\neg\nabla p\) is equivalent to \(\neg p\wedge\neg\nabla p\). And we have already shown in Prop. 68 that \(\neg p\wedge\neg\nabla p\) is successful under the intersection semantics. **Proposition 74**.: \(p\wedge\neg p\wedge\neg\bullet p\wedge\neg\nabla p\) is successful under the intersection semantics. Proof.: The proof is similar to that of Prop. 64. Now we demonstrate that any combination of \(p\), \(\neg p\), \(\neg\bullet p\), and \(\neg\nabla p\) via _disjunction_ is successful under the intersection semantics.
First, one may show that \([\psi](\varphi\vee\chi)\leftrightarrow([\psi]\varphi\vee[\psi]\chi)\) is provable from the above reduction axioms. For the sake of reference, we denote it by AD. **Proposition 75**.: \(p\vee\neg p\) is successful under the intersection semantics. Proof.: Note that \(p\vee\neg p\) is equivalent to \(\top\), and \(\top\) is successful. **Proposition 76**.: \(p\vee\neg\bullet p\) is successful under the intersection semantics. That is, \([p\vee\neg\bullet p](p\vee\neg\bullet p)\) is provable in \(\mathbf{M}^{\bullet[\cdot]}\). Proof.: Just note that \(p\vee\neg\bullet p\) is equivalent to \(\bullet p\to p\), which by E2 is equivalent to \(\top\). And \(\top\) is successful. **Proposition 77**.: \(p\vee\neg\nabla p\) is successful under the intersection semantics. That is, \([p\vee\neg\nabla p](p\vee\neg\nabla p)\) is provable in \(\mathbf{M}^{\nabla[\cdot]}\). Proof.: We have the following proof sequence in \(\mathbf{M}^{\nabla[\cdot]}\). \[\begin{array}{lll}&[p\vee\neg\nabla p](p\vee\neg\nabla p)&\\ \leftrightarrow&([p\vee\neg\nabla p]p\vee[p\vee\neg\nabla p]\neg\nabla p)&\text{AD}\\ \leftrightarrow&(p\vee\neg\nabla p\to p)\vee(p\vee\neg\nabla p\to\neg[p\vee\neg\nabla p]\nabla p)&\text{AP},\text{AN}\\ \leftrightarrow&(p\vee\neg\nabla p\to p)\vee(p\vee\neg\nabla p\to\neg(\nabla[p\vee\neg\nabla p]p\wedge\nabla[p\vee\neg\nabla p]\neg p))&\text{A}\nabla,\text{TAUT}\\ \leftrightarrow&(p\vee\neg\nabla p\to p)\vee(p\vee\neg\nabla p\to\neg(\nabla(p\vee\neg\nabla p\to p)\wedge\nabla(p\vee\neg\nabla p\to\neg p)))&\text{AP},\text{AN},\text{RE}\nabla\\ \leftrightarrow&(\neg p\wedge\nabla(p\vee\neg(p\vee\neg\nabla p))\wedge\nabla(\neg p\vee\neg(p\vee\neg\nabla p))\rightarrow\neg p\wedge\nabla p)&\text{TAUT},\text{RE}\nabla\\ \leftrightarrow&\top&\text{M1}\end{array}\] Therefore, \([p\vee\neg\nabla p](p\vee\neg\nabla p)\) is provable in \(\mathbf{M}^{\nabla[\cdot]}\). **Proposition 78**.: \(\neg p\vee\neg\bullet p\) is successful under the intersection semantics. Proof.: By E2, \(\neg p\vee\neg\bullet p\) is equivalent to \(\neg\bullet p\), and Prop. 62 has shown that \(\neg\bullet p\) is successful under the intersection semantics. **Proposition 79**.: \(\neg p\vee\neg\nabla p\) is successful under the intersection semantics. That is, \([\neg p\vee\neg\nabla p](\neg p\vee\neg\nabla p)\) is provable in \(\mathbf{M}^{\nabla[\cdot]}\). Proof.: We have the following proof sequence in \(\mathbf{M}^{\nabla[\cdot]}\). \[\begin{array}{lll}&[\neg p\vee\neg\nabla p](\neg p\vee\neg\nabla p)&\\ \leftrightarrow&[\neg p\vee\neg\nabla p]\neg p\vee[\neg p\vee\neg\nabla p]\neg\nabla p&\text{AD}\\ \leftrightarrow&(\neg p\vee\neg\nabla p\to\neg p)\vee(\neg p\vee\neg\nabla p\to\neg[\neg p\vee\neg\nabla p]\nabla p)&\text{AN},\text{AP}\\ \leftrightarrow&(\neg p\vee\neg\nabla p\to\neg p)\vee(\neg p\vee\neg\nabla p\to\neg(\nabla(\neg p\vee\neg\nabla p\to p)\wedge\nabla(\neg p\vee\neg\nabla p\to\neg p)))&\text{A}\nabla,\text{AP},\text{AN},\text{RE}\nabla\\ \leftrightarrow&(p\wedge\nabla(p\vee\neg(\neg p\vee\neg\nabla p))\wedge\nabla(\neg p\vee\neg(\neg p\vee\neg\nabla p))\to p\wedge\nabla p)&\text{TAUT},\text{RE}\nabla\\ \leftrightarrow&\top&\text{M1}\end{array}\] Therefore, \([\neg p\vee\neg\nabla p](\neg p\vee\neg\nabla p)\) is provable in \(\mathbf{M}^{\nabla[\cdot]}\).
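The successfulness results in this section also lend themselves to mechanical spot-checking: announce \(\varphi\), pass to the intersection submodel, and test whether \(\varphi\) survives. Below is a brute-force harness (our own code, reusing the utilities from the earlier sketches; a finite sanity check over small models, not a proof) confirming several of the propositions above and, for contrast, that \(\nabla p\) itself is not successful.

```python
def successful(f, max_states=2):
    """Test |= [f]f over all monotone models with at most max_states
    states and all valuations of a single atom p (brute force)."""
    for n in range(1, max_states + 1):
        Sn = frozenset(range(n))
        subsets = powerset(Sn)
        for Vp in subsets:
            for Ns in product(powerset(subsets), repeat=n):
                N = {s: set(Ns[s]) for s in Sn}
                if not monotone(Sn, N):
                    continue
                M = (Sn, N, {'p': Vp})
                X = extension(M, f)
                if not X:
                    continue  # f nowhere true: the announcement is vacuous
                # [f]f fails at s iff s |= f in M but f fails after updating
                if any(s not in extension(intersection_submodel(M, X), f)
                       for s in X):
                    return False
    return True

q = ('atom', 'p')
assert successful(('not', ('nabla', q)))                      # Prop. 63
assert successful(('and', q, ('not', ('nabla', q))))          # Prop. 66
assert successful(('or', q, ('not', ('nabla', q))))           # Prop. 77
assert successful(('or', ('not', q), ('not', ('nabla', q))))  # Prop. 79
assert not successful(('nabla', q))  # ignorance itself can be destroyed
```

The last assertion already finds a counterexample with two states, which matches the intuition that announcing one's ignorance whether \(p\) can resolve it.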
**Proposition 80**.: \(\neg\bullet p\vee\neg\nabla p\) is successful under the intersection semantics. That is, \([\neg\bullet p\vee\neg\nabla p](\neg\bullet p\vee\neg\nabla p)\) is provable in \(\mathbf{M}^{\nabla\bullet[\cdot]}\). Proof.: We have the following proof sequence in \(\mathbf{M}^{\nabla\bullet[\cdot]}\). \[\begin{array}{lll}&[\neg\bullet p\vee\neg\nabla p](\neg\bullet p\vee\neg\nabla p)&\\ \leftrightarrow&([\neg\bullet p\vee\neg\nabla p]\neg\bullet p\vee[\neg\bullet p\vee\neg\nabla p]\neg\nabla p)&\text{AD}\\ \leftrightarrow&(\neg\bullet p\vee\neg\nabla p\rightarrow\neg[\neg\bullet p\vee\neg\nabla p]\bullet p)\vee(\neg\bullet p\vee\neg\nabla p\rightarrow\neg[\neg\bullet p\vee\neg\nabla p]\nabla p)&\text{AN}\\ \leftrightarrow&(\neg\bullet p\vee\neg\nabla p\rightarrow\neg\bullet[\neg\bullet p\vee\neg\nabla p]p)\vee(\neg\bullet p\vee\neg\nabla p\rightarrow\neg(\nabla[\neg\bullet p\vee\neg\nabla p]p\wedge\nabla[\neg\bullet p\vee\neg\nabla p]\neg p))&\text{A}\bullet,\text{A}\nabla,\text{TAUT}\\ \leftrightarrow&(\neg\bullet p\vee\neg\nabla p\rightarrow\neg\bullet(\neg\bullet p\vee\neg\nabla p\to p))\vee(\neg\bullet p\vee\neg\nabla p\rightarrow\neg(\nabla(\neg\bullet p\vee\neg\nabla p\to p)\wedge\nabla(\neg\bullet p\vee\neg\nabla p\to\neg p)))&\text{AP},\text{AN},\text{RE}\bullet,\text{RE}\nabla\\ \leftrightarrow&(\bullet(\neg\bullet p\vee\neg\nabla p\to p)\wedge\nabla(\neg\bullet p\vee\neg\nabla p\to p)\wedge\nabla(\neg\bullet p\vee\neg\nabla p\rightarrow\neg p)\rightarrow\bullet p\wedge\nabla p)&\text{TAUT}\\ \leftrightarrow&(\bullet(\circ p\vee\Delta p\to p)\wedge\nabla(\circ p\vee\Delta p\to p)\wedge\nabla(\circ p\vee\Delta p\rightarrow\neg p)\rightarrow\bullet p\wedge\nabla p)&\text{Def. }\circ,\text{Def. }\Delta\end{array}\] By Prop. 50, \(\bullet(\circ p\vee\Delta p\to p)\rightarrow\bullet p\) is provable in \(\mathbf{M}^{\nabla\bullet}\); by axiom M1 and \(\mathrm{RE}\nabla\), we can show the provability of \(\nabla(\circ p\vee\Delta p\to p)\wedge\nabla(\circ p\vee\Delta p\rightarrow\neg p)\rightarrow\nabla p\) in \(\mathbf{M}^{\nabla\bullet}\). Therefore, \([\neg\bullet p\vee\neg\nabla p](\neg\bullet p\vee\neg\nabla p)\) is provable in \(\mathbf{M}^{\nabla\bullet[\cdot]}\). Intuitively, \([\neg\bullet p\vee\neg\nabla p](\neg\bullet p\vee\neg\nabla p)\) says that one's either non-ignorance of or non-ignorance whether a fact cannot be altered by being announced: after being told that one is either not ignorant of or not ignorant whether \(p\), one is still either not ignorant of or not ignorant whether \(p\). The next two propositions are shown as in Prop. 75. **Proposition 81**.: \(p\vee\neg p\vee\neg\bullet p\) is successful under the intersection semantics. **Proposition 82**.: \(p\vee\neg p\vee\neg\nabla p\) is successful under the intersection semantics. **Proposition 83**.: \(p\vee\neg\bullet p\vee\neg\nabla p\) is successful under the intersection semantics.
That is, \([p\vee\neg\bullet p\vee\neg\nabla p](p\vee\neg\bullet p\vee\neg\nabla p)\) is provable in \(\mathbf{M}^{\nabla\bullet[\cdot]}\). Proof.: By the proof of Prop. 76, \(p\vee\neg\bullet p\) is equivalent to \(\top\), so is \(p\vee\neg\bullet p\vee\neg\nabla p\). And \(\top\) is successful. **Proposition 84**.: \(\neg p\vee\neg\bullet p\vee\neg\nabla p\) is successful under the intersection semantics. Proof.: By E2, \(\neg p\vee\neg\bullet p\vee\neg\nabla p\) is equivalent to \(\neg\bullet p\vee\neg\nabla p\), and we have shown in Prop. 80 that \(\neg\bullet p\vee\neg\nabla p\) is successful under the intersection semantics. **Proposition 85**.: \(p\vee\neg p\vee\neg\bullet p\vee\neg\nabla p\) is successful under the intersection semantics. Proof.: The proof is similar to that of Prop. 75. ## 7 Conclusion and future work In this paper, we investigated the bimodal logic of Fitchean ignorance and first-order ignorance \(\mathcal{L}(\nabla,\bullet)\) under the neighborhood semantics. We compared the relative expressivity between \(\mathcal{L}(\nabla,\bullet)\) and the logic of (first-order) ignorance \(\mathcal{L}(\nabla)\) and the logic of Fitchean ignorance \(\mathcal{L}(\bullet)\), and between \(\mathcal{L}(\nabla,\bullet)\) and standard epistemic logic \(\mathcal{L}(\Diamond)\). It turns out that over the class of models possessing \((c)\) or \((t)\), all of these logics are equally expressive, whereas over the class of models possessing any of the other eight neighborhood properties, \(\mathcal{L}(\nabla,\bullet)\) is more expressive than both \(\mathcal{L}(\nabla)\) and \(\mathcal{L}(\bullet)\), and over the class of models possessing any of those eight neighborhood properties except for \((d)\), \(\mathcal{L}(\nabla,\bullet)\) is less expressive than \(\mathcal{L}(\Diamond)\). We explored the frame definability of the bimodal logic: it turns out that all ten frame properties except for \((n)\) are undefinable in \(\mathcal{L}(\nabla,\bullet)\). We axiomatized the bimodal logic over various classes of neighborhood frames, which among other results includes the classical logic, the monotone logic, and the regular logic. Last but not least, by updating the neighborhood models via the intersection semantics, we found suitable reduction axioms and thus reduced the public announcement operators to the bimodal logic. This yields applications to successful formulas since, as we have shown, any combination of \(p\), \(\neg p\), \(\neg\bullet p\) and \(\neg\nabla p\) via conjunction (or, via disjunction) is successful under the intersection semantics. We also partly answer open questions raised in [9, 11]. For future work, we hope to know whether \(\mathcal{L}(\nabla,\bullet)\) is less expressive than \(\mathcal{L}(\Diamond)\) over the class of \((d)\)-models. We conjecture that the answer is positive, but the model constructions seem hard, as the desired models both need at least three points. Moreover, as we have seen, the proofs of the expressivity and frame definability results involve nontrivial (if not highly nontrivial) constructions of neighborhood models and frames; we thus also hope to find a suitable notion of bisimulation for \(\mathcal{L}(\nabla,\bullet)\) under the neighborhood semantics.
2308.16894
EMDB: The Electromagnetic Database of Global 3D Human Pose and Shape in the Wild
We present EMDB, the Electromagnetic Database of Global 3D Human Pose and Shape in the Wild. EMDB is a novel dataset that contains high-quality 3D SMPL pose and shape parameters with global body and camera trajectories for in-the-wild videos. We use body-worn, wireless electromagnetic (EM) sensors and a hand-held iPhone to record a total of 58 minutes of motion data, distributed over 81 indoor and outdoor sequences and 10 participants. Together with accurate body poses and shapes, we also provide global camera poses and body root trajectories. To construct EMDB, we propose a multi-stage optimization procedure, which first fits SMPL to the 6-DoF EM measurements and then refines the poses via image observations. To achieve high-quality results, we leverage a neural implicit avatar model to reconstruct detailed human surface geometry and appearance, which allows for improved alignment and smoothness via a dense pixel-level objective. Our evaluations, conducted with a multi-view volumetric capture system, indicate that EMDB has an expected accuracy of 2.3 cm positional and 10.6 degrees angular error, surpassing the accuracy of previous in-the-wild datasets. We evaluate existing state-of-the-art monocular RGB methods for camera-relative and global pose estimation on EMDB. EMDB is publicly available under https://ait.ethz.ch/emdb
Manuel Kaufmann, Jie Song, Chen Guo, Kaiyue Shen, Tianjian Jiang, Chengcheng Tang, Juan Zarate, Otmar Hilliges
2023-08-31T17:56:19Z
http://arxiv.org/abs/2308.16894v1
# EMDB: The Electromagnetic Database of Global 3D Human Pose and Shape in the Wild ###### Abstract We present EMDB, the Electromagnetic Database of Global 3D Human Pose and Shape in the Wild. EMDB is a novel dataset that contains high-quality 3D SMPL pose and shape parameters with global body and camera trajectories for in-the-wild videos. We use body-worn, wireless electromagnetic (EM) sensors and a hand-held iPhone to record a total of 58 minutes of motion data, distributed over 81 indoor and outdoor sequences and 10 participants. Together with accurate body poses and shapes, we also provide global camera poses and body root trajectories. To construct EMDB, we propose a multi-stage optimization procedure, which first fits SMPL to the 6-DoF EM measurements and then refines the poses via image observations. To achieve high-quality results, we leverage a neural implicit avatar model to reconstruct detailed human surface geometry and appearance, which allows for improved alignment and smoothness via a dense pixel-level objective. Our evaluations, conducted with a multi-view volumetric capture system, indicate that EMDB has an expected accuracy of 2.3 cm positional and 10.6 degrees angular error, surpassing the accuracy of previous in-the-wild datasets. We evaluate existing state-of-the-art monocular RGB methods for camera-relative and global pose estimation on EMDB. EMDB is publicly available under [https://ait.ethz.ch/emdb](https://ait.ethz.ch/emdb). ## 1 Introduction 3D human pose and shape estimation from monocular RGB images is a long-standing computer vision problem with many applications in AR/VR, robotics, assisted living, rehabilitation, or sports analysis. Much progress has been made in estimating camera-relative poses, typically assuming a weak-perspective camera model, _e.g_., [13, 33, 37, 38, 62]. However, this setting is too restrictive for many applications that involve a moving camera. Such applications must estimate a) human poses in-the-wild, under occlusion and encountering uncommon poses; and b) global locations of humans and the camera. Compared to the camera-relative setting, there is relatively little work on global pose estimation [77, 81]. This is in part due to the lack of comprehensive datasets that contain accurate 3D human pose and shape with global trajectories in a fully in-the-wild setting. To overcome this bottleneck, in this paper we propose a novel dataset, called EMDB, short for the ElectroMagnetic DataBase of Global 3D Human Pose and Shape in the Wild. EMDB consists of 58 minutes (105k frames) of challenging 3D human motion recorded in diverse scenes. We provide high-quality pose and shape annotations, as well as global body root and camera trajectories. The dataset contains 81 sequences distributed over 10 participants that were recorded with a hand-held mobile phone. Recording such data requires a motion capture system that is both mobile and accurate - a notoriously difficult problem. Systems that provide world-anchored 3D body keypoints often require multiple well-calibrated RGB or IR cameras within a static environment, which restricts outdoor use [23, 25, 27, 46]. While body-worn sensors such as head-mounted cameras [56, 75, 84] are promising for mobile use, such egocentric approaches either introduce heavy self-occlusions [56, 75] or are restricted to indoor settings with a fixed capture volume [84]. The 3DPW dataset [70] uses IMU sensors for outdoor recordings, yet the dataset is relatively small and lacks global trajectories.
Moreover, IMU drift and the lack of direct positional sensor measurements impose constraints in terms of pose diversity and accuracy. Instead, following [30], we leverage drift-free electromagnetic (EM) sensors that directly measure their position and orientation. Yet, any sensor-based capture system requires handling of measurement noise, accurate calibration of the sensors to the body's coordinate system and temporal and spatial alignment of the data streams. Addressing these challenges, we propose a method, Electromagnetic Poser (EMP), that allows for the construction of EMDB. EMP is a multi-stage optimization formulation that fuses up to 12 body-worn EM sensor measurements, monocular RGB-D images and camera poses, and produces accurate SMPL [43] pose and shape parameters alongside global trajectory estimates for the body's root and the camera. EMP works in the following 3 stages. **Calibration and EM Pose:** As an initial calibration step, we scan participants in minimal clothing using an indoor multi-view volumetric capture system (MVS, [14]) to obtain ground-truth shape and skin-to-sensor offsets. We subsequently record in-the-wild sequences of the same subject and fit SMPL to the drift-free EM measurements of the sensors' positions and orientations. This provides an accurate SMPL fit, albeit in an EM-local coordinate system. **World Alignment:** In the second stage, we align the EM-local pose estimates with a global world space, defined by the tracking space of a hand-held iPhone 13 that films the participants. We model this stage as a joint optimization that fuses the input EM measurements, 2D keypoints, depth, and camera poses. In our experiments we have found that the self-localized 6D poses of the iPhone are accurate to around \(2\) cm positional and \(<1\) degree angular error. The fixed body shape and accurate camera poses thus enable EMP to provide global SMPL root trajectories. **Pixel-Level Refinement:** In the third stage, we refine the initial global poses via dense pixel-level information to ensure high-quality and temporally smooth image alignment. To this end we leverage recent advancements in neural body modelling for in-the-wild videos and fit a neural body model with detailed geometry and appearance to the RGB images. Following [20], we model the human as a deformable implicit signed distance field and the background as a neural radiance field. This allows us to formulate a pixel-level RGB loss that compares color values obtained via composited neural rendering with the observed pixel value. We jointly optimize the neural body model and the SMPL poses, initialized with the output of the second stage. We experimentally show that this final stage yields temporally smooth results and accurate pose-to-image alignment. We evaluate EMP on 21 sequences recorded with our MVS [14], the same system we use to register ground-truth SMPL shape parameters. With a pose accuracy of 2.3 cm positional and 10.6\({}^{\circ}\) angular error, our evaluations reveal that EMP is more accurate than what has been reported for 3DPW (2.6 cm, 12.1\({}^{\circ}\)) [70]. Also, our global SMPL root trajectories are accurate with an estimated error of 5.1 cm compared to our indoor MVS. Finally, we evaluate the performance of recent state-of-the-art camera-relative and global RGB-based pose estimators on EMDB. Our results show that EMDB is a new challenging dataset that will enable future local _and_ global pose estimation research. In summary, we contribute: 1.
EMDB, to the best of our knowledge the first comprehensive dataset to provide accurate SMPL poses, shapes, and trajectories in an unrestricted, mobile, in-the-wild scenario. 2. EMP, the first method to fuse EM measurements with image data and camera poses. 3. Extensive evaluations of the accuracy of EMP as well as baseline results of state-of-the-art work when evaluating on EMDB. Data is available under [https://ait.ethz.ch/emdb](https://ait.ethz.ch/emdb). ## 2 Related Work **Sensor-based Pose Estimation.** Modern inertial measurement units (IMUs) are an appealing sensor modality for human pose estimation because they are small and do not require line-of-sight. However, they only measure orientation directly. This lack of reliable positional information can be mitigated by using a large number of sensors [57] or by fusing IMU data with other modalities such as external cameras [7, 17, 44, 53, 54, 65, 70, 85], head-mounted cameras [21], LiDAR [15], or acoustic sensors [42, 69]. Research has attempted to reduce the required number of sensors [9, 24, 70, 71], which requires costly optimizations [71], external cameras [70], or data-driven priors to establish the sensor-to-pose mapping [24, 26, 74, 78, 79] and deal with the under-constrained pose space. While such methods yield accurate local poses, IMUs are intrinsically limited in that their position estimates drift over time. Addressing this challenge, EM-POSE [30] puts forth a novel method for body-worn pose estimation that relies on wireless electromagnetic (EM) field sensing to directly measure positional values. A learned optimization [60] formulation estimates accurate body pose and shape from EM inputs. However, [30] is limited to a small indoor capture space, requires external tracking of the root pose and is not aligned with image observations. In this work, we move beyond these limitations and present an EM-based capture system that is mobile, deployed to capture in-the-wild data, and produces high-quality pose-to-image alignment. **RGB-based Pose Estimation.** The 3D pose of a human is either represented as a skeleton of 3D joints [45, 47, 61, 87] or via parametric body models like SCAPE [2] and SMPL [43] for a more fine-grained representation. We note that almost the entire body of research estimates local (_i.e_., camera-local) poses. In recent years, deep neural networks have driven significant advancements in estimating body model parameters directly from images or videos [19, 28, 29, 32, 33, 37, 38, 49, 62, 63, 64, 66, 68, 76, 82, 86]. In addition, researchers have combined the advantages of both optimization and regression to fit the SMPL body [34, 60]. Others have leveraged graph convolutional neural networks to effectively learn local vertex relations by building a graph structure based on the mesh topology of the parametric body models, _e.g_. [13, 39]. These methods propose transformer encoder architectures to learn the non-local relations between human body joints and mesh vertices via attention mechanisms. Recently, a few approaches have set out to estimate realistic global trajectories of humans and cameras from local human poses [36, 77, 80, 81]. We evaluate several of the above methods on our proposed dataset on the tasks of camera-relative and global human pose estimation. **Human Pose Datasets.** Commonly used datasets to evaluate 3D human pose estimation are H3.6M [25], MPI-INF-3DHP [46], HumanEva [58], and TotalCapture [27].
Although these datasets offer synchronized video and MoCap data, they are restricted to indoor settings with static backgrounds and limited variation in clothing and activities. To address these limitations, [70] proposed a method that combines a single hand-held camera and a set of body-worn IMUs to estimate relatively accurate 3D poses, resulting in an in-the-wild dataset called 3DPW. Following this work, HPS [21] estimates 3D human pose with IMUs while localizing the person via a head-mounted camera within a pre-scanned 3D scene. To further address the issue of IMU drift, HSC4D [15] leverages LiDAR sensors for global localization. However, both HPS and HSC4D assume static scene scans and do not register global body pose in a third-person view. Moreover, they lack an evaluation of how accurate their pose estimates are. Another approach to outdoor performance capture with reduced equipment is to utilize one or multiple RGB-D cameras [22, 6, 23]. In these approaches, the quality of body pose registrations is limited by the cameras' line-of-sight, noisy depth measurements and the capture space is fixed. None of these works provide an estimate of their datasets' accuracy either. EgoBody [84] provides egocentric views and registered SMPL poses but is restricted to a fixed indoor space, requires up to 5 external RGB-D cameras and lacks evaluation of the data accuracy. Synthetic data has been suggested as a means to provide high-quality annotations [51, 68]. However, due to the reliance on static human scans and artificial backgrounds there is a distributional shift compared to real images. With EMDB we provide the first dataset of 3D human pose and shape that is recorded in an unrestricted, mobile, in-the-wild setting and provides global camera and SMPL root trajectories. To gauge the accuracy of EMDB, we rigorously evaluate our method against ground-truth obtained on a multi-view volumetric capture system [14]. These evaluations reveal that EMDB is not only two times larger than 3DPW, but its annotations are also more accurate. ## 3 Overview Our goal is to provide a dataset with i) accurate 3D body poses and ii) shapes alongside global trajectories of the iii) body's root and iv) the moving camera. This data is obtained from electromagnetic (EM) sensor measurements and RGB-D data streamed from a single hand-held iPhone. We first describe the capture setup and protocol in Sec. 4. Sec. 5 discusses our method, EMP, for the estimation of global SMPL parameters, summarized in Fig. 2. To gauge the accuracy of EMP, we evaluate it against ground-truth data recorded with a multi-view volumetric system (MVS, [14]). These evaluations are provided in Sec. 6. Finally, using EMP on newly captured in-the-wild sequences, we introduce the Electromagnetic Database of Global 3D Human Pose and Shape in the Wild, EMDB, in Sec. 7, where we also evaluate existing state-of-the-art methods on EMDB. ## 4 Capture Setup ### Sensing Hardware EM sensors measure their position \(\mathbf{p}_{s}\) and orientation \(\mathbf{R}_{s}\) w.r.t. a source that emits an electromagnetic field. We use the same wireless EM sensors as [30], which have an estimated accuracy of 1 cm positional and 2-3 degrees angular error. We mount the EM source on the lower back of a participant and arrange the sensors on the lower and upper extremities and the head and torso. For the detailed sensor placement we refer to the Supp. Mat. All sensor data is streamed wirelessly to a laptop for recording. 
We record the subjects with a hand-held iPhone 13 Pro Max. The record3d app [55] is used to retrieve depth and the iPhone's 6D pose is estimated by Apple's ARKit. We synchronize the data streams via a hand clap which is easy to detect in the phone's audio and in the EM accelerations. ### Body Calibration Before we start recording, we first scan each participant in minimal clothing to obtain their ground-truth shape. To this end, we leverage our MVS [14] and use the resulting surface scans and 53 RGB views to register the SMPL shape parameters \(\mathbf{\beta}\). Details on the registration pipeline can be found in the Supp. Mat. Subsequently, we mount the sensors and EM-source onto the participant under regular clothing (see Fig. 2, left). We then record a 3-second calibration sequence to determine subject-specific skin-to-sensor offsets. We first register SMPL to the calibration sequence and follow [30] to manually select anchor points on the SMPL mesh for every sensor \(s\). An anchor point is parameterized via a position \(\tilde{\mathbf{p}}_{s}\) and orientation \(\tilde{\mathbf{R}}_{s}\). We then compute per-sensor offsets \(\mathbf{o}_{s}=(\mathbf{Q}_{s},\mathbf{v}_{s})\) by minimizing an objective that equates the measured orientation \(\mathbf{R}_{s}=\tilde{\mathbf{R}}_{s}\mathbf{Q}_{s}\) and the measured position \(\mathbf{p}_{s}=\tilde{\mathbf{p}}_{s}+\tilde{\mathbf{R}}_{s}\mathbf{v}_{s}\) (see Fig. 2, left). For this to work, the sensor measurements must be spatially and temporally aligned with the MVS. We thus track the EM source with an Apriltag [35, 48, 72] and use an Atomos Ultrasync One timecode generator [67] for temporal alignment. More details are shown in the Supp. Mat. Note that this procedure must only be done once per sensor placement. ## 5 Method (EMP) ### Notations and Preliminaries The inputs to our method are EM sensor measurements \(\mathbf{p}_{s}\in\mathbb{R}^{3}\) and \(\mathbf{R}_{s}\in SO(3)\), skin-to-sensor offsets \(\mathbf{o}_{s}=(\mathbf{Q}_{s},\mathbf{v}_{s})\), SMPL shape parameters \(\mathbf{\beta}\in\mathbb{R}^{10}\), RGB images \(\mathbf{I}\in\mathbb{R}^{1920\times 1440\times 3}\), depth point clouds \(\mathcal{P}=\{\mathbf{p}_{i}\mid\mathbf{p}_{i}\in\mathbb{R}^{3}\}\), camera extrinsics \(\mathbf{C}=\left[\mathbf{R}^{\mathbf{C}}\mid\mathbf{t}^{\mathbf{C}}\right]\in \mathbb{R}^{3\times 4}\) and intrinsics \(\mathbf{K}\in\mathbb{R}^{3\times 3}\). Note that the EM measurements are in EM-local space, _i.e._, relative to the source worn on the lower back. From these input measurements, we aim to estimate the SMPL body pose parameters \(\mathbf{\theta}_{b}\in\mathbb{R}^{69}\), the SMPL root orientation \(\mathbf{\theta}_{r}\in\mathbb{R}^{3}\) and translation \(\mathbf{t}\in\mathbb{R}^{3}\) in world coordinates such that they align with sensor measurements, images, and camera poses. We fix the world space to be the iPhone's coordinate frame. We summarize SMPL parameters as \(\mathbf{\Omega}=(\mathbf{\theta}_{r},\mathbf{\theta}_{b},\mathbf{t},\mathbf{\beta})\). Note that \(\mathbf{\beta}\in\mathbb{R}^{10}\) is not an optimization variable and is obtained a-priori (see Sec. 4.2). All quantities usually refer to a time step \(t\), but we omit the time subscript for clarity unless necessary. ### Multi-stage Optimization As shown in Fig. 2, our method employs a multi-stage optimization procedure, which we detail in the following. 
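As a preliminary to the optimization stages below: the skin-to-sensor offset objective of Sec. 4.2 admits an essentially closed-form least-squares estimate. The numpy sketch below is our own illustration of one way to solve it (the averaging-plus-SVD-projection strategy and all names are assumptions, not the authors' implementation); it takes per-frame anchor poses \((\tilde{\mathbf{R}}_{s},\tilde{\mathbf{p}}_{s})\) from the registered SMPL fit and the synchronized measurements \((\mathbf{R}_{s},\mathbf{p}_{s})\) of one sensor.

```python
import numpy as np

def calibrate_offset(R_meas, p_meas, R_anchor, p_anchor):
    """Estimate the skin-to-sensor offset o_s = (Q_s, v_s) from T frames of
    the calibration sequence, using the per-frame relations R_s = R~_s Q_s
    and p_s = p~_s + R~_s v_s. Inputs: (T, 3, 3) rotations, (T, 3) positions."""
    # Per-frame rotation offset R~^T R, averaged over frames and projected
    # back onto SO(3) via SVD (nearest rotation in the Frobenius sense).
    M = np.einsum('tji,tjk->tik', R_anchor, R_meas).mean(axis=0)
    U, _, Vt = np.linalg.svd(M)
    Q = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    # Per-frame translation offset R~^T (p - p~), averaged over frames.
    v = np.einsum('tji,tj->ti', R_anchor, p_meas - p_anchor).mean(axis=0)
    return Q, v
```

Averaging over the 3-second calibration window damps per-frame sensor noise, while the SVD projection keeps the estimated \(\mathbf{Q}_{s}\) a valid rotation.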
## 5 Method (EMP) ### Notations and Preliminaries The inputs to our method are EM sensor measurements \(\mathbf{p}_{s}\in\mathbb{R}^{3}\) and \(\mathbf{R}_{s}\in SO(3)\), skin-to-sensor offsets \(\mathbf{o}_{s}=(\mathbf{Q}_{s},\mathbf{v}_{s})\), SMPL shape parameters \(\mathbf{\beta}\in\mathbb{R}^{10}\), RGB images \(\mathbf{I}\in\mathbb{R}^{1920\times 1440\times 3}\), depth point clouds \(\mathcal{P}=\{\mathbf{p}_{i}\mid\mathbf{p}_{i}\in\mathbb{R}^{3}\}\), camera extrinsics \(\mathbf{C}=\left[\mathbf{R}^{\mathbf{C}}\mid\mathbf{t}^{\mathbf{C}}\right]\in\mathbb{R}^{3\times 4}\) and intrinsics \(\mathbf{K}\in\mathbb{R}^{3\times 3}\). Note that the EM measurements are in EM-local space, _i.e._, relative to the source worn on the lower back. From these input measurements, we aim to estimate the SMPL body pose parameters \(\mathbf{\theta}_{b}\in\mathbb{R}^{69}\), the SMPL root orientation \(\mathbf{\theta}_{r}\in\mathbb{R}^{3}\) and translation \(\mathbf{t}\in\mathbb{R}^{3}\) in world coordinates such that they align with sensor measurements, images, and camera poses. We fix the world space to be the iPhone's coordinate frame. We summarize the SMPL parameters as \(\mathbf{\Omega}=(\mathbf{\theta}_{r},\mathbf{\theta}_{b},\mathbf{t},\mathbf{\beta})\). Note that \(\mathbf{\beta}\in\mathbb{R}^{10}\) is not an optimization variable and is obtained a-priori (see Sec. 4.2). All quantities usually refer to a time step \(t\), but we omit the time subscript for clarity unless necessary. ### Multi-stage Optimization As shown in Fig. 2, our method employs a multi-stage optimization procedure, which we detail in the following. Figure 2: Method overview. We first scan a subject in minimal clothing with a multi-view volumetric capture system to obtain their reference shape parameters \(\mathbf{\beta}\) and calibrate subject-specific skin-to-sensor offsets in regular clothing (left). We subsequently fit SMPL to in-the-wild data with a multi-stage optimization pipeline. Stage 1 fits SMPL to the EM measurements in EM-local space leveraging the calibrated body shape and skin-to-sensor offsets. Stage 2 aligns the local fit with the world, by jointly optimizing over 2D keypoints, depth, camera poses, EM measurements, and the output of stage 1. Stage 3 then refines the output of stage 2 by fitting a neural implicit body model with detailed geometry and appearance to the RGB images via a pixel-level supervision signal to boost smoothness and image-to-pose alignment. **Stage 1: Local EM Pose**: For a given sequence, we start our optimization procedure by first finding SMPL parameters \(\mathbf{\Omega}\) that best explain the EM measurements in EM-local space. We follow EM-POSE [30] and define a reconstruction cost function \(E_{\text{rec}}\) that measures how well the current SMPL fit matches the sensor measurements: \[E_{\text{rec}}=\sum_{s=1}^{S}\lambda_{\mathbf{p}}||\mathbf{p}_{s}-\mathbf{p}_{s}^{v}(\mathcal{M}(\mathbf{\Omega}),\mathbf{o}_{s})||_{2}^{2}+\sum_{s=1}^{S}\lambda_{t}||\mathbf{R}_{s}-\mathbf{R}_{s}^{v}(\mathcal{M}(\mathbf{\Omega}),\mathbf{o}_{s})||_{2}^{2}\;, \tag{1}\] where we use the current SMPL mesh \(\mathcal{M}(\mathbf{\Omega})\) and skin-to-sensor offsets \(\mathbf{o}_{s}\) to compute virtual sensor positions \(\mathbf{p}_{s}^{v}\) and orientations \(\mathbf{R}_{s}^{v}\). In addition, we penalize impossible joint angles with a simple regularizer \(E_{\text{bp}}\). The final optimization objective of the first stage is then \(E_{\text{S1}}=\lambda_{\text{rec}}E_{\text{rec}}+\lambda_{\text{bp}}E_{\text{bp}}\). We use a batched optimization to minimize it over all \(T\) frames of the sequence. The outputs of stage 1 are the SMPL parameters in local EM space, \(\mathbf{\Omega}^{\text{S1}}\) (see also Fig. 2).
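As a reference for Eq. (1), the following NumPy sketch evaluates the reconstruction cost for a single frame. The weights and the Frobenius norm on rotation residuals are our illustrative assumptions; computing the virtual sensor poses from the SMPL mesh is left abstract:

```python
import numpy as np

def e_rec(p_meas, R_meas, p_virt, R_virt, lam_p=1.0, lam_t=1.0):
    """Eq. (1): EM reconstruction cost over S sensors.

    p_meas, p_virt: (S, 3) measured / virtual sensor positions.
    R_meas, R_virt: (S, 3, 3) measured / virtual sensor orientations,
    where the virtual quantities come from the current SMPL mesh and
    the calibrated skin-to-sensor offsets.
    """
    pos_term = lam_p * np.sum((p_meas - p_virt) ** 2)
    rot_term = lam_t * np.sum((R_meas - R_virt) ** 2)  # squared Frobenius norm
    return pos_term + rot_term
```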
**Stage 2: World Alignment**: Due to accurate sensor data and our body calibration procedure, the \(\mathbf{\Omega}^{\text{S1}}\) parameters are already of high quality (see Sec. 6.1). However, the EM space is not aligned with the world space. We align \(\mathbf{\Omega}^{\text{S1}}\) with the world in a second optimization stage such that it fits the RGB-D observations and camera pose data. An overview of this stage is provided in Fig. 2. This stage is guided by a 2D keypoint reprojection loss. Importantly, both 2D keypoints and depth are noisy, and fitting to them naively can corrupt the initial estimates \(\mathbf{\Omega}^{\text{S1}}\). Hence, we must trade off accurate alignment of human and camera poses in world coordinates with the accuracy of the local pose. Although our trust in the EM fit \(\mathbf{\Omega}^{\text{S1}}\) is high, we can still achieve improvements by fitting to RGB-D data for frames in which errors arise from sensor calibration or occasional measurement noise. Furthermore, the temporal alignment of the EM and RGB-D data streams can be improved by fitting to the images. We model this trade-off as a joint optimization over all the input modalities. We first define a 2D keypoint reprojection loss. We extract \(N=25\) 2D keypoints from Openpose [12], denoted by \(\mathbf{x}_{i}\in\mathbb{R}^{2}\). The 3D keypoints \(X(\mathbf{\Omega})\) are obtained via a linear regressor from the SMPL vertices. We then use the camera parameters to perspectively project the 3D keypoints (in homogeneous coordinates), \(\hat{\mathbf{x}}_{i}=\mathbf{K}\left[\mathbf{R}^{\text{C}}\mid\mathbf{t}^{\text{C}}\right]X(\mathbf{\Omega})_{i}\). The reprojection cost is then defined as \[E_{\text{2D}}=\sum_{i=1}^{N}\mathbb{I}\left[c_{i}\geqslant\tau\right]\cdot\rho(\hat{\mathbf{x}}_{i}-\mathbf{x}_{i}) \tag{2}\] where \(\rho\) is the Geman-McClure function [16], \(c_{i}\) is the confidence of the \(i\)-th keypoint as estimated by Openpose, and \(\mathbb{I}\) the indicator function. We set a high confidence threshold \(\tau=0.5\) in Eq. (2) to account for keypoint noise. Yet, even high-confidence keypoints can be wrong. To ensure high quality of the ground-truth annotations provided in EMDB, we carefully review the keypoint predictions by Openpose and manually correct them for challenging samples. We add two EM-related cost terms to this stage's optimization to further constrain the 3D pose. The first term is the EM reconstruction cost \(E_{\text{rec}}\) from Eq. (1). Note that here we only optimize the SMPL body pose \(\mathbf{\theta}_{b}\) when computing the cost, denoted as \(E_{\text{rec}}^{*}\). The second term is an additional prior on the body pose \(\mathbf{\theta}_{b}^{\text{S1}}\) found in the first stage: \[E_{\text{prior}}=||\mathbf{\theta}_{b}^{\text{S1}}-\mathbf{\theta}_{b}||_{2}^{2}. \tag{3}\] This \(E_{\text{prior}}\) formulation is similar to that of HPS [21]. However, we found that the addition of \(E_{\text{prior}}\) alone is not sufficient and \(E_{\text{rec}}^{*}\) plays a crucial role (see Sec. 6.1). Finally, we incorporate the iPhone's point clouds \(\mathcal{P}\). Since the point clouds are noisy, they mostly serve as a regularizer for the translation \(\mathbf{t}\) with the following term: \[E_{\text{pcl}}=\frac{1}{|\mathcal{P}_{h}|}\sum_{\mathbf{p}_{i}\in\mathcal{P}_{h}}d(\mathbf{p}_{i},\mathcal{M}(\mathbf{\Omega})). \tag{4}\] Here, \(d(\cdot)\) finds the closest triangle on the SMPL mesh \(\mathcal{M}(\mathbf{\Omega})\) and then returns the squared distance to either the triangle's plane, edge, or vertex. \(\mathcal{P}_{h}\) is a crop of \(\mathcal{P}\), where the human is isolated via masks provided by RVM [40]. The final second-stage objective is thus: \[E_{\text{S2}}=\lambda_{\text{2D}}E_{\text{2D}}+\lambda_{\text{rec}}E_{\text{rec}}^{*}+\lambda_{\text{prior}}E_{\text{prior}}+\lambda_{\text{pcl}}E_{\text{pcl}}. \tag{5}\] We optimize this objective frame-by-frame and use the previous output as the initialization for the next frame. The output of this stage is \(\mathbf{\Omega}^{\text{S2}}\) (see also Fig. 2). For the very first frame, we initialize \(\mathbf{t}^{\text{S2}}\) as the mean of \(\mathcal{P}_{h}\). All sequences start with a T-pose where the subject is facing the camera, so that it is easy to find an initial estimate of \(\mathbf{\theta}_{r}^{\text{S2}}\).
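The keypoint term of Eq. (2) can be prototyped as follows; the Geman-McClure scale sigma is our guess (the paper does not state it), and the projection helper assumes row-vector points:

```python
import numpy as np

def project(K, Rc, tc, X):
    # Perspective projection of (N, 3) world points into pixel coordinates.
    x_hom = (K @ (X @ Rc.T + tc).T).T            # (N, 3) homogeneous
    return x_hom[:, :2] / x_hom[:, 2:3]

def geman_mcclure(r, sigma=100.0):
    # Robust penalty rho(r) = ||r||^2 / (sigma^2 + ||r||^2).
    r2 = np.sum(r ** 2, axis=-1)
    return r2 / (sigma ** 2 + r2)

def e_2d(x_proj, x_det, conf, tau=0.5):
    """Eq. (2): confidence-gated robust 2D reprojection cost.

    x_proj: (N, 2) projected SMPL keypoints; x_det: (N, 2) OpenPose
    detections; conf: (N,) OpenPose confidences.
    """
    mask = conf >= tau                            # indicator I[c_i >= tau]
    return np.sum(geman_mcclure(x_proj[mask] - x_det[mask]))
```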
**Stage 3: Pixel-Level Refinement**: Stage 2 finds a good trade-off between accurate poses and global alignment (see Sec. 6.1). However, the jitter in the 2D keypoints causes temporally non-smooth estimates. Reducing the jitter by manually cleaning 2D keypoints is not viable. Instead, we add a third stage to EMP (see also Fig. 2) in which we follow recent developments in neural body modelling for in-the-wild videos. For every sequence, we fit a neural implicit model of clothed human shape and appearance to the RGB images by minimizing a dense pixel-level objective. More specifically, we leverage Vid2Avatar (V2A [20]) to model the human in the scene as an implicit signed-distance field (SDF) representing surface geometry and a texture field, while the background is treated as a separate neural radiance field (NeRF++) [83]. The SDF is modelled in canonical space and deformed via the SMPL parameters \(\mathbf{\Omega}\) to pose the human. Then, given a ray \(\mathbf{r}=(\mathbf{o},\mathbf{v})\) whose origin \(\mathbf{o}\) is the camera center and \(\mathbf{v}\) its viewing direction, a color value \(C(\mathbf{r})\) can be computed via differentiable neural rendering and is compared to the actual RGB value \(\hat{C}(\mathbf{r})\) to formulate a self-supervised objective: \[E_{\text{rgb}}=\frac{1}{|\mathcal{R}_{t}|}\sum_{\mathbf{r}\in\mathcal{R}_{t}}|C(\mathbf{r})-\hat{C}(\mathbf{r})| \tag{6}\] where \(\mathcal{R}_{t}\) is the set of all rays that we shoot into the scene at frame \(t\). Importantly, \(C(\mathbf{r})\) depends on the SMPL poses \(\mathbf{\Omega}\), which are optimized jointly together with the parameters for the human and background fields. Along with \(E_{\text{rgb}}\), the original formulation of V2A minimizes two other objectives: the Eikonal loss \(E_{\text{eik}}\) and a scene decomposition loss \(E_{\text{dec}}\) to disentangle the human from the background. For more details we refer the reader to [20]. We initialize the SMPL parameters \(\mathbf{\Omega}\) with the outputs of the second stage \(\mathbf{\Omega}^{\text{S2}}\) and add a pose regularization term \(E_{\text{reg}}=||\boldsymbol{\theta}-\boldsymbol{\theta}^{\text{S2}}||_{2}^{2}\) (where \(\boldsymbol{\theta}:=[\boldsymbol{\theta}_{r},\boldsymbol{\theta}_{b}]\)) to encourage solutions to stay close to the initializations. The final third-stage objective for a single time step is thus (omitting weights \(\lambda\) for brevity): \[E_{\text{S3}}=E_{\text{rgb}}(\boldsymbol{\omega}_{h},\boldsymbol{\omega}_{b})+E_{\text{eik}}(\boldsymbol{\omega}_{h})+E_{\text{dec}}(\boldsymbol{\omega}_{h})+E_{\text{reg}}(\boldsymbol{\theta}), \tag{7}\] where \(\boldsymbol{\omega}_{h}\) summarizes the parameters for the human field, including the SMPL pose parameters \(\mathbf{\Omega}\), and \(\boldsymbol{\omega}_{b}\) summarizes the weights of the background field. This objective is minimized over all \(T\) frames of the given sequence and produces outputs \(\mathbf{\Omega}^{\text{S3}}\), which are noticeably less jittery (see Sec. 6.1).
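Per frame, the photometric term of Eq. (6) reduces to a mean color difference over the sampled rays. A minimal sketch, interpreting \(|\cdot|\) as an L1 norm over color channels and treating the rendered colors as given (in V2A they come from differentiable volume rendering of the human SDF and the background NeRF++):

```python
import numpy as np

def e_rgb(c_rendered, c_observed):
    """Eq. (6): L1 photometric loss over a batch of rays.

    c_rendered, c_observed: (R, 3) rendered / ground-truth RGB values
    for the R rays shot into the scene at frame t.
    """
    return np.mean(np.sum(np.abs(c_rendered - c_observed), axis=-1))
```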
## 6 Evaluation ### Pose Accuracy To estimate the accuracy of EMP, we recorded a number of sequences with the same capture setup as we use for the in-the-wild sequences, but the motions are performed on our MVS [14], which is synchronized with the EM sensors and the iPhone. We use the surface scans and 53 high-resolution RGB views from this stage to procure SMPL ground-truth registrations (see Supp. Mat. for details), which we can then compare to the outputs of EMP to estimate its accuracy. We have recorded a total of 21 sequences (approx. 13k frames) distributed over all 10 participants for this evaluation. The respective ablation studies and comparisons to other methods are listed in Tab. 1. The closest related in-the-wild dataset to ours is 3DPW [70]. It is also the only other dataset that provides ground-truth evaluations of their method. As different sensor technologies are used, a direct comparison to their method is not feasible. Still, to allow for a comparison of the estimated accuracy, we compute and report the same metrics as [70], i.e., the Procrustes-aligned mean per-joint positional and angular errors (MPJPE-PA, MPJAE-PA). To measure smoothness, we follow TransPose [79] and report their jitter metric. In addition, we show qualitative comparisons to 3DPW with similar motions in the Supp. Mat. **Results**: Tab. 1 allows us to draw several conclusions. First, recent monocular methods - whether they use ground-truth bounding boxes (HybrIK [37]) or not (ROMP [62]) - are far below EMP's accuracy. Also V2A [20] suffers without good initial poses. LGD [60], which uses 2D keypoints in a hybrid optimization and outperforms SPIN [34] and SMPLify [8] on 3DPW, underperforms compared to EMP. This highlights a clear need for sensor-based methods to procure high-quality 3D poses. Second, Tab. 1 ablates the contributions of the multi-stage design of EMP. We observe that the first stage, which only fits to the EM measurements, already produces good results. Further, the joint optimization in our second stage finds a good trade-off and even improves the initial poses from the first stage via the addition of \(E_{\text{rec}}^{*}\) and \(E_{\text{prior}}\). Lastly, the third stage only improves the pose marginally, but helps with smoothness and image alignment ("after \(E_{\text{S3}}\)" in Tab. 1). We perform a light smoothing pass as a post-processing step on the outputs of \(E_{\text{S3}}\). We found that this further reduces jitter without breaking pose-to-image alignment. For a visualization of the effect of stage 3, as well as renderings of the neural implicit human model and the scene, please refer to Fig. 4. Note that naively smoothing the outputs of the second stage impacts the alignment negatively, which we show in the Supp. Mat. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Method} & MPJPE-PA & MPJAE-PA & Jitter \\ & [mm] & [deg] & [10m s\({}^{-3}\)] \\ \hline ROMP [62] & \(57.9\pm 23.6\) & \(19.8\pm 6.3\) & \(49.0\pm 10.6\) \\ HybrIK [37] & \(50.4\pm 22.3\) & \(19.0\pm 5.8\) & \(33.3\pm 7.1\) \\ Vid2Avatar [20] & \(50.2\pm 22.8\) & \(18.1\pm 6.2\) & \(38.7\pm 8.0\) \\ LGD [60] & \(61.1\pm 31.9\) & \(20.1\pm 8.0\) & \(68.9\pm 10.2\) \\ \hline Stage 1 & \(26.0\pm 8.6\) & \(10.9\pm 3.1\) & \(6.0\pm 2.9\) \\ Stage 2 (no \(E_{\text{rec}}^{*}\)) & \(31.6\pm 14.1\) & \(12.7\pm 4.5\) & \(26.8\pm 3.7\) \\ Stage 2 (no \(E_{\text{prior}}\)) & \(35.4\pm 14.2\) & \(11.6\pm 3.9\) & \(23.0\pm 3.3\) \\ Stage 2 & \(23.7\pm 7.5\) & \(10.5\pm 3.0\) & \(21.7\pm 3.7\) \\ \hline Stage 3 (after \(E_{\text{S3}}\)) & \(23.5\pm 7.6\) & \(10.6\pm 3.1\) & \(12.7\pm 2.5\) \\ Stage 3 (EMP) & \(\mathbf{23.4\pm 7.5}\) & \(10.6\pm 3.1\) & \(\mathbf{3.5\pm 1.0}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of EMP to existing RGB-based methods (top) and self-ablations (middle/bottom) on ground-truth data obtained with our multi-view capture system. Figure 4: Effect of Stage 3. We visualize the output of stage 2 (second column) and the refined output of stage 3 (third column) showing improved pose-to-image alignment. The two right-most columns show the rendering of the entire scene and the separated human (foreground). Figure 3: Evaluation of global trajectories on our MVS.
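For reference, MPJPE-PA as used above can be computed with a standard similarity-Procrustes alignment; a self-contained sketch (per frame, joints in meters; not the paper's evaluation code):

```python
import numpy as np

def mpjpe_pa(pred, gt):
    """Procrustes-aligned mean per-joint position error for one frame.

    pred, gt: (J, 3) predicted / ground-truth joint positions. A
    similarity transform (rotation, translation, scale) is removed
    before averaging the per-joint Euclidean errors.
    """
    P, G = pred - pred.mean(0), gt - gt.mean(0)
    U, S, Vt = np.linalg.svd(P.T @ G)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))        # avoid reflections
    R = U @ D @ Vt                                  # optimal rotation
    s = (S * np.diag(D)).sum() / np.sum(P ** 2)     # optimal scale
    aligned = s * P @ R + gt.mean(0)
    return np.mean(np.linalg.norm(aligned - gt, axis=-1))
```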
### Global Trajectories **iPhone Pose Accuracy**: We first compare the iPhone's self-localized poses to optical tracking with our MVS. To do so, we rigidly attach an Apriltag [35, 48, 72] to the iPhone and move the pair around. An Apriltag of roughly 5 cm side length can be tracked with millimeter accuracy. To compare its pose to the iPhone's pose, we must compute an alignment, the details of which are reported in the Supp. Mat. After alignment, the difference between the iPhone and Apriltag trajectories on a 15-second sequence is \(1.8\pm 0.9\) cm and \(0.4\pm 0.2\) deg, respectively. **Global SMPL Trajectories**: To evaluate the accuracy of the global trajectories, we asked half of our participants to move freely in the capture space while we track the iPhone with an Apriltag as above. This enables us to align the iPhone's and the MVS' tracking frames. For details, please refer to the Supp. Mat. After alignment, we compute the Euclidean distance between EMP's predicted trajectory and the ground-truth trajectory obtained on the stage. Over 5 sequences (approx. 3.9k frames) we found that EMP's trajectories lie on average within \(5.1\pm 3.2\) cm of the ground-truth, which is low considering a capture space diameter of \(2.5\) meters (see Fig. 3 for a visualization). To gauge the accuracy of the global trajectories in-the-wild, where we cannot track the iPhone, we asked some participants to return to the starting point at the end of the sequence. This allows us to compute a measure of drift for the in-the-wild sequences. For an indoor sequence of \(81\) meters, this error is \(23.4\) cm (or \(0.3\%\) of the total path length), and for an outdoor sequence of \(112\) meters length it is \(73.0\) cm (\(0.7\%\)), respectively (see also Fig. 9 for a visualization).
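The loop-closure drift reported above follows directly from the estimated root trajectory; a minimal sketch:

```python
import numpy as np

def loop_closure_drift(root_traj):
    """Drift for a loop-closing sequence.

    root_traj: (T, 3) estimated global SMPL root translations. Since the
    subject returns to the starting point, the end-to-start distance
    relative to the total path length approximates the accumulated drift.
    """
    drift = np.linalg.norm(root_traj[-1] - root_traj[0])
    path_len = np.sum(np.linalg.norm(np.diff(root_traj, axis=0), axis=-1))
    return drift, 100.0 * drift / path_len          # meters, percent
```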
Further, to shed more light onto the pose diversity of EMDB compared to our closest related work, 3DPW [70], we project all poses of both datasets into VPoser's [52] latent space, run PCA, and plot the first two principal components in Fig. 5. We make several observations: i) EMDB covers a larger area than 3DPW. ii) The additional area is made up of complex and diverse poses. iii) The highlighted poses of 3DPW around the lower boundary lack diversity. iv) Outliers on 3DPW can be broken poses, while the closest EMDB pose is still valid (see right-most pose pair). We provide visualizations of our dataset's quality in Fig. 7. The recording of this dataset has been approved by our institution's ethics committee. All subjects have participated voluntarily and gave written consent for the capture and the release of their data. ### Baselines on EMDB We evaluate two tasks on EMDB: camera-local 3D human pose estimation from monocular RGB images and the emerging task of global trajectory prediction. To this end, we partition EMDB into two parts: EMDB 1, which consists of our most challenging sequences (\(17\) sequences of a total of \(24\,117\) frames), and EMDB 2 with \(25\) sequences (\(43\,120\) frames) featuring meaningful global trajectories. **Monocular RGB-based Pose Estimation**: We evaluate a total of \(8\) recent SOTA methods on EMDB 1. Please refer to Tab. 3 for an overview of the results. We follow the AGORA protocol [51] and compute the MPJPE and MVE metrics with both a Procrustes alignment (*-PA) and a hip-alignment pre-processing step. In addition, we follow sensor-based pose estimation work and report the joint angular error MPJAE and the jitter metric [79]. To provide a fair evaluation and comparison between baselines, we provide ground-truth bounding boxes for methods that accept them, or tightly crop the image to the human and re-scale it to the resolution the method requires. Hence, only ROMP [62] takes the input images as is. Also, we exclude the few frames where the human is entirely occluded. We use the HRNet version of HybrIK [37] - an improved variant of their originally published model. For FastMETRO [13] we use their biggest model (*-L) and evaluate both with and without the SMPL regression head. None of the methods are fed any knowledge about the camera, and comparisons to the ground-truth are performed in camera-relative coordinates. We use the SMPL gender(s) that the respective method was trained with. **Results**: Tab. 3 reveals HybrIK [37] as the best performer. Nonetheless, an MPJPE-PA error of \(>65\) mm suggests that there is a lot of room for improvement. As is noted in AGORA [51], we highlight that the MPJPE-PA is a very forgiving metric due to the Procrustes alignment that removes rotation, translation, and scale. We have noticed that a good MPJPE-PA does not always translate to visually pleasing results, a circumstance supported by the rather high jitter and MPJPE values for all baselines (see also the supp. video). Similarly, we observe very high standard deviations, a statistic that common benchmarks tend to neglect. Furthermore, we notice high angular errors of \(>23^{\circ}\) on average for all methods. These results, and the fact that we used ground-truth bounding boxes for all methods except ROMP, suggest that there is ample space for future research in this direction using EMDB. We show selected results for each baseline in Fig. 7 and further highlight a common failure case in Fig. 8, where the baseline method fails to capture the lower arm rotations. Note that such a failure case is not accounted for by the MPJPE metric, which is why we also report angular errors. **Global Trajectory Estimation**: As a second task, we evaluate GLAMR [81] on EMDB 2. We use GLAMR's publicly available code to run and evaluate its performance.
This protocol computes global MPJPE, MVE, and acceleration metrics on windows of 10 seconds length, where the beginning of each window is aligned to the ground-truth trajectory. We found that GLAMR achieves a G-MPJPE of \(3\,193\) mm, a G-MVE of \(3\,203\) mm, and an acceleration of \(12.6\) mm s\({}^{-2}\). We visualize one sequence in Fig. 9, where we observe that the GLAMR prediction drifts significantly from our provided trajectories. We believe EMDB will help to boost future methods' performance on this task. \begin{table} \begin{tabular}{l c c|c c|c c|c} \hline \hline Method & MPJPE \(\downarrow\) & MPJPE-PA \(\downarrow\) & MVE \(\downarrow\) & MVE-PA \(\downarrow\) & MPJAE \(\downarrow\) & MPJAE-PA \(\downarrow\) & Jitter \(\downarrow\) \\ & [mm] & [mm] & [mm] & [mm] & [deg] & [deg] & [10m s\({}^{-3}\)] \\ \hline PyMAF [82] & 131.1 \(\pm\) 54.9 & 82.9 \(\pm\) 38.2 & 160.0 \(\pm\) 64.5 & 98.1 \(\pm\) 44.4 & 28.5 \(\pm\) 12.5 & 25.7 \(\pm\) 10.1 & 81.8 \(\pm\) 25.6 \\ \hline LGD [60] & 115.8 \(\pm\) 64.5 & 81.1 \(\pm\) 51.1 & 140.6 \(\pm\) 75.8 & 95.7 \(\pm\) 56.8 & 25.2 \(\pm\) 13.3 & 25.6 \(\pm\) 15.3 & 73.0 \(\pm\) 38.5 \\ \hline ROMP [62] & 112.7 \(\pm\) 48.0 & 75.2 \(\pm\) **33.0** & 134.9 \(\pm\) 56.1 & 90.6 \(\pm\) 38.4 & 26.6 \(\pm\) 10.4 & 24.0 \(\pm\) 8.7 & 71.3 \(\pm\) 25.3 \\ PARE [33] & 113.9 \(\pm\) 49.5 & 72.2 \(\pm\) 33.9 & 133.2 \(\pm\) 57.4 & 85.4 \(\pm\) 39.1 & 24.7 \(\pm\) **9.8** & 22.4 \(\pm\) 8.8 & 75.1 \(\pm\) 22.5 \\ \hline GLAMR [81] & 107.8 \(\pm\) 50.1 & 71.0 \(\pm\) 36.6 & 128.2 \(\pm\) 58.8 & 85.5 \(\pm\) 40.9 & 25.5 \(\pm\) 12.6 & 23.5 \(\pm\) 11.4 & 67.4 \(\pm\) 32.3 \\ FastMETRO-L [13] & 115.0 \(\pm\) 9.1 & 72.7 \(\pm\) 47.4 & 133.6 \(\pm\) 109.7 & 86.0 \(\pm\) 55.4 & 25.1 \(\pm\) 16.0 & 22.9 \(\pm\) 12.7 & 81.3 \(\pm\) 38.7 \\ \hline CLIFF [38] & 103.1 \(\pm\) **43.7** & 68.8 \(\pm\) 33.8 & 122.9 \(\pm\) **49.5** & 81.3 \(\pm\) **37.9** & **23.1 \(\pm\) 9.9** & **21.6 \(\pm\) 8.6** & 55.5 \(\pm\) **17.9** \\ FastMETRO-L* [13] & 108.1 \(\pm\) 52.9 & 66.8 \(\pm\) 36.6 & **119.2** \(\pm\) 59.7 & 81.2 \(\pm\) 43.9 & n/a & n/a & 185.9 \(\pm\) 51.0 \\ HybrIK [37] & **103.0** \(\pm\) 44.3 & **65.6** \(\pm\) **33.3** & 122.2 \(\pm\) 50.5 & **80.4** \(\pm\) 39.1 & 24.5 \(\pm\) 11.3 & 23.1 \(\pm\) 11.1 & **49.2 \(\pm\) 18.5** \\ \hline \hline \end{tabular} \end{table} Table 3: Evaluations of state-of-the-art methods on EMDB 1. Ordered descendingly by MPJPE-PA. Best results in **bold**, second best underlined. FastMETRO-L*: version without SMPL regression head, _i.e._, the MPJPE is only evaluated on 14 joints as dictated by its model architecture. ## 8 Conclusion **Conclusion**: We present EMDB, the first comprehensive dataset to provide accurate SMPL poses, shapes and trajectories in an unrestricted, mobile, in-the-wild setting. Our results indicate a clear need for sensor-based performance capture to procure high-quality 3D human motion and push the boundaries of monocular RGB-based pose estimators. **Limitations**: EMDB does not contain multi-person sequences, because using multiple EM systems requires non-trivial changes to avoid cross-talk and interference between sensors. Furthermore, there are no sensors on the feet, as indoor floors often contain metal beams that would disturb the readings. Lastly, the quality of our camera trajectories is upper-bounded by the quality of Apple's AR toolkit. **Acknowledgments**: We thank Robert Wang, Emre Aksan, Braden Copple, Kevin Harris, Mishael Herrmann, Mark Hogan, Stephen Olsen, Lingling Tao, Christopher Twigg, and Yi Zhao for their support. 
Thanks to Dean Bakker, Andrew Searle, and Stefan Walter for their help with our infrastructure. Thanks to Marek, developer of record3d, for his help with the app. Thanks to Laura Wilfroth and Deniz Yildiz for their assistance with capture. Thanks to Dario Mylonopoulos for his priceless work on aitviewer which we used extensively in this work. We are grateful to all our participants for their valued contribution to this research. Computations were carried out in part on the ETH Euler cluster. Figure 8: Common failure case where the baseline (here ROMP [62]) fails to capture the lower arm rotations. Figure 7: Example images and reference poses appearing in EMDB, alongside comparisons to the outputs of recent state-of-the-art RGB-based pose estimation methods. Figure 9: (Left) GLAMR [81] results projected into the camera at the start and end of a loop-closing sequence. (Right) GLAMR’s global trajectories compared to ours.
2309.10114
Mixed Graph Signal Analysis of Joint Image Denoising / Interpolation
A noise-corrupted image often requires interpolation. Given a linear denoiser and a linear interpolator, when should the operations be independently executed in separate steps, and when should they be combined and jointly optimized? We study joint denoising / interpolation of images from a mixed graph filtering perspective: we model denoising using an undirected graph, and interpolation using a directed graph. We first prove that, under mild conditions, a linear denoiser is a solution graph filter to a maximum a posteriori (MAP) problem regularized using an undirected graph smoothness prior, while a linear interpolator is a solution to a MAP problem regularized using a directed graph smoothness prior. Next, we study two variants of the joint interpolation / denoising problem: a graph-based denoiser followed by an interpolator has an optimal separable solution, while an interpolator followed by a denoiser has an optimal non-separable solution. Experiments show that our joint denoising / interpolation method outperformed separate approaches noticeably.
Niruhan Viswarupan, Gene Cheung, Fengbo Lan, Michael Brown
2023-09-18T19:40:18Z
http://arxiv.org/abs/2309.10114v1
# Mixed Graph Signal Analysis of Joint Image Denoising / Interpolation ###### Abstract A noise-corrupted image often requires interpolation. Given a linear denoiser and a linear interpolator, when should the operations be independently executed in separate steps, and when should they be combined and jointly optimized? We study joint denoising / interpolation of images from a mixed graph filtering perspective: we model denoising using an undirected graph, and interpolation using a directed graph. We first prove that, under mild conditions, a linear denoiser is a solution graph filter to a maximum a posteriori (MAP) problem regularized using an undirected graph smoothness prior, while a linear interpolator is a solution to a MAP problem regularized using a directed graph smoothness prior. Next, we study two variants of the joint interpolation / denoising problem: a graph-based denoiser followed by an interpolator has an optimal separable solution, while an interpolator followed by a denoiser has an optimal non-separable solution. Experiments show that our joint denoising / interpolation method outperformed separate approaches noticeably. Niruhan Viswarupan\({}^{\dagger}\), Gene Cheung\({}^{\dagger}\), Fengbo Lan\({}^{\ddagger}\), Michael S. Brown\({}^{\dagger}\) \({}^{\dagger}\)York University, Canada \({}^{\ddagger}\)Hong Kong Polytechnic University Footnote \({}^{\dagger}\): [9] proved a similar theorem for a linear denoiser, but our proof based on linear algebra is simpler and more intuitive. See Section 3.1 for details. Image denoising, image interpolation, graph signal processing ## 1 Introduction Acquired sensor images are typically noise-corrupted, and a subsequent interpolation task is often required for processing and/or display purposes. For example, images captured on a Bayer-patterned grid require demosaicing [1, 2], and a perspective image may need rectification into a different viewpoint [3]. However, image denoisers and interpolators are often designed and optimized as individual components [4, 5, 6]. This leads to a natural question: should these denoisers and interpolators be independently executed in separate steps, or should they be combined and jointly optimized? We study the joint image denoising / interpolation problem from a _mixed_ graph filtering perspective, leveraging recent progress in the _graph signal processing_ (GSP) field [7, 8]. Our work makes two contributions. First, we prove that, under mild conditions, a linear denoiser is also an optimal graph filter to a _maximum a posteriori_ (MAP) denoising problem regularized using an _undirected_ graph smoothness prior [10] (Theorem 1), while a linear interpolator is also an optimal graph filter to a MAP interpolation problem regularized using a _directed_ graph smoothness prior [11] (Theorem 2). These two basic theorems establish one-to-one mappings from conventional linear image filters [12] to MAP-optimized graph filters for appropriately defined graphs. Considering both denoising and interpolation simultaneously thus naturally leads to a _mixed_ graph model with both directed and undirected edges--a formalism that provides a mathematical framework for joint optimization and explains under which scenarios a joint denoising / interpolation approach would be necessary. 
Our second contribution is to study two variants of the joint problem: i) an undirected-graph-based denoiser followed by a directed-graph-based interpolator has an optimal _separable_ solution (Corollary 1), and ii) a directed-graph-based interpolator followed by an undirected-graph-based denoiser has an optimal _non-separable_ solution (Corollary 2). In the latter case, we show that the solution comprises analytically derived denoising and interpolation operators that are easily computable functions of the input interpolator / denoiser. Experiments show that using these computed operators for joint denoising / interpolation of test images can outperform separate approaches noticeably. ## 2 Preliminaries ### GSP Definitions We first state basic definitions in GSP [7]. A graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) consists of a node set \(\mathcal{V}\) of size \(N\) and an edge set \(\mathcal{E}\) specified by \((i,j,w_{i,j})\), where \(i,j\in\mathcal{V}\) and \(w_{i,j}\in\mathbb{R}\) is a scalar weight of an edge \((i,j)\in\mathcal{E}\) reflecting the (dis)similarity between samples at nodes \(i\) and \(j\). We define an _adjacency matrix_ \(\mathbf{A}\in\mathbb{R}^{N\times N}\), where \(A_{i,j}=w_{i,j}\) if \((i,j)\in\mathcal{E}\), and \(A_{i,j}=0\) otherwise. We consider both _undirected_ and _directed_ graphs. An undirected graph \(\mathcal{G}\) means \(A_{i,j}=A_{j,i},\forall i,j\in\mathcal{V}\), and \(\mathbf{A}\) is symmetric. For undirected graphs, we define a diagonal _degree matrix_ \(\mathbf{D}\in\mathbb{R}^{N\times N}\), where \(D_{i,i}=\sum_{j}A_{i,j}\). Given \(\mathbf{A}\) and \(\mathbf{D}\), we define a _combinatorial graph Laplacian matrix_ \(\mathbf{L}\triangleq\mathbf{D}-\mathbf{A}\). If there exist self-loops, _i.e._, \(w_{i,i}\neq 0,\exists i,\) then the _generalized graph Laplacian matrix_ \(\mathbf{L}_{g}\triangleq\mathbf{D}-\mathbf{A}+\text{diag}(\mathbf{A})\) is typically used instead.
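These definitions translate directly into code; a small NumPy sketch constructing \(\mathbf{D}\), \(\mathbf{L}\), and \(\mathbf{L}_{g}\) from an adjacency matrix:

```python
import numpy as np

def graph_laplacians(A):
    """Degree, combinatorial, and generalized Laplacians from adjacency A.

    A: (N, N) adjacency matrix; symmetric if the graph is undirected.
    """
    D = np.diag(A.sum(axis=1))            # degree matrix, D_ii = sum_j A_ij
    L = D - A                             # combinatorial graph Laplacian
    Lg = D - A + np.diag(np.diag(A))      # generalized Laplacian (self-loops)
    return D, L, Lg
```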
### Graph Smoothness Priors There exist several graph smoothness priors in the GSP literature; each assumes signal \(\mathbf{x}\) is smooth w.r.t. the underlying graph \(\mathcal{G}\) but is expressed in slightly different mathematical terms. The most common is the _graph Laplacian regularizer_ (GLR) [10]: \[\mathbf{x}^{\top}\mathbf{L}\mathbf{x}=\sum_{(i,j)\in\mathcal{E}}w_{i,j}(x_{i}-x_{j})^{2}, \tag{1}\] where \(\mathbf{L}\) is the combinatorial graph Laplacian for graph \(\mathcal{G}\). If edge weights are non-negative, _i.e._, \(w_{i,j}\geq 0,\ \forall(i,j)\in\mathcal{E}\), then \(\mathbf{L}\) is provably _positive semi-definite_ (PSD) and \(\mathbf{x}^{\top}\mathbf{L}\mathbf{x}\geq 0,\forall\mathbf{x}\in\mathbb{R}^{N}\) [8]. GLR can be similarly defined using generalized Laplacian \(\mathbf{L}_{g}\) instead of \(\mathbf{L}\). A small GLR means a connected node-pair \((i,j)\) with large edge weight \(w_{i,j}\) should have similar values \(x_{i}\) and \(x_{j}\). Another common graph smoothness prior is the _graph shift variation_ (GSV) [11]. First, define the row-stochastic adjacency matrix as \(\mathbf{A}_{r}\triangleq\mathbf{D}^{-1}\mathbf{A}\). We then write GSV as \[\|\mathbf{x}-\mathbf{A}_{r}\mathbf{x}\|_{2}^{2}=\|(\mathbf{I}-\mathbf{A}_{r})\mathbf{x}\|_{2}^{2}=\|\mathbf{L}_{r}\mathbf{x}\|_{2}^{2}, \tag{2}\] where \(\mathbf{L}_{r}\triangleq\mathbf{D}^{-1}\mathbf{L}\) is the random walk graph Laplacian. GSV (2) can be interpreted as the \(\ell_{2}\)-norm difference between signal \(\mathbf{x}\) and its shifted version \(\mathbf{A}_{r}\mathbf{x}\), where the row-stochastic adjacency matrix \(\mathbf{A}_{r}\) is the shift operator. GSV can be rewritten as \(\mathbf{x}^{\top}\mathbf{L}_{r}^{\top}\mathbf{L}_{r}\mathbf{x}\), which was called _left eigenvectors of the random walk graph Laplacian_ (LERaG) in [13]. In contrast to GLR (1), one important characteristic of GSV (2) is that it is well defined even if the graph \(\mathcal{G}\) is directed.
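Both priors are cheap to evaluate; a sketch of GLR (1) and GSV (2), assuming strictly positive degrees so that \(\mathbf{D}^{-1}\) exists:

```python
import numpy as np

def glr(x, L):
    # Eq. (1): graph Laplacian regularizer x^T L x.
    return float(x @ L @ x)

def gsv(x, A):
    # Eq. (2): graph shift variation ||x - A_r x||_2^2 with A_r = D^{-1} A.
    # Well defined even for an asymmetric (directed-graph) A.
    A_r = A / A.sum(axis=1, keepdims=True)    # row-stochastic shift operator
    return float(np.sum((x - A_r @ x) ** 2))
```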
## 3 Linear Denoisers and Interpolators ### Denoiser: Undirected Graph MAP Problem We first establish a theorem to relate a linear denoiser \(\mathbf{\Psi}\in\mathbb{R}^{N\times N}\) to a MAP optimization problem regularized by an undirected graph. Consider a linear denoising operation written as \[\mathbf{x}=\mathbf{\Psi}\mathbf{y}, \tag{3}\] where \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{N}\) are the output and input of the denoiser, respectively. Consider next the following standard MAP optimization problem for denoising input \(\mathbf{y}\), using GLR (1) [10] as the signal prior: \[\min_{\mathbf{x}}\|\mathbf{y}-\mathbf{x}\|_{2}^{2}+\mu\mathbf{x}^{\top}\mathbf{L}\mathbf{x}, \tag{4}\] where \(\mu>0\) is a weight parameter. Assuming graph Laplacian \(\mathbf{L}\) is PSD, (4) is an unconstrained and convex _quadratic programming_ (QP) problem with solution \[\mathbf{x}^{\star}=\left(\mathbf{I}_{N}+\mu\mathbf{L}\right)^{-1}\mathbf{y}, \tag{5}\] where \(\mathbf{I}_{N}\) is an \(N\times N\) identity matrix. Note that the coefficient matrix \(\mathbf{I}_{N}+\mu\mathbf{L}\) is provably _positive definite_ (PD) and thus invertible. We now connect denoiser \(\mathbf{\Psi}\) (3) and the MAP problem (4): **Theorem 1**.: _Denoiser \(\mathbf{\Psi}\) (3) is the solution filter for the MAP problem (4) if \(\mathbf{L}=\mu^{-1}(\mathbf{\Psi}^{-1}-\mathbf{I}_{N})\), assuming matrix \(\mathbf{\Psi}\) is non-expansive, symmetric, and PD._ Proof.: Symmetry means \(\mathbf{\Psi}\) has real eigenvalues. Non-expansiveness and PDness mean \(\mathbf{\Psi}\) is invertible with positive eigenvalues \(0<\lambda_{k}\leq 1,\forall k\). Thus, \(\mathbf{\Psi}^{-1}\) exists and has positive eigenvalues \(\{\lambda_{k}^{-1}\}\). Condition \(\lambda_{k}\leq 1,\forall k\) means that \(1\leq\lambda_{k}^{-1},\forall k\). Thus, the eigenvalues of \(\mathbf{\Psi}^{-1}-\mathbf{I}_{N}\) are \(\lambda_{k}^{-1}-1\geq 0,\forall k\). This implies \(\mathbf{L}=\mu^{-1}(\mathbf{\Psi}^{-1}-\mathbf{I}_{N})\) is PSD, and thus quadratic objective (4) is convex. Taking the derivative w.r.t. \(\mathbf{x}\) and setting it to zero, optimization (4) has (5) as solution. Inserting \(\mathbf{L}=\mu^{-1}(\mathbf{\Psi}^{-1}-\mathbf{I}_{N})\) into (5), we get \(\mathbf{x}^{\star}=\mathbf{\Psi}\mathbf{y}\), and thus \(\mathbf{\Psi}\) is the resulting solution filter. **Remarks**: In the general case, the PSD matrix \(\mathbf{L}_{g}=\mu^{-1}(\mathbf{\Psi}^{-1}-\mathbf{I}_{N})\) corresponding to a non-expansive, symmetric and PD \(\mathbf{\Psi}\) is a generalized graph Laplacian to a graph with positive / negative edges and self-loops. Nonetheless, Theorem 1 states that, under "mild" conditions, a graph filter--solution to MAP problem (4) regularized by an _undirected_ graph--is as expressive as a linear denoiser \(\mathbf{\Psi}\). One benefit of Theorem 1 is _interpretability_: any linear denoiser \(\mathbf{\Psi}\) satisfying the aforementioned requirements can now be interpreted as a _graph filter_ corresponding to an _undirected_ graph \(\mathcal{G}^{u}\), specified by \(\mathbf{L}=\mu^{-1}(\mathbf{\Psi}^{-1}-\mathbf{I}_{N})\), given that \(\mathbf{L}\) is symmetric. In fact, the _bilateral filter_ (BF) [14] has been shown to be a graph filter in [15], but Theorem 1 provides a more general statement.
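Theorem 1 is easy to sanity-check numerically; the toy example below constructs a symmetric, PD, non-expansive \(\mathbf{\Psi}\), recovers \(\mathbf{L}=\mu^{-1}(\mathbf{\Psi}^{-1}-\mathbf{I}_{N})\), and verifies that the MAP solution (5) reproduces \(\mathbf{\Psi}\mathbf{y}\) (a sketch, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
N, mu = 8, 0.3
B = rng.standard_normal((N, N))
S = B @ B.T                                  # symmetric PSD
Psi = np.linalg.inv(np.eye(N) + S)           # symmetric PD, eigenvalues in (0, 1]
L = (np.linalg.inv(Psi) - np.eye(N)) / mu    # recovered graph Laplacian (PSD)
y = rng.standard_normal(N)
# The MAP solution (5) equals the denoiser output, as Theorem 1 states.
assert np.allclose(np.linalg.solve(np.eye(N) + mu * L, y), Psi @ y)
```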
### Interpolator: Directed Graph MAP Problem We next investigate a linear interpolator \(\mathbf{\Theta}\in\mathbb{R}^{N\times M}\) that interpolates \(N\) new pixels from \(M\) original pixels \(\mathbf{y}\in\mathbb{R}^{M}\): \[\mathbf{x}=\left[\begin{array}{c}\mathbf{I}_{M}\\ \mathbf{\Theta}\end{array}\right]\mathbf{y}, \tag{6}\] where \(\mathbf{x}=[\mathbf{y}^{\top}\quad\mathbf{\tilde{x}}^{\top}]^{\top}\in\mathbb{R}^{M+N}\) is the length-\((M+N)\) target signal that retains the original \(M\) pixels. We define a MAP optimization objective for interpolation, similar to the previous (4) for denoising. Denote by \(\mathbf{H}=[\mathbf{I}_{M}\quad\mathbf{0}_{M,N}]\) an \(M\times(M+N)\) _sampling matrix_ that selects the \(M\) original pixels from signal \(\mathbf{x}\), where \(\mathbf{0}_{M,N}\) is an \(M\times N\) matrix of zeros. Denote by \(\mathbf{A}\) an _asymmetric_ adjacency matrix specifying _directional_ edges in a directed graph \(\mathcal{G}^{d}\) for signal \(\mathbf{x}\). Specifically, \(\mathbf{A}\) describes edges only from the \(M\) original pixels to the \(N\) new pixels, _i.e._, \[\mathbf{A}=\left[\begin{array}{cc}\mathbf{0}_{M,M}&\mathbf{A}_{M,N}\\ \mathbf{0}_{N,M}&\mathbf{0}_{N,N}\end{array}\right]. \tag{7}\] We now write a MAP optimization objective using GSV (2) as the signal prior: \[\min_{\mathbf{x}}\|\mathbf{y}-\mathbf{H}\mathbf{x}\|_{2}^{2}+\gamma\|\mathbf{H}(\mathbf{x}-\mathbf{A}\mathbf{x})\|_{2}^{2}, \tag{8}\] where \(\gamma>0\) is a weight parameter. GSV \(\|\mathbf{H}(\mathbf{x}-\mathbf{A}\mathbf{x})\|_{2}^{2}\) states that a smooth graph signal \(\mathbf{x}\) should be similar to its shifted version \(\mathbf{A}\mathbf{x}\), but we evaluate only the \(M\) original pixels in the objective. Note that (8) is convex for any definition of \(\mathbf{A}\). We state formally a theorem to connect the interpolator in (6) and the MAP problem (8). **Theorem 2**.: _The interpolator \([\mathbf{I}_{M};\mathbf{\Theta}]\) (6) is the solution filter to the MAP problem (8) if \(M=N\), \(\mathbf{\Theta}\) is invertible, and \(\mathbf{A}_{M,N}=\mathbf{\Theta}^{-1}\)._ Proof.: We rewrite \(\mathbf{H}(\mathbf{x}-\mathbf{A}\mathbf{x})=\mathbf{H}(\mathbf{I}-\mathbf{A})\mathbf{x}\). Given that (8) is convex for any \(\mathbf{A}\), we take the derivative w.r.t. \(\mathbf{x}\) and set it to 0, resulting in \[\left(\mathbf{H}^{\top}\mathbf{H}+\gamma(\mathbf{I}-\mathbf{A})^{\top}\mathbf{H}^{\top}\mathbf{H}(\mathbf{I}-\mathbf{A})\right)\mathbf{x}=\mathbf{H}^{\top}\mathbf{y}. \tag{9}\] Given the definitions of \(\mathbf{H}\) and \(\mathbf{A}\), we can rewrite (9) as \[\underbrace{\left[\begin{array}{cc}(1+\gamma)\mathbf{I}_{M}&-\gamma\mathbf{A}_{M,N}\\ -\gamma\mathbf{A}_{N,M}&\gamma\mathbf{A}_{N,N}^{2}\end{array}\right]}_{\mathbf{C}}\mathbf{x}=\mathbf{H}^{\top}\mathbf{y}, \tag{10}\] where \(\mathbf{A}_{N,N}^{2}=\mathbf{A}_{N,M}\mathbf{A}_{M,N}\). Using a matrix inversion formula [16], \[\left[\begin{array}{cc}\dot{\mathbf{A}}&\dot{\mathbf{B}}\\ \dot{\mathbf{C}}&\dot{\mathbf{D}}\end{array}\right]^{-1}=\left[\begin{array}{cc}\dot{\mathbf{P}}&-\dot{\mathbf{P}}\dot{\mathbf{B}}\dot{\mathbf{D}}^{-1}\\ -\dot{\mathbf{D}}^{-1}\dot{\mathbf{C}}\dot{\mathbf{P}}&\dot{\mathbf{D}}^{-1}+\dot{\mathbf{D}}^{-1}\dot{\mathbf{C}}\dot{\mathbf{P}}\dot{\mathbf{B}}\dot{\mathbf{D}}^{-1}\end{array}\right] \tag{11}\] where \(\dot{\mathbf{P}}=(\dot{\mathbf{A}}-\dot{\mathbf{B}}\dot{\mathbf{D}}^{-1}\dot{\mathbf{C}})^{-1}\), we first compute \(\dot{\mathbf{P}}\) for the coefficient matrix \(\mathbf{C}\) in (10) as \[\dot{\mathbf{P}}=\left((1+\gamma)\mathbf{I}_{M}-(-\gamma\mathbf{A}_{M,N})(\gamma\mathbf{A}_{N,N}^{2})^{-1}(-\gamma\mathbf{A}_{N,M})\right)^{-1}=\left((1+\gamma)\mathbf{I}_{M}-\gamma\mathbf{A}_{M,N}(\mathbf{A}_{N,M}\mathbf{A}_{M,N})^{-1}\mathbf{A}_{N,M}\right)^{-1}\stackrel{{(a)}}{{=}}\left((1+\gamma)\mathbf{I}_{M}-\gamma\mathbf{A}_{M,N}\mathbf{A}_{M,N}^{-1}\mathbf{A}_{N,M}^{-1}\mathbf{A}_{N,M}\right)^{-1}=\left((1+\gamma)\mathbf{I}_{M}-\gamma\mathbf{I}_{M}\right)^{-1}=\mathbf{I}_{M}, \tag{12}\] where \((a)\) holds because \(\mathbf{A}_{M,N}\) and \(\mathbf{A}_{N,M}\) are invertible under the assumptions of Theorem 2. Thus, \[\mathbf{x}=\mathbf{C}^{-1}\mathbf{H}^{\top}\mathbf{y}=\left[\begin{array}{c}\mathbf{I}_{M}\\ -(\gamma\mathbf{A}_{N,N}^{2})^{-1}(-\gamma\mathbf{A}_{N,M})\mathbf{I}_{M}\end{array}\right]\mathbf{y}=\left[\begin{array}{c}\mathbf{I}_{M}\\ \mathbf{A}_{M,N}^{-1}\mathbf{A}_{N,M}^{-1}\mathbf{A}_{N,M}\end{array}\right]\mathbf{y}=\left[\begin{array}{c}\mathbf{I}_{M}\\ \mathbf{A}_{M,N}^{-1}\end{array}\right]\mathbf{y}. \tag{13}\] Given \(\mathbf{A}_{M,N}^{-1}=\mathbf{\Theta}\) by assumption, we conclude that \[\mathbf{x}=\left[\begin{array}{c}\mathbf{I}_{M}\\ \mathbf{\Theta}\end{array}\right]\mathbf{y}. \tag{14}\] Hence, \([\mathbf{I}_{M};\mathbf{\Theta}]\) is the solution filter to (8). **Remarks**: Theorem 2 states that, under "mild" conditions, a graph filter--solution to MAP problem (8) regularized by a _directed_ graph--is as expressive as a general linear interpolator \(\mathbf{\Theta}\). The requirements for Theorem 2 mean that the \(M\) interpolated pixels are linearly independent. Intuitively, using _bidirectional_ edges \((i,j)\) in an _undirected_ graph for denoising makes sense; the uncertainty in observed noisy pixels \(i\) and \(j\) means that their reconstructions depend on each other. In contrast, using a _directional_ edge \([i,j]\) in a _directed_ graph for interpolation is reasonable; original pixel \(i\) should influence interpolated pixel \(j\) but not vice versa.
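Theorem 2 admits the same kind of numerical check: build \(\mathbf{A}\) from \(\mathbf{A}_{M,N}=\mathbf{\Theta}^{-1}\), solve the normal equations of (8), and compare against \([\mathbf{I}_{M};\mathbf{\Theta}]\mathbf{y}\). A sketch; the random \(\mathbf{\Theta}\) is shifted toward diagonal dominance so it is invertible with high probability:

```python
import numpy as np

rng = np.random.default_rng(1)
M, gamma = 6, 0.5
Theta = rng.standard_normal((M, M)) + 6.0 * np.eye(M)   # invertible w.h.p.
A = np.zeros((2 * M, 2 * M))
A[:M, M:] = np.linalg.inv(Theta)                        # A_{M,N} = Theta^{-1}
H = np.hstack([np.eye(M), np.zeros((M, M))])            # sampling matrix
B = H @ (np.eye(2 * M) - A)                             # H(I - A)
C = H.T @ H + gamma * B.T @ B                           # normal equations of (8)
y = rng.standard_normal(M)
x = np.linalg.solve(C, H.T @ y)
assert np.allclose(x[:M], y) and np.allclose(x[M:], Theta @ y)
```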
## 4 Joint Denoising / Interpolation Having developed Theorem 1 and Theorem 2 to relate linear denoiser \(\mathbf{\Psi}\) and linear interpolator \(\mathbf{\Theta}\) to graph-regularized MAP problems (4) and (8) respectively, we study different joint denoising / interpolation formulations in this section. ### Joint Formulation with Separable Solution Denote by \(\mathbf{A}\in\mathbb{R}^{(M+N)\times(M+N)}\) in the form (7) an adjacency matrix of a _directed_ graph \(\mathcal{G}^{d}\) connecting the \(M\) original pixels to the \(N\) new pixels, corresponding to an interpolator \(\mathbf{\Theta}\). Further, denote by \(\mathbf{L}\in\mathbb{R}^{M\times M}\) a graph Laplacian matrix for an _undirected_ graph \(\mathcal{G}^{u}\) inter-connecting the \(M\) noisy original pixels, corresponding to a denoiser \(\mathbf{\Psi}\). One direct formulation for joint denoising / interpolation is to simply combine the terms in MAP objectives (4) and (8) as \[\min_{\mathbf{x}}\|\mathbf{y}-\mathbf{H}\mathbf{x}\|_{2}^{2}+\gamma\|\mathbf{H}(\mathbf{x}-\mathbf{A}\mathbf{x})\|_{2}^{2}+\mu(\mathbf{H}\mathbf{x})^{\top}\mathbf{L}(\mathbf{H}\mathbf{x}). \tag{15}\] In words, (15) states that the sought signal \(\mathbf{x}\) should be smooth w.r.t. _two_ graphs \(\mathcal{G}^{d}\) and \(\mathcal{G}^{u}\): i) \(\mathbf{x}\) should be similar to its shifted version \(\mathbf{A}\mathbf{x}\), where \(\mathbf{A}\) is an adjacency matrix for a _directed_ graph \(\mathcal{G}^{d}\), and ii) the original pixels \(\mathbf{H}\mathbf{x}\) should be smooth w.r.t. an _undirected_ graph \(\mathcal{G}^{u}\) defined by \(\mathbf{L}\). We show that the optimal solution to the posed MAP problem (15) takes a particular form. **Corollary 1**.: (15) _has a separable solution._ Proof.: Optimization (15) is an unconstrained convex QP problem. Taking the derivative w.r.t. \(\mathbf{x}\) and setting it to \(0\), we get \[\left(\mathbf{H}^{\top}\mathbf{H}+\gamma\left((\mathbf{I}-\mathbf{A})^{\top}\mathbf{H}^{\top}\mathbf{H}(\mathbf{I}-\mathbf{A})\right)+\mu\mathbf{H}^{\top}\mathbf{L}\mathbf{H}\right)\mathbf{x}^{\star}=\mathbf{H}^{\top}\mathbf{y}. \tag{16}\] Denote by \(\mathbf{C}\) the coefficient matrix on the left-hand side. Given \(\mathbf{H}=[\mathbf{I}_{M}\ \mathbf{0}_{M,N}]\), \(\mathbf{H}^{\top}\mathbf{L}\mathbf{H}\) has the nonzero term \(\mathbf{L}\) only in the upper-left sub-matrix. Given (10), \(\mathbf{C}\) differs only in the upper-left block, _i.e._, \[\mathbf{C}=\left[\begin{array}{cc}(\gamma+1)\mathbf{I}_{M}+\mu\mathbf{L}&-\gamma\mathbf{A}_{M,N}\\ -\gamma\mathbf{A}_{N,M}&\gamma\mathbf{A}_{N,N}^{2}\end{array}\right]. \tag{17}\] Using the matrix inverse formula (11), we can first write the block \(\dot{\mathbf{P}}=(\dot{\mathbf{A}}-\dot{\mathbf{B}}\dot{\mathbf{D}}^{-1}\dot{\mathbf{C}})^{-1}\) as: \[\dot{\mathbf{P}}=\left((\gamma+1)\mathbf{I}_{M}+\mu\mathbf{L}-(-\gamma\mathbf{A}_{M,N})(\gamma\mathbf{A}_{N,N}^{2})^{-1}(-\gamma\mathbf{A}_{N,M})\right)^{-1}=((\gamma+1)\mathbf{I}_{M}+\mu\mathbf{L}-\gamma\mathbf{I}_{M})^{-1}=(\mathbf{I}_{M}+\mu\mathbf{L})^{-1}. \tag{18}\] Thus, the solution \(\mathbf{x}^{\star}\) for optimization (16) is \[\mathbf{x}^{\star}=\mathbf{C}^{-1}\mathbf{H}^{\top}\mathbf{y}=\left[\begin{array}{c}(\mathbf{I}_{M}+\mu\mathbf{L})^{-1}\\ -(\gamma\mathbf{A}_{N,N}^{2})^{-1}(-\gamma\mathbf{A}_{N,M})(\mathbf{I}_{M}+\mu\mathbf{L})^{-1}\end{array}\right]\mathbf{y}=\left[\begin{array}{c}(\mathbf{I}_{M}+\mu\mathbf{L})^{-1}\\ \mathbf{A}_{M,N}^{-1}(\mathbf{I}_{M}+\mu\mathbf{L})^{-1}\end{array}\right]\mathbf{y}=\left[\begin{array}{c}\mathbf{\Psi}\\ \mathbf{\Theta}\,\mathbf{\Psi}\end{array}\right]\mathbf{y}, \tag{19}\] where \(\mathbf{\Psi}=(\mathbf{I}_{M}+\mu\mathbf{L})^{-1}\) is the denoiser, and \(\mathbf{\Theta}=\mathbf{A}_{M,N}^{-1}\) is the interpolator. We see that solution (19) to optimization (15) is _entirely separable_: input \(\mathbf{y}\) first undergoes denoising via the original denoiser \(\mathbf{\Psi}\), then subsequently interpolation via the original interpolator \(\mathbf{\Theta}\). Figure 1: Illustrations of mixed graphs for joint interpolation / denoising. Pixels 5,6,7,8 are interpolated using pixels 1,2,3,4. Directed edges for interpolation are shown as red dashed arrows, while undirected edges for denoising are shown as green dashed lines. (a) corresponds to Corollary 1, where input pixels are denoised before interpolation. (b) corresponds to Corollary 2, where interpolated pixels are denoised after interpolation.
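Corollary 1 can be verified the same way: adding the GLR term on the original pixels to the setup of the previous check yields a solution that factors exactly into \(\mathbf{\Psi}\) followed by \(\mathbf{\Theta}\). A sketch with a randomly generated non-negative undirected graph:

```python
import numpy as np

rng = np.random.default_rng(2)
M, gamma, mu = 6, 0.5, 0.3
Theta = rng.standard_normal((M, M)) + 6.0 * np.eye(M)   # invertible w.h.p.
W = np.abs(rng.standard_normal((M, M)))
W = W + W.T
np.fill_diagonal(W, 0.0)                 # undirected weights, no self-loops
L = np.diag(W.sum(axis=1)) - W           # PSD combinatorial graph Laplacian
A = np.zeros((2 * M, 2 * M))
A[:M, M:] = np.linalg.inv(Theta)
H = np.hstack([np.eye(M), np.zeros((M, M))])
B = H @ (np.eye(2 * M) - A)
C = H.T @ H + gamma * B.T @ B + mu * H.T @ L @ H   # normal equations of (15)
y = rng.standard_normal(M)
x = np.linalg.solve(C, H.T @ y)
Psi = np.linalg.inv(np.eye(M) + mu * L)            # original denoiser
assert np.allclose(x[:M], Psi @ y) and np.allclose(x[M:], Theta @ Psi @ y)
```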
### Joint Formulation with Non-separable Solution Next, we consider a scenario where we introduce a GLR smoothness prior [10] for the \(N\) interpolated pixels instead of a smoothness prior for the \(M\) original pixels in (15), resulting in \[\min_{\mathbf{x}}\|\mathbf{y}-\mathbf{H}\mathbf{x}\|_{2}^{2}+\gamma\|\mathbf{H}(\mathbf{x}-\mathbf{A}\mathbf{x})\|_{2}^{2}+\kappa(\mathbf{G}\mathbf{x})^{\top}\bar{\mathbf{L}}(\mathbf{G}\mathbf{x}), \tag{20}\] where \(\mathbf{G}=[\mathbf{0}_{N,M}\ \mathbf{I}_{N}]\) selects only the \(N\) new pixels from signal \(\mathbf{x}\), and \(\bar{\mathbf{L}}\in\mathbb{R}^{N\times N}\) denotes a graph Laplacian matrix for an undirected graph \(\mathcal{G}^{u}\) connecting the interpolated pixels. **Corollary 2**.: (20) _has a non-separable solution._ Proof.: (20) is convex, quadratic and differentiable. The optimal solution \(\mathbf{x}^{\star}\) can be computed via a system of linear equations, \[\left(\mathbf{H}^{\top}\mathbf{H}+\gamma\left((\mathbf{I}-\mathbf{A})^{\top}\mathbf{H}^{\top}\mathbf{H}(\mathbf{I}-\mathbf{A})\right)+\mathcal{L}\right)\mathbf{x}^{\star}=\mathbf{H}^{\top}\mathbf{y}, \tag{21}\] where \(\mathcal{L}=\kappa\mathbf{G}^{\top}\bar{\mathbf{L}}\mathbf{G}\) is block-diagonal, _i.e._, \[\mathcal{L}=\left[\begin{array}{cc}\mathbf{0}_{M}&\mathbf{0}_{M,N}\\ \mathbf{0}_{N,M}&\kappa\bar{\mathbf{L}}\end{array}\right]. \tag{22}\] A similar derivation shows that the coefficient matrix \(\mathbf{C}\) changes from (17) to
Footnote 2: Though the solution \(\boldsymbol{\Theta}^{*}\boldsymbol{\Psi}^{*}\) to (20) is a sequence of two separate matrix operations, each matrix is a non-separable function of input matrices \(\{\mathbf{A},\bar{\mathbf{L}}\}\) or \(\{\boldsymbol{\Theta},\bar{\boldsymbol{\Psi}}\}\). **Remarks:**\(\boldsymbol{\Psi}^{*}\) is a denoiser because it denoises \(M\) original pixels. \(\boldsymbol{\Theta}^{*}\) is an interpolator because it operates on the \(M\) denoised original pixels and outputs \(N\) interpolated pixels. \(\boldsymbol{\Psi}^{*}\) and \(\boldsymbol{\Theta}^{*}\) are analytically der denoising and interpolation operators that are computable functions of original directed graph adjacency matrix \(\mathbf{A}\) for interpolation and undirected graph Laplacian \(\bar{\mathbf{L}}\) for denoising. We show next that using the derived operators for joint interpolation / denoising can result in performance better than separate schemes. ## 5 Experiments ### Experimental setup Experiments were conducted to test our derived operators in (25) for joint denoising / interpolation (denoted by joint) compared to original sequential operations (denoted by sequential) in the non-separable case. For denoisers, we employed Gaussian filter [12], bilateral filter (BF) [14] and nonlocal means (NLM) [17, 18]. For interpolators, we used linear operators for image rotation and warping using Homography transform. The output images were evaluated using signal-to-noise ratio (PSNR). Popular \(512\times 512\) grayscale images, Lena and peppers, were used for rotation and warping, respectively. The experiments were run in Matlab R2022a3. Gaussian noise of different variances were added to the images. Footnote 3: The code developed for these experiments are made available at our GitHub repository The joint denoising / interpolation operation was performed on \(10\times 10\) output patches. The size and location of the input patch from which the output patch is generated depend on the interpolation operation and the output location. To ensure that the number of input pixels is equal to the output pixels (_i.e._, \(M=N\)), we interpolated dummy pixels by adding rows to \(\boldsymbol{\Theta}\). It is also important to ensure that \(\boldsymbol{\Theta}\) is full rank (thus invertible) when adding new rows. To ensure denoiser \(\boldsymbol{\Psi}\) is non-expansive, symmetric and PD according to Theorem 1, we ran the Sinkhorn-Knopp procedure [19] for an input linear denoiser with non-negative entries and independent rows, so that matrix \(\boldsymbol{\Psi}\) was _double stochastic_, and thus its eigenvalues satisfied \(|\lambda_{i}|\leq 1,\forall i\). Note that for a chosen interpolation operation, \(\boldsymbol{\Theta}\) changed from patch to patch, and so the denoiser needed to adapt to the input and output dimensions when using the Gaussian filter. For BF, and NLM, the denoiser itself changed from patch to patch. To solve (21) in each iteration, pepg function in MATLAB implementing a version of conjugate gradient [20] was used. Image rotation was performed at 20 degrees anti-clockwise, and for image warping a homography matrix of [1, 0.2, 0; 0.1, 1, 0; 0, 0, 1] was used. For the experiments with Gaussian filter and BF, the hyperparameters were selected as \(\mu=0.3\), \(\gamma=0.5\), \(\kappa=0.3\). For Gaussian denoiser, a variance of \(0.3\) was used, and for BF, variance of \(0.3\) was used for both spatial and range kernels. 
## 5 Experiments ### Experimental setup Experiments were conducted to test our derived operators in (25) for joint denoising / interpolation (denoted by joint) compared to the original sequential operations (denoted by sequential) in the non-separable case. For denoisers, we employed the Gaussian filter [12], bilateral filter (BF) [14] and nonlocal means (NLM) [17, 18]. For interpolators, we used linear operators for image rotation and warping using a homography transform. The output images were evaluated using peak signal-to-noise ratio (PSNR). Popular \(512\times 512\) grayscale images, Lena and peppers, were used for rotation and warping, respectively. The experiments were run in MATLAB R2022a3. Gaussian noise of different variances was added to the images. Footnote 3: The code developed for these experiments is made available at our GitHub repository The joint denoising / interpolation operation was performed on \(10\times 10\) output patches. The size and location of the input patch from which the output patch is generated depend on the interpolation operation and the output location. To ensure that the number of input pixels is equal to the number of output pixels (_i.e._, \(M=N\)), we interpolated dummy pixels by adding rows to \(\boldsymbol{\Theta}\). It is also important to ensure that \(\boldsymbol{\Theta}\) is full rank (thus invertible) when adding new rows. To ensure denoiser \(\boldsymbol{\Psi}\) is non-expansive, symmetric and PD according to Theorem 1, we ran the Sinkhorn-Knopp procedure [19] on an input linear denoiser with non-negative entries and independent rows, so that matrix \(\boldsymbol{\Psi}\) was _doubly stochastic_, and thus its eigenvalues satisfied \(|\lambda_{i}|\leq 1,\forall i\). Note that for a chosen interpolation operation, \(\boldsymbol{\Theta}\) changed from patch to patch, and so the denoiser needed to adapt to the input and output dimensions when using the Gaussian filter. For BF and NLM, the denoiser itself changed from patch to patch. To solve (21) in each iteration, the pcg function in MATLAB implementing a version of conjugate gradient [20] was used. Image rotation was performed at 20 degrees anti-clockwise, and for image warping a homography matrix of [1, 0.2, 0; 0.1, 1, 0; 0, 0, 1] was used. For the experiments with the Gaussian filter and BF, the hyperparameters were selected as \(\mu=0.3\), \(\gamma=0.5\), \(\kappa=0.3\). For the Gaussian denoiser, a variance of \(0.3\) was used, and for BF, a variance of \(0.3\) was used for both spatial and range kernels. For experiments with NLM, \(\gamma=0.6\), \(\kappa=0.2\), while \(\mu\) was kept the same. The patch size for NLM was \(3\times 3\) and the search window size was \(9\times 9\). ### Experimental Results Fig. 2 shows that joint performed better than sequential in general. While the performance of both schemes degraded as noise variance increased, the performance of sequential degraded faster than that of joint. In the experiment with image rotation and the bilateral denoiser, we observe a maximum PSNR gain of \(1.35\) dB, and when NLM was used, the maximum gain was \(0.77\) dB. Note that we have reported results for NLM over a larger range of noise variance, because NLM generally produced high-quality output, and thus the PSNR difference between joint and sequential is small at low noise levels. For image warping, the maximum gains were \(1.05\) dB and \(1.14\) dB for the bilateral and Gaussian denoisers, respectively. ## 6 Conclusion We presented two theorems, under mild conditions, connecting any linear denoiser / interpolator to an optimal graph filter regularized using an undirected / directed graph. The theorems demonstrate the generality of graph filters and provide graph interpretations for common linear denoisers / interpolators. Using the two theorems, we examine scenarios of joint denoising / interpolation where the optimal solution can be separable or non-separable. In the latter case, the analytically derived denoiser / interpolator can be computed as functions of the original denoiser and interpolator. We demonstrate that using these computed operators resulted in a noticeable performance gain over separate schemes in a range of joint denoising / interpolation settings. Figure 2: Plots comparing the performance of joint vs sequential over a range of noise variance using 4 combinations of interpolators and denoisers. **(a)** Image rotation + Bilateral denoiser. **(b)** Image rotation + Non-Local Means denoiser. **(c)** Image warping + Bilateral denoiser. **(d)** Image warping + Gaussian denoiser.
2309.12703
Irreducible unitary representations with non-zero relative Lie algebra cohomology of the Lie group $SO_0(2,m)$
By a theorem of D. Wigner, an irreducible unitary representation with non-zero $(\frak{g},K)$-cohomology has trivial infinitesimal character, and hence up to unitary equivalence, these are finite in number. We have determined the number of equivalence classes of these representations and the Poincar\'{e} polynomial of cohomologies of these representations for the Lie group $SO_0(2,m)$ for any positive integer $m.$ We have also determined, among these, which are discrete series representations and holomorphic discrete series representations.
Ankita Pal, Pampa Paul
2023-09-22T08:28:30Z
http://arxiv.org/abs/2309.12703v1
# Irreducible unitary representations with non-zero relative Lie algebra cohomology of the Lie group \(SO_{0}(2,m)\) ###### Abstract. By a theorem of D. Wigner, an irreducible unitary representation with non-zero \((\mathfrak{g},K)\)-cohomology has trivial infinitesimal character, and hence up to unitary equivalence, these are finite in number. We have determined the number of equivalence classes of these representations and the Poincaré polynomial of cohomologies of these representations for the Lie group \(SO_{0}(2,m)\) for any positive integer \(m.\) We have also determined, among these, which are discrete series representations and holomorphic discrete series representations. Key words and phrases: Lie group, Lie algebra, Dynkin diagram, \(\theta\)-stable parabolic subalgebra, cohomological induction 2020 Mathematics Subject Classification: 22E46, 17B10, 17B20, 17B22, 17B56 ## 1. Introduction Let \(G\) be a connected semisimple Lie group with finite centre, and \(K\) be a maximal compact subgroup of \(G\) with Cartan involution \(\theta.\) The differential of \(\theta\) at identity is denoted by the same notation \(\theta.\) Let \(\mathfrak{g}_{0}\) be the Lie algebra of \(G,\) \(\mathfrak{k}_{0}\) be the subalgebra of \(\mathfrak{g}_{0}\) corresponding to the Lie subgroup \(K\) of \(G,\) \(\mathfrak{h}_{0}\) be a \(\theta\)-stable fundamental Cartan subalgebra of \(\mathfrak{g}_{0},\) and \(\mathfrak{g}=\mathfrak{g}_{0}^{\mathbb{C}},\mathfrak{h}=\mathfrak{h}_{0}^{\mathbb{C}}.\) Corresponding to a \(\theta\)-stable parabolic subalgebra \(\mathfrak{q}\) of \(\mathfrak{g}_{0}\) containing \(\mathfrak{h}_{0},\) and a linear function \(\lambda\) on \(\mathfrak{h}\) in a certain good range, there is a cohomologically induced module \(A_{\mathfrak{q}}(\lambda),\) which is an irreducible unitary representation of \(G\) with infinitesimal character \(\chi_{\lambda}.\) These representations include all discrete series representations if \(\mathrm{rank}(G)=\mathrm{rank}(K).\) We are interested in those cohomologically induced modules \(A_{\mathfrak{q}}(\lambda)\) for which the infinitesimal character \(\chi_{\lambda}\) is trivial, that is, \(\chi_{\lambda}\) is the infinitesimal character of the trivial representation of \(G,\) and we denote it by \(A_{\mathfrak{q}}.\) By a theorem of D. Wigner, an irreducible representation with non-zero \((\mathfrak{g},K)\)-cohomology has trivial infinitesimal character. Hence there are only finitely many irreducible unitary representations with non-zero \((\mathfrak{g},K)\)-cohomology. In fact, the irreducible unitary representations with non-zero relative Lie algebra cohomology are exactly the irreducible unitary representations \(A_{\mathfrak{q}}.\) See §2 for more details. So the representations \(A_{\mathfrak{q}}\) are important in their own right. Apart from that, Borel [1] has conjectured that an irreducible unitary representation with non-zero \((\mathfrak{g},K)\)-cohomology is an automorphic representation for a suitable uniform discrete subgroup of \(G.\) Millson and Raghunathan [10] have proved this conjecture for the group \(G=SO(n,1),\) by constructing geometric cycles and using Matsushima's isomorphism [9]. 
So the representations \(A_{\mathfrak{q}}\) are possible candidates for automorphic representations of \(G.\) Collingwood [3] has determined the representations \(A_{\mathfrak{q}}\) and computed cohomologies of these representations for \(Sp(n,1),\) and the real rank one real form of \(F_{4}.\) Li and Schwermer [8] have determined the representations \(A_{\mathfrak{q}}\) and cohomologies of these representations for the connected non-compact real Lie group of type \(G_{2}.\) Mondal and Sankaran [11] have determined certain representations \(A_{\mathfrak{q}}\) of Hodge type \((p,p)\) when \(G/K\) is an irreducible Hermitian symmetric space. If \(G\) is a complex simple Lie group, the number of equivalence classes of the representations \(A_{\mathfrak{q}},\) and Poincaré polynomials of cohomologies of some of these representations have been determined in [13]. In this article, we have determined the number of equivalence classes of the representations \(A_{\mathfrak{q}},\) and Poincaré polynomials of cohomologies of these representations, when \(G=SO_{0}(2,m)\) for any positive integer \(m.\) The main results are stated as follows:

**Theorem 1.1**.: _(i) If \(A\) is the number of equivalence classes of irreducible unitary representations with non-zero \((\mathfrak{g},K)\)-cohomology of the Lie group \(SO_{0}(2,m)(m\in\mathbb{N})\), then_ \[A=\begin{cases}l(l+2)&\text{if }m=2l-1,\\ l^{2}+4l-3&\text{if }m=2l-2.\end{cases}\] _(ii) An \(A_{\mathfrak{q}}\) is unitarily equivalent to a discrete series representation of \(SO_{0}(2,m)\) with trivial infinitesimal character if and only if \((-\Delta(\mathfrak{u}\cap\mathfrak{p}_{-}))\cup\Delta(\mathfrak{u}\cap\mathfrak{p}_{+})=\Delta_{n}^{+}.\) Also if \(m\neq 2,\) an \(A_{\mathfrak{q}}\) is unitarily equivalent to a holomorphic discrete series representation of \(SO_{0}(2,m)\) with trivial infinitesimal character if and only if \(\Delta(\mathfrak{u}\cap\mathfrak{p}_{+})=\phi\) or \(\Delta_{n}^{+}.\) If \(D\) (respectively, \(D_{h}\)) is the number of equivalence classes of discrete series representations (respectively, holomorphic discrete series representations) of \(SO_{0}(2,m)\) with trivial infinitesimal character, then \(D=2l\) if \(m=2l-1\) or \(2l-2.\) Also \(D_{h}=2\) if \(m\neq 2.\) If \(m=2,\) then \(D_{h}=4.\)_

We have also determined the Poincaré polynomials of cohomologies of these representations in Tables 1 and 2. The proof of Th. 1.1 is given in §4.1. We have used Remark 3.3(iii) of [11] to prove Th. 1.1(i), and an alternative approach via \(\theta\)-stable parabolic subalgebras to determine the Poincaré polynomials of cohomologies of these representations.
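Theorem 1.1 lends itself to a brute-force cross-check for small \(l\). The following Python sketch is ours, not part of the paper: it encodes each root \(\beta\in\Delta_{n}^{+}\) (listed explicitly in §4) by its coefficient vector over the simple roots, works in the coordinates \(x_{i}=\langle\lambda,\phi_{i}\rangle\) (so dominance for the compact simple roots \(\phi_{2},\ldots,\phi_{l}\) becomes \(x_{i}\geq 0\) for \(i\geq 2\), valid for \(m\neq 2\)), and enumerates the possible sign patterns of \(\langle\lambda,\beta\rangle\) over an integer grid; the grid radius `R` is an assumption that we have found sufficient for small \(l\).

```python
from itertools import product

def noncompact_positive_roots(g, l):
    """Roots of Delta_n^+ as coefficient vectors (n_1, ..., n_l) over the
    simple roots phi_1, ..., phi_l; g = 'b' means so(2, 2l-1) with l >= 1,
    g = 'd' means so(2, 2l-2) with l >= 3 (the case m = 2 is excluded)."""
    roots = [tuple(1 if j <= i else 0 for j in range(1, l + 1))
             for i in range(1, (l if g == 'b' else l - 2) + 1)]
    if g == 'b':   # phi_1 + ... + phi_{i-1} + 2phi_i + ... + 2phi_l, i = l, ..., 2
        roots += [tuple(1 if j < i else 2 for j in range(1, l + 1))
                  for i in range(l, 1, -1)]
    else:          # xi_1, xi_2, then the chain above their join
        roots += [tuple(1 if j != l else 0 for j in range(1, l + 1)),
                  tuple(1 if j != l - 1 else 0 for j in range(1, l + 1))]
        roots += [tuple(1 if j < i or j > l - 2 else 2 for j in range(1, l + 1))
                  for i in range(l - 1, 1, -1)]
    return roots

def sign_patterns(g, l, R=6):
    """All (+1, -1, 0) patterns of <lambda, beta>, beta in Delta_n^+, as lambda
    runs over weights dominant for the compact simple roots phi_2, ..., phi_l."""
    roots, pats = noncompact_positive_roots(g, l), set()
    for x in product(range(-R, R + 1), *[range(R + 1)] * (l - 1)):
        vals = (sum(n * c for n, c in zip(r, x)) for r in roots)
        pats.add(tuple((v > 0) - (v < 0) for v in vals))
    return pats

for g, l in [('b', 2), ('b', 3), ('d', 3), ('d', 4)]:
    pats = sign_patterns(g, l)
    A = l * (l + 2) if g == 'b' else l * l + 4 * l - 3
    print(g, l, len(pats) == A, sum(0 not in p for p in pats) == 2 * l)
```

Each distinct pattern corresponds to one pair \((\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+}),\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-}))\), so the number of patterns should equal \(A\), and the patterns with no zero entry (Borel case) should number \(D=2l\).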
## 2. Irreducible unitary representations with non-zero \((\mathfrak{g},K)\)-cohomology

Let \(G\) be a connected semisimple Lie group with finite centre, and \(K\) be a maximal compact subgroup of \(G\) with Cartan involution \(\theta.\) The differential of \(\theta\) at identity is denoted by the same notation \(\theta.\) Let \(\mathfrak{g}_{0}=\mathrm{Lie}(G),\mathfrak{k}_{0}=\mathrm{Lie}(K),\) and \(\mathfrak{g}_{0}=\mathfrak{k}_{0}\oplus\mathfrak{p}_{0}\) be the Cartan decomposition corresponding to \(\theta.\) Let \(\mathfrak{g}=\mathfrak{g}_{0}^{\mathbb{C}},\mathfrak{k}=\mathfrak{k}_{0}^{\mathbb{C}}\subset\mathfrak{g},\mathfrak{p}=\mathfrak{p}_{0}^{\mathbb{C}}\subset\mathfrak{g}.\) A \(\theta\)-stable parabolic subalgebra of \(\mathfrak{g}_{0}\) is a parabolic subalgebra \(\mathfrak{q}\) of \(\mathfrak{g}\) such that (a) \(\theta(\mathfrak{q})=\mathfrak{q},\) and (b) \(\bar{\mathfrak{q}}\cap\mathfrak{q}=\mathfrak{l}\) is a Levi subalgebra of \(\mathfrak{q};\) where \(\bar{\ }\) denotes the conjugation of \(\mathfrak{g}\) with respect to \(\mathfrak{g}_{0}\). By (b), \(\mathfrak{l}\) is the complexification of a real subalgebra \(\mathfrak{l}_{0}\) of \(\mathfrak{g}_{0}\). Also \(\theta(\mathfrak{l}_{0})=\mathfrak{l}_{0}\), and \(\mathfrak{l}_{0}\cap\mathfrak{k}_{0}\) contains a maximal abelian subalgebra \(\mathfrak{t}_{0}\) of \(\mathfrak{k}_{0}\). Then \(\mathfrak{h}_{0}=\mathfrak{z}_{\mathfrak{g}_{0}}(\mathfrak{t}_{0})\) is a \(\theta\)-stable Cartan subalgebra of \(\mathfrak{g}_{0}.\) Let \(\mathfrak{t}=\mathfrak{t}_{0}^{\mathbb{C}}\subset\mathfrak{k},\) and \(\mathfrak{h}=\mathfrak{h}_{0}^{\mathbb{C}}\subset\mathfrak{g}.\) Note that \(\mathfrak{t},\mathfrak{h}\) are Cartan subalgebras of \(\mathfrak{k},\mathfrak{g}\) respectively and \(\mathfrak{h}\subset\mathfrak{q}.\) Let \(\mathfrak{u}\) be the nilradical of \(\mathfrak{q}\) so that \(\mathfrak{q}=\mathfrak{l}\oplus\mathfrak{u}.\) Then \(\mathfrak{u}\) is \(\theta\)-stable and so \(\mathfrak{u}=(\mathfrak{u}\cap\mathfrak{k})\oplus(\mathfrak{u}\cap\mathfrak{p}).\) If \(V\) is a finite dimensional complex \(A\)-module, where \(A\) is an abelian Lie algebra, we denote by \(\Delta(V)\) (or by \(\Delta(V,A)\)) the set of all non-zero weights of \(V\); by \(V^{\alpha},\) the weight space of \(V\) corresponding to a weight \(\alpha\in\Delta(V);\) and by \(\delta(V)\) (or by \(\delta(V,A)\)), \(1/2\) of the sum of elements in \(\Delta(V)\) counted with their respective multiplicities.
Fix a maximal abelian subspace \(\mathfrak{t}_{0}\) of \(\mathfrak{k}_{0}\) and \(\mathfrak{t}=\mathfrak{t}_{0}^{\mathbb{C}}\subset\mathfrak{k}.\) Since \(\mathfrak{k},\mathfrak{g}\) are \(\mathfrak{t}\)-modules (under the adjoint action), we have \[\mathfrak{k}=\mathfrak{t}\oplus\sum_{\alpha\in\Delta(\mathfrak{k},\mathfrak{t})}\mathfrak{k}^{\alpha},\text{ and }\mathfrak{g}=\mathfrak{h}\oplus\sum_{\alpha\in\Delta(\mathfrak{g},\mathfrak{t})}\mathfrak{g}^{\alpha}.\] Note that \(\Delta(\mathfrak{k},\mathfrak{t})\) is actually the set of all non-zero roots of \(\mathfrak{k}\) relative to the Cartan subalgebra \(\mathfrak{t}.\) Choose a system of positive roots \(\Delta_{\mathfrak{t}}^{+}\) in \(\Delta(\mathfrak{k},\mathfrak{t}).\) If \(x\in i\mathfrak{t}_{0}\) is such that \(\alpha(x)\geq 0\) for all \(\alpha\in\Delta_{\mathfrak{t}}^{+},\) then \(\mathfrak{q}_{x}=\mathfrak{h}\oplus\sum_{\alpha\in\Delta(\mathfrak{g},\mathfrak{t}),\alpha(x)\geq 0}\mathfrak{g}^{\alpha}\) is a \(\theta\)-stable parabolic subalgebra of \(\mathfrak{g}_{0},\) \(\mathfrak{l}_{x}=\mathfrak{h}\oplus\sum_{\alpha\in\Delta(\mathfrak{g},\mathfrak{t}),\alpha(x)=0}\mathfrak{g}^{\alpha}\) is the Levi subalgebra of \(\mathfrak{q}_{x},\) and \(\mathfrak{u}_{x}=\sum_{\alpha\in\Delta(\mathfrak{g},\mathfrak{t}),\alpha(x)>0}\mathfrak{g}^{\alpha}\) is the nilradical of \(\mathfrak{q}_{x}.\) If \(\mathfrak{q}\) is a \(\theta\)-stable parabolic subalgebra of \(\mathfrak{g}_{0},\) there exist \(k\in K\) and such an \(x\) with \(\mathrm{Ad}(k)(\mathfrak{q})=\mathfrak{q}_{x}.\) Now associated with a \(\theta\)-stable parabolic subalgebra \(\mathfrak{q},\) we have an irreducible unitary representation \(\mathcal{R}_{\mathfrak{q}}^{S}(\mathbb{C})=A_{\mathfrak{q}}\) of \(G\) with trivial infinitesimal character, where \(S=\dim(\mathfrak{u}\cap\mathfrak{k}).\) The associated \((\mathfrak{g},K)\)-module \(A_{\mathfrak{q},K}\) contains an irreducible \(K\)-submodule \(V\) of highest weight (with respect to \(\Delta_{\mathfrak{t}}^{+}\)) \(2\delta(\mathfrak{u}\cap\mathfrak{p},\mathfrak{t})=\sum_{\beta\in\Delta(\mathfrak{u}\cap\mathfrak{p},\mathfrak{t})}\beta,\) and it occurs with multiplicity one in \(A_{\mathfrak{q},K}\). Any other irreducible \(K\)-module that occurs in \(A_{\mathfrak{q},K}\) has highest weight of the form \(2\delta(\mathfrak{u}\cap\mathfrak{p},\mathfrak{t})+\sum_{\gamma\in\Delta(\mathfrak{u}\cap\mathfrak{p},\mathfrak{t})}n_{\gamma}\gamma,\) with \(n_{\gamma}\) a non-negative integer [17, Th. 2.5]. The \((\mathfrak{g},K)\)-modules \(A_{\mathfrak{q},K}\) were first constructed, in general, by Parthasarathy [12]. Vogan and Zuckerman [17] gave a construction of the \((\mathfrak{g},K)\)-modules \(A_{\mathfrak{q},K}\) via cohomological induction and Vogan [15] proved that these are unitarizable. Define an equivalence relation on the set of all \(\theta\)-stable parabolic subalgebras of \(\mathfrak{g}_{0}\) by: \(\mathfrak{q}\) is equivalent to \(\mathfrak{q}^{\prime}\) if either \(\mathrm{Ad}(k)(\mathfrak{q})=\mathfrak{q}^{\prime}\) for some \(k\in K,\) or \(\mathfrak{u}\cap\mathfrak{p}=\mathfrak{u}^{\prime}\cap\mathfrak{p}.\) Also unitary equivalence is an equivalence relation on the set of all irreducible unitary representations \(A_{\mathfrak{q}}.\) Then the set of all equivalence classes of \(\theta\)-stable parabolic subalgebras is in one-one correspondence with the set of all equivalence classes of the irreducible unitary representations \(A_{\mathfrak{q}}\) [14, Prop. 4.5]. If \(\mathfrak{q}\) is a \(\theta\)-stable parabolic subalgebra of \(\mathfrak{g},\) then the Levi subgroup \(L=\{g\in G:\operatorname{Ad}(g)(\mathfrak{q})=\mathfrak{q}\}\) is a connected reductive Lie subgroup of \(G\) with Lie algebra \(\mathfrak{l}_{0}.\) As \(\theta(\mathfrak{l}_{0})=\mathfrak{l}_{0},\) \(L\cap K\) is a maximal compact subgroup of \(L\). One has \[H^{r}(\mathfrak{g},K;A_{\mathfrak{q},K})\cong H^{r-R(\mathfrak{q})}(\mathfrak{l},L\cap K;\mathbb{C}),\] where \(R(\mathfrak{q}):=\dim(\mathfrak{u}\cap\mathfrak{p})\). Let \(Y_{\mathfrak{q}}\) denote the compact dual of the Riemannian globally symmetric space \(L/L\cap K\). Then \(H^{r}(\mathfrak{l},L\cap K;\mathbb{C})\cong H^{r}(Y_{\mathfrak{q}};\mathbb{C})\).
And hence \[H^{r}(\mathfrak{g},K;A_{\mathfrak{q},K})\cong H^{r-R(\mathfrak{q})}(Y_{\mathfrak{q}};\mathbb{C}).\] If \(P_{\mathfrak{q}}(t)\) denotes the Poincaré polynomial of \(H^{*}(\mathfrak{g},K;A_{\mathfrak{q},K}),\) then by the above result, we have \[P_{\mathfrak{q}}(t)=t^{R(\mathfrak{q})}P(Y_{\mathfrak{q}},t).\] Conversely, if \(\pi\) is an irreducible unitary representation of \(G\) with non-zero \((\mathfrak{g},K)\)-cohomology, then \(\pi\) is unitarily equivalent to \(A_{\mathfrak{q}}\) for some \(\theta\)-stable parabolic subalgebra \(\mathfrak{q}\) of \(\mathfrak{g}_{0}\) [17, Th. 4.1]. See also [16] for a beautiful description of the theory of \((\mathfrak{g},K)\)-modules \(A_{\mathfrak{q},K}.\) If \(\operatorname{rank}(G)=\operatorname{rank}(K)\) and \(\mathfrak{q}\) is a \(\theta\)-stable Borel subalgebra, that is, \(\mathfrak{q}\) is a Borel subalgebra of \(\mathfrak{g}\) containing a Cartan subalgebra of \(\mathfrak{k},\) then \(A_{\mathfrak{q}}\) is a discrete series representation of \(G\) with trivial infinitesimal character. In this case, \(R(\mathfrak{q})=\frac{1}{2}\dim(G/K),\) \(L\) is a maximal torus in \(K,\) and hence \[H^{r}(\mathfrak{g},K;A_{\mathfrak{q},K})=\begin{cases}0&\text{if }r\neq R(\mathfrak{q}),\\ \mathbb{C}&\text{if }r=R(\mathfrak{q}).\end{cases}\] If we take \(\mathfrak{q}=\mathfrak{g},\) then \(L=G\) and \(A_{\mathfrak{q}}=\mathbb{C},\) the trivial representation of \(G\). If \(G/K\) is Hermitian symmetric, choose a Borel-de Siebenthal positive root system in \(\Delta(\mathfrak{g},\mathfrak{t})\) containing \(\Delta_{\mathfrak{t}}^{+},\) with a unique non-compact simple root \(\nu;\) and define \(\mathfrak{p}_{+}=\sum_{\beta\in\Delta(\mathfrak{g},\mathfrak{t}),n_{\nu}(\beta)=1}\mathfrak{g}^{\beta},\mathfrak{p}_{-}=\sum_{\beta\in\Delta(\mathfrak{g},\mathfrak{t}),n_{\nu}(\beta)=-1}\mathfrak{g}^{\beta};\) where \(n_{\nu}(\beta)\) is the coefficient of \(\nu\) in the decomposition of \(\beta\) into simple roots. Then \(\mathfrak{p}=\mathfrak{p}_{+}\oplus\mathfrak{p}_{-},\mathfrak{u}\cap\mathfrak{p}=(\mathfrak{u}\cap\mathfrak{p}_{+})\oplus(\mathfrak{u}\cap\mathfrak{p}_{-}).\) Define \(R_{+}(\mathfrak{q})=\dim(\mathfrak{u}\cap\mathfrak{p}_{+}),R_{-}(\mathfrak{q})=\dim(\mathfrak{u}\cap\mathfrak{p}_{-}).\) So \(R(\mathfrak{q})=R_{+}(\mathfrak{q})+R_{-}(\mathfrak{q}).\) One has a Hodge decomposition \[H^{r}(\mathfrak{g},K;A_{\mathfrak{q},K})=\oplus_{p+q=r}H^{p,q}(\mathfrak{g},K;A_{\mathfrak{q},K})=H^{p,q}(\mathfrak{g},K;A_{\mathfrak{q},K})\cong H^{p-R_{+}(\mathfrak{q}),q-R_{-}(\mathfrak{q})}(Y_{\mathfrak{q}};\mathbb{C}),\] where \(p,q\) are the unique non-negative integers with \(p+q=r,p-q=R_{+}(\mathfrak{q})-R_{-}(\mathfrak{q}).\) See [2, Ch. II, §4], [4], [17]. The pair \((R_{+}(\mathfrak{q}),R_{-}(\mathfrak{q}))\) is referred to as the _Hodge type_ of the representation \(A_{\mathfrak{q}}.\)

## 3. Discrete series representations

We follow the notations from the previous section. Assume that \(\operatorname{rank}(G)=\operatorname{rank}(K),\) so that \(G\) admits discrete series representations.
A non-singular linear function \(\lambda\) on \(i\mathfrak{t}_{0}\) (relative to \(\Delta(\mathfrak{g},\mathfrak{t})\)), dominant with respect to \(\Delta_{\mathfrak{t}}^{+},\) defines uniquely a positive root system \(\Delta_{\lambda}^{+}\) of \(\Delta(\mathfrak{g},\mathfrak{t})\) containing \(\Delta_{\mathfrak{t}}^{+}.\) Define \(\delta_{\mathfrak{g}}=\frac{1}{2}\sum_{\alpha\in\Delta_{\lambda}^{+}}\alpha,\delta_{\mathfrak{t}}=\frac{1}{2}\sum_{\alpha\in\Delta_{\mathfrak{t}}^{+}}\alpha.\) If \(\lambda+\delta_{\mathfrak{g}}\) is analytically integral (that is, \(\lambda+\delta_{\mathfrak{g}}\) is the differential of a Lie group homomorphism on the Cartan subgroup of \(G\) corresponding to \(\mathfrak{t}_{0}\)), then there exists a discrete series representation \(\pi_{\lambda}\) with infinitesimal character \(\chi_{\lambda}\) (it is the character of the Verma module of \(\mathfrak{g}\) with highest weight \(\lambda-\delta_{\mathfrak{g}}\)); the associated \((\mathfrak{g},K)\)-module \(\pi_{\lambda,K}\) contains an irreducible \(K\)-submodule with highest weight \(\Lambda=\lambda+\delta_{\mathfrak{g}}-2\delta_{\mathfrak{t}},\) and it occurs with multiplicity one in \(\pi_{\lambda,K}.\) Any other irreducible \(K\)-module that occurs in \(\pi_{\lambda,K}\) has highest weight of the form \(\Lambda+\sum_{\alpha\in\Delta_{\lambda}^{+}}n_{\alpha}\alpha,\) with \(n_{\alpha}\) a non-negative integer. Up to unitary equivalence, these are all the discrete series representations of \(G.\) This \(\lambda\) is called the _Harish-Chandra parameter_, \(\Lambda\) is called the _Blattner parameter_ of the discrete series representation \(\pi_{\lambda}.\) The positive root system \(\Delta_{\lambda}^{+}\) is called the _Harish-Chandra root order_ corresponding to \(\lambda.\) If \(G/K\) is Hermitian symmetric, then \(\pi_{\lambda}\) is a holomorphic discrete series representation _if and only if_ the Harish-Chandra root order corresponding to \(\lambda\) is a Borel-de Siebenthal positive root system. See [6], [7].

## 4. Irreducible unitary representations with non-zero \((\mathfrak{g},K)\)-cohomology of the Lie group \(SO_{0}(2,m)\)

Let \(I_{2,m}=\left(\begin{array}{cc}-I_{2}&0\\ 0&I_{m}\end{array}\right)\), where \(I_{m}\) denotes the identity matrix of order \(m\). Let \(G\) be the group \(SO_{0}(2,m)\), the connected component of the group \(\{g\in SL(m+2,\mathbb{R}):g^{t}I_{2,m}g=I_{2,m}\}\). Then \(G\) is a Lie group with Lie algebra \(\mathfrak{g}_{0}=\mathfrak{so}(2,m)=\{\left(\begin{array}{cc}X_{1}&X_{2}\\ X_{2}^{t}&X_{3}\end{array}\right):\) all \(X_{i}\) real\(,X_{1},X_{3}\) skew symmetric of order \(2\) and \(m\) respectively, \(X_{2}\) arbitrary\(\}\). The map \(\theta:G\longrightarrow G\) given by \(\theta(g)=I_{2,m}gI_{2,m}\) for all \(g\in G\) is a Cartan involution with maximal compact subgroup \(K=\{\left(\begin{array}{cc}A&0\\ 0&B\end{array}\right):A\in SO(2),B\in SO(m)\}\cong SO(2)\times SO(m)\). The differential of \(\theta\) at the identity element of \(G\) is the map \(X\mapsto I_{2,m}XI_{2,m}\) for all \(X\in\mathfrak{g}_{0}\), and is denoted by the same notation \(\theta:\mathfrak{g}_{0}\longrightarrow\mathfrak{g}_{0}\). Then \(\theta:\mathfrak{g}_{0}\longrightarrow\mathfrak{g}_{0}\) is a Cartan involution and \(\mathfrak{g}_{0}=\mathfrak{k}_{0}\oplus\mathfrak{p}_{0}\) is the Cartan decomposition into the \(+1\)- and \(-1\)-eigenspaces of \(\theta\).
Note that \(\mathfrak{k}_{0}=\{\left(\begin{array}{cc}A&0\\ 0&B\end{array}\right):A\in\mathfrak{so}(2),B\in\mathfrak{so}(m)\}\cong\mathfrak{so}(2)\oplus\mathfrak{so}(m)\), and it is the Lie subalgebra of \(\mathfrak{g}_{0}\) corresponding to the connected Lie subgroup \(K\) of \(G\). Note that \(G/K\) is an irreducible Hermitian symmetric space of non-compact type. The complexification of \(\mathfrak{g}_{0}\) is \(\mathfrak{g}=\mathfrak{so}(m+2,\mathbb{C})\), and \[\mathfrak{g}=\begin{cases}\mathfrak{b}_{l}&\text{if }m=2l-1,\\ \mathfrak{d}_{l}&\text{if }m=2l-2.\end{cases}\] Let \(\mathfrak{k}=\mathfrak{k}_{0}^{\mathbb{C}}\subset\mathfrak{g},\mathfrak{p}=\mathfrak{p}_{0}^{\mathbb{C}}\subset\mathfrak{g}\), and \(\mathfrak{t}_{0}^{\prime}\) be a maximal abelian subspace of \(\mathfrak{so}(m)\). Then \(\mathfrak{t}_{0}=\mathfrak{so}(2)\oplus\mathfrak{t}_{0}^{\prime}\) is a maximal abelian subspace of \(\mathfrak{k}_{0}\), and \(\mathfrak{h}=\mathfrak{t}_{0}^{\mathbb{C}}\) is a Cartan subalgebra of \(\mathfrak{k}\) as well as of \(\mathfrak{g}\). Let \(\Delta=\Delta(\mathfrak{g},\mathfrak{h})\) be the set of all non-zero roots of \(\mathfrak{g}\) with respect to the Cartan subalgebra \(\mathfrak{h}\), similarly \(\Delta_{\mathfrak{t}}=\Delta(\mathfrak{k},\mathfrak{h})\) be the set of all non-zero roots of \(\mathfrak{k}\) with respect to \(\mathfrak{h}\), and \(\Delta_{n}=\Delta\setminus\Delta_{\mathfrak{t}}=\) the set of all non-compact roots of \(\mathfrak{g}\) with respect to \(\mathfrak{h}\). Then \(\mathfrak{k}=\mathfrak{h}+\sum_{\alpha\in\Delta_{\mathfrak{t}}}\mathfrak{g}^{\alpha},\mathfrak{p}=\sum_{\alpha\in\Delta_{n}}\mathfrak{g}^{\alpha}\), where \(\mathfrak{g}^{\alpha}\) is the root subspace of \(\mathfrak{g}\) of the root \(\alpha\in\Delta\). Let \(B\) denote the Killing form of \(\mathfrak{g}\). For any linear function \(\lambda\) on \(\mathfrak{h}\), there exists unique \(H_{\lambda}\in\mathfrak{h}\) such that \[\lambda(H)=B(H,H_{\lambda})\text{ for all }H\in\mathfrak{h}.\] Put \(\langle\lambda,\mu\rangle=B(H_{\lambda},H_{\mu})\) for any linear functions \(\lambda,\mu\) on \(\mathfrak{h}\), \(H_{\alpha}^{*}=2H_{\alpha}/\alpha(H_{\alpha})\) for all \(\alpha\in\Delta\), and \(\mathfrak{h}_{\mathbb{R}}=\sum_{\alpha\in\Delta}\mathbb{R}H_{\alpha}\). Then \(\mathfrak{h}_{\mathbb{R}}=i\mathfrak{t}_{0}\). For \(m\neq 2\), let \(\Delta^{+}\) be a Borel-de Siebenthal positive root system of \(\Delta\) with a unique non-compact simple root \(\phi_{1}\), that is \[n_{\phi_{1}}(\alpha)=\begin{cases}0&\text{if }\alpha\in\Delta_{\mathfrak{t}},\\ \pm 1&\text{if }\alpha\in\Delta_{n}.\end{cases}\] If \(m=2\), then \(\Delta^{+}=\{\phi_{1},\phi_{2}\}\), where both \(\phi_{1}\) and \(\phi_{2}\) are non-compact and simple. Let \(\Delta_{\mathfrak{t}}^{+}=\Delta^{+}\cap\Delta_{\mathfrak{t}},\Delta_{n}^{+}=\Delta^{+}\cap\Delta_{n}\), and \(\Delta_{n}^{-}=-\Delta_{n}^{+}\). Write \(\mathfrak{p}_{+}=\sum_{\alpha\in\Delta_{n}^{+}}\mathfrak{g}^{\alpha}\), and \(\mathfrak{p}_{-}=\sum_{\alpha\in\Delta_{n}^{-}}\mathfrak{g}^{\alpha}\). Then \(\mathfrak{p}=\mathfrak{p}_{+}\oplus\mathfrak{p}_{-}\) is the irreducible decomposition of \(\mathfrak{p}\) under the adjoint representation of \(\mathfrak{k}\), if \(m\neq 2\). For \(m\neq 2\), let \(\Phi_{\mathfrak{t}}=\{\phi_{2},\phi_{3},\ldots,\phi_{l}\}\) be the set of all simple roots in \(\Delta_{\mathfrak{t}}^{+}\). Then \(\Phi=\{\phi_{1},\phi_{2},\ldots,\phi_{l}\}\) is the set of all simple roots in \(\Delta\).
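As a quick sanity check on this matrix realization, the following numerical sketch (ours, not the paper's) verifies that \(X\mapsto I_{2,m}XI_{2,m}\) preserves \(\mathfrak{so}(2,m)\) and that its \(\pm 1\)-eigenspaces recover the block-diagonal part \(\mathfrak{k}_{0}\) and the off-diagonal part \(\mathfrak{p}_{0}\):

```python
import numpy as np

m = 5
J = np.diag([-1.0, -1.0] + [1.0] * m)                # the matrix I_{2,m}

rng = np.random.default_rng(0)
A1 = rng.standard_normal((2, 2)); A1 = A1 - A1.T     # X_1 in so(2)
A3 = rng.standard_normal((m, m)); A3 = A3 - A3.T     # X_3 in so(m)
X2 = rng.standard_normal((2, m))                     # arbitrary block
X = np.block([[A1, X2], [X2.T, A3]])                 # element of so(2, m)

assert np.allclose(X.T @ J + J @ X, 0)               # X lies in so(2, m)
theta_X = J @ X @ J                                  # theta(X) = I_{2,m} X I_{2,m}
assert np.allclose(theta_X.T @ J + J @ theta_X, 0)   # theta preserves so(2, m)

k_part = (X + theta_X) / 2                           # +1-eigenspace: block diagonal
p_part = (X - theta_X) / 2                           # -1-eigenspace: off-diagonal
assert np.allclose(k_part[:2, 2:], 0)
assert np.allclose(p_part[:2, :2], 0) and np.allclose(p_part[2:, 2:], 0)
print("Cartan decomposition checks out for so(2, %d)" % m)
```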
In the diagrams of this article, the non-compact roots are represented by black vertices. Since \(A_{\mathrm{Ad}(k)(\mathfrak{q})}\) is unitarily equivalent to \(A_{\mathfrak{q}}\) for all \(k\in K,\) to determine all unitarily inequivalent \(A_{\mathfrak{q}}\), it is sufficient to determine all \(\theta\)-stable parabolic subalgebras \(\mathfrak{q}\) of \(\mathfrak{g}_{0}\) which contain \(\mathfrak{h}\oplus\sum_{\alpha\in\Delta_{\mathfrak{t}}^{+}}\mathfrak{g}^{\alpha}\). Let \(\mathfrak{q}\) be a \(\theta\)-stable parabolic subalgebra of \(\mathfrak{g}_{0}\) containing \(\mathfrak{h}\oplus\sum_{\alpha\in\Delta_{\mathfrak{t}}^{+}}\mathfrak{g}^{\alpha}\). Then there exists \(x\in\mathfrak{h}_{\mathbb{R}}\) such that \(\mathfrak{q}=\mathfrak{q}_{x}=\mathfrak{h}\oplus\sum_{\alpha(x)\geq 0,\alpha\in\Delta}\mathfrak{g}^{\alpha}=\mathfrak{l}_{x}\oplus\mathfrak{u}_{x},\) where \(\mathfrak{l}_{x}=\mathfrak{h}\oplus\sum_{\alpha(x)=0,\alpha\in\Delta}\mathfrak{g}^{\alpha}\) is the Levi subalgebra of \(\mathfrak{q}_{x}\), and \(\mathfrak{u}_{x}=\sum_{\alpha(x)>0,\alpha\in\Delta}\mathfrak{g}^{\alpha}\) is the nilradical of \(\mathfrak{q}_{x}\). Note that \(\alpha(x)\geq 0\) for all \(\alpha\in\Delta_{\mathfrak{t}}^{+}.\) Write \(\Delta(\mathfrak{u}_{x}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta(x)>0\},\) and \(\Delta(\mathfrak{u}_{x}\cap\mathfrak{p}_{-})=\{\beta\in\Delta_{n}^{-}:\beta(x)>0\}\). For \(x,y\in\mathfrak{h}_{\mathbb{R}},\) \(A_{\mathfrak{q}_{x}}\) is unitarily equivalent to \(A_{\mathfrak{q}_{y}}\) _iff_ \(\Delta(\mathfrak{u}_{x}\cap\mathfrak{p}_{+})\cup\Delta(\mathfrak{u}_{x}\cap\mathfrak{p}_{-})=\Delta(\mathfrak{u}_{y}\cap\mathfrak{p}_{+})\cup\Delta(\mathfrak{u}_{y}\cap\mathfrak{p}_{-})\). So we will determine all possible candidates of \(\Delta(\mathfrak{u}_{x}\cap\mathfrak{p}_{+})\cup\Delta(\mathfrak{u}_{x}\cap\mathfrak{p}_{-})\), where \(x\in\mathfrak{h}_{\mathbb{R}}\) with \(\alpha(x)\geq 0\) for all \(\alpha\in\Delta_{\mathfrak{t}}^{+}.\) For \(x\in\mathfrak{h}_{\mathbb{R}}\) with \(\alpha(x)\geq 0\) for all \(\alpha\in\Delta_{\mathfrak{t}}^{+}\), we may write \(x=H_{\lambda}\) for some linear function \(\lambda\) on \(\mathfrak{h}_{\mathbb{R}}\) with \(\langle\lambda,\alpha\rangle\geq 0\) for all \(\alpha\in\Delta_{\mathfrak{t}}^{+}\). We write \(\mathfrak{q}_{\lambda}=\mathfrak{q}_{x},\mathfrak{l}_{\lambda}=\mathfrak{l}_{x}\), and \(\mathfrak{u}_{\lambda}=\mathfrak{u}_{x}\). Thus \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\langle\lambda,\beta\rangle>0\},\) and \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\{\beta\in\Delta_{n}^{-}:\langle\lambda,\beta\rangle>0\}\). Clearly \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})\) and \(-\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})\) are disjoint subsets of \(\Delta_{n}^{+}.\) Now we begin our proofs with this elementary lemma.

**Lemma 4.1**.: _[11, Remark 3.3(iii)] Let \(\lambda\) be a linear function on \(\mathfrak{h}_{\mathbb{R}}\) such that \(\langle\lambda,\alpha\rangle\geq 0\) for all \(\alpha\in\Delta_{\mathfrak{t}}^{+}.\) (i) Let \(\beta,\gamma\in\Delta_{n}\) be such that \(\gamma>\beta,\) and both belong to \(\Delta_{n}^{+}\) or \(\Delta_{n}^{-}\). Then \(\langle\lambda,\beta\rangle>0\implies\langle\lambda,\gamma\rangle>0\). (ii) Let \(\phi\in\Delta_{\mathfrak{t}}^{+}\) be simple and \(\beta\in\Delta_{n}\)._
_If \(\beta-\phi\in\Delta,\langle\lambda,\beta-\phi\rangle=0,\) and \(\langle\lambda,\beta\rangle>0,\) then \(\langle\lambda,\phi\rangle>0.\) (iii) Let \(\phi\in\Delta_{\mathfrak{t}}^{+}\) be simple and \(\beta\in\Delta_{n}\). If \(\beta+\phi\in\Delta,\langle\lambda,\beta+\phi\rangle=0,\) and \(\langle\lambda,\beta\rangle=0,\) then \(\langle\lambda,\phi\rangle=0\). If \(\beta-\phi\in\Delta,\langle\lambda,\beta-\phi\rangle=0,\) and \(\langle\lambda,\beta\rangle=0,\) then \(\langle\lambda,\phi\rangle=0.\)_

Proof.: (i) Let \(\beta,\gamma\in\Delta_{n}\) be such that \(\gamma>\beta\) and both belong to \(\Delta_{n}^{+}\) or \(\Delta_{n}^{-}\). Then \(\gamma=\beta+\sum_{2\leq i\leq l}n_{i}\phi_{i},\) where \(n_{i}\in\mathbb{N}\cup\{0\}\) for all \(2\leq i\leq l\). Since \(\langle\lambda,\beta\rangle>0,\) and \(\langle\lambda,\phi_{i}\rangle\geq 0\) for all \(2\leq i\leq l\), we have \(\langle\lambda,\gamma\rangle>0.\) (ii) \(\langle\lambda,\beta-\phi\rangle=0\implies\langle\lambda,\phi\rangle=\langle\lambda,\beta\rangle>0.\) (iii) \(\langle\lambda,\beta+\phi\rangle=0\implies\langle\lambda,\phi\rangle=-\langle\lambda,\beta\rangle=0\), and \(\langle\lambda,\beta-\phi\rangle=0\implies\langle\lambda,\phi\rangle=\langle\lambda,\beta\rangle=0\).

Lemma 4.1(i) says that \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})\) is either empty or a set of the form \(\cup_{1\leq i\leq r}\{\beta\in\Delta_{n}^{+}:\beta\geq\xi_{i}\}\), and \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})\) is either empty or a set of the form \(\cup_{1\leq j\leq s}\{-\beta\in\Delta_{n}^{-}:-\beta\geq-\eta_{j}\}=\cup_{1\leq j\leq s}(-\{\beta\in\Delta_{n}^{+}:\beta\leq\eta_{j}\})\), where \(\{\xi_{1},\xi_{2},\ldots,\xi_{r}\},\{\eta_{1},\eta_{2},\ldots,\eta_{s}\}\) are sets of pairwise non-comparable roots in \(\Delta_{n}^{+}\).

If \(\mathfrak{g}=\mathfrak{b}_{l}(l\geq 2)\), then \(\Delta_{n}^{+}=\{\phi_{1},\phi_{1}+\phi_{2},\ldots,\phi_{1}+\phi_{2}+\cdots+\phi_{l},\phi_{1}+\phi_{2}+\cdots+2\phi_{l},\phi_{1}+\phi_{2}+\cdots+2\phi_{l-1}+2\phi_{l},\ldots,\phi_{1}+2\phi_{2}+\cdots+2\phi_{l}\}\). If \(\mathfrak{g}=\mathfrak{b}_{1}\), then \(\Delta_{n}^{+}=\{\phi_{1}\}\). If \(\mathfrak{g}=\mathfrak{d}_{l}(l\geq 4)\), then \(\Delta_{n}^{+}=\{\phi_{1},\phi_{1}+\phi_{2},\ldots,\phi_{1}+\phi_{2}+\cdots+\phi_{l-2},\phi_{1}+\phi_{2}+\cdots+\phi_{l-2}+\phi_{l-1},\phi_{1}+\phi_{2}+\cdots+\phi_{l-2}+\phi_{l},\phi_{1}+\phi_{2}+\cdots+\phi_{l-2}+\phi_{l-1}+\phi_{l},\phi_{1}+\phi_{2}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l},\ldots,\phi_{1}+2\phi_{2}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\}\). If \(\mathfrak{g}=\mathfrak{d}_{2}\), then \(\Delta_{n}^{+}=\{\phi_{1},\phi_{2}\}\). If \(\mathfrak{g}=\mathfrak{d}_{3}\), then \(\Delta_{n}^{+}=\{\phi_{1},\phi_{1}+\phi_{2},\phi_{1}+\phi_{3},\phi_{1}+\phi_{2}+\phi_{3}\}\).

Figure 2. Diagram of \(\Delta_{n}^{+}\) (for \(\mathfrak{g}=\mathfrak{d}_{2}\) the diagram consists of the two incomparable vertices \(\phi_{1}\) and \(\phi_{2}\)). In Figure 2, the vertices represent roots in \(\Delta_{n}^{+}\). Two roots \(\beta,\gamma\in\Delta_{n}^{+}\) are joined by a line with an arrow in the direction of \(\gamma\) if \(\gamma=\beta+\phi\) for some simple root \(\phi\in\Delta_{\mathfrak{t}}^{+}\). In this case, the simple root \(\phi\) is given on one side of the line.
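The covering relations pictured in Figure 2 can also be listed programmatically; the following sketch (ours) reuses the hypothetical helper `noncompact_positive_roots` from the sketch after Theorem 1.1 and prints each edge \(\beta\to\gamma=\beta+\phi_{j}\) of the diagram together with its label:

```python
# Reuses noncompact_positive_roots(g, l) from the sketch after Theorem 1.1.
def hasse_edges(g, l):
    """Covering relations gamma = beta + phi_j of the poset Delta_n^+,
    each edge labelled by the simple root phi_j that is added."""
    roots = noncompact_positive_roots(g, l)
    edges = []
    for b in roots:
        for c in roots:
            diff = [ci - bi for bi, ci in zip(b, c)]
            if sum(abs(d) for d in diff) == 1 and 1 in diff:
                edges.append((b, c, 'phi_%d' % (diff.index(1) + 1)))
    return edges

for edge in hasse_edges('d', 4):
    print(edge)      # the fork at xi_1, xi_2 is visible in the output
```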
### Proof of Th. 1.1

Let \(\omega_{1},\omega_{2},\ldots,\omega_{l}\) be the fundamental weights of \(\mathfrak{g}\) corresponding to the simple roots \(\phi_{1},\phi_{2},\ldots,\phi_{l}\) respectively. (i) \(\mathfrak{g}=\mathfrak{b}_{l}(l>1):\) Lemma 4.1(i) and the diagram of \(\Delta_{n}^{+}\) in Figure 2 show that \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})\) is either empty or a set of the form \(\{\beta\in\Delta_{n}^{+}:\beta\geq\xi\}\), and \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})\) is either empty or a set of the form \(\{-\beta\in\Delta_{n}^{-}:-\beta\geq-\eta\}=-\{\beta\in\Delta_{n}^{+}:\beta\leq\eta\}\), and \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+}),-\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})\) are disjoint subsets of \(\Delta_{n}^{+}\), where \(\xi,\eta\in\Delta_{n}^{+}\). Let \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})\) be empty. Then \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\xi\}\), where \(\xi>\phi_{1}+\phi_{2}+\cdots+\phi_{l}\) is not possible. For then \(\xi=\phi_{1}+\cdots+\phi_{i-1}+2\phi_{i}+\cdots+2\phi_{l}\), where \(2\leq i\leq l\). So \(\langle\lambda,\phi_{i}\rangle>0\), by Lemma 4.1(ii). Again \(\langle\lambda,\phi_{1}+\phi_{2}+\cdots+\phi_{i}\rangle=0,\langle\lambda,\phi_{1}+\phi_{2}+\cdots+\phi_{i-1}\rangle=0\). Thus \(\langle\lambda,\phi_{i}\rangle=0\), a contradiction. If \(\xi\leq\phi_{1}+\phi_{2}+\cdots+\phi_{l}\), then \(\xi=\phi_{1}+\cdots+\phi_{i}\), for some \(1\leq i\leq l\), and \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\xi\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\phi\), where \(\lambda=\omega_{i}\). Also \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\phi,\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\phi\), for \(\lambda=0\). Thus the number of equivalence classes of irreducible unitary representations with non-zero \((\mathfrak{g},K)\)-cohomology for which \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\phi\), is \(l+1\). Let \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i}\}\), where \(1\leq i\leq l-1\). Then \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\xi\}\), where \(\xi>\phi_{1}+\cdots+\phi_{l},\xi\neq\phi_{1}+\cdots+\phi_{i}+2\phi_{i+1}+\cdots+2\phi_{l}\) is not possible. Because \(\xi>\phi_{1}+\cdots+\phi_{l},\xi\neq\phi_{1}+\cdots+\phi_{i}+2\phi_{i+1}+\cdots+2\phi_{l}\) implies \(\xi=\phi_{1}+\cdots+\phi_{j-1}+2\phi_{j}+\cdots+2\phi_{l}\) for some \(2\leq j\leq l,j\neq i+1\). Then \(\langle\lambda,\phi_{i+1}\rangle>0,\langle\lambda,\phi_{j}\rangle>0\), by Lemma 4.1(ii). If \(2\leq j\leq i\), then \(\langle\lambda,\phi_{1}+\cdots+\phi_{i}+2\phi_{i+1}+\cdots+2\phi_{l}\rangle=0,\langle\lambda,\phi_{1}+\cdots+\phi_{i+1}+2\phi_{i+2}+\cdots+2\phi_{l}\rangle=0\). Thus \(\langle\lambda,\phi_{i+1}\rangle=0\), a contradiction. If \(i+2\leq j\leq l\), then \(\langle\lambda,\phi_{1}+\phi_{2}+\cdots+\phi_{j}\rangle=0,\langle\lambda,\phi_{1}+\phi_{2}+\cdots+\phi_{j-1}\rangle=0\). Thus \(\langle\lambda,\phi_{j}\rangle=0\), a contradiction. If \(\xi\leq\phi_{1}+\phi_{2}+\cdots+\phi_{l}\), then \(\xi=\phi_{1}+\cdots+\phi_{j}\), for some \(i+1\leq j\leq l\), and \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\xi\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i}\}\), where \(\lambda=\frac{\omega_{i+1}}{\langle\omega_{i+1},\phi_{i+1}\rangle}+\frac{\omega_{j}}{\langle\omega_{j},\phi_{j}\rangle}-\frac{\omega_{1}}{\langle\omega_{1},\phi_{1}\rangle}\).
If \(\xi=\phi_{1}+\cdots+\phi_{i}+2\phi_{i+1}+\cdots+2\phi_{l}\), then \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\xi\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i}\}\), for \(\lambda=\frac{\omega_{i+1}}{\langle\omega_{i+1},\phi_{i+1}\rangle}-\frac{\omega_{1}}{\langle\omega_{1},\phi_{1}\rangle}\). Also \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\phi\) is not possible, for \(\langle\lambda,\phi_{i+1}\rangle>0\), by Lemma 4.1(ii); and \(\langle\lambda,\phi_{1}+\cdots+\phi_{i}+2\phi_{i+1}+\cdots+2\phi_{l}\rangle=0,\langle\lambda,\phi_{1}+\cdots+\phi_{i+1}+2\phi_{i+2}+\cdots+2\phi_{l}\rangle=0\). Thus \(\langle\lambda,\phi_{i+1}\rangle=0\), a contradiction. Hence the number of equivalence classes of irreducible unitary representations with non-zero \((\mathfrak{g},K)\)-cohomology for which \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i}\}\), is \(l-i+1\) for all \(1\leq i\leq l-1\). Let \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l}\}\). Since \(\xi>\phi_{1}+\cdots+\phi_{l},\xi=\phi_{1}+\cdots+\phi_{j-1}+2\phi_{j}+\cdots+2\phi_{l}\) for some \(2\leq j\leq l\). Now \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\xi\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l}\}\), for \(\lambda=\frac{\omega_{l}}{\langle\omega_{l},\phi_{l}\rangle}+\frac{\omega_{j}}{\langle\omega_{j},\phi_{j}\rangle}-\frac{3\omega_{1}}{\langle\omega_{1},\phi_{1}\rangle}\). Also \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\phi,\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l}\}\), for \(\lambda=\frac{\omega_{l}}{\langle\omega_{l},\phi_{l}\rangle}-\frac{2\omega_{1}}{\langle\omega_{1},\phi_{1}\rangle}\). Thus the number of equivalence classes of irreducible unitary representations with non-zero \((\mathfrak{g},K)\)-cohomology for which \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l}\}\), is \(l\). Let \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i-1}+2\phi_{i}+\cdots+2\phi_{l}\},2\leq i\leq l\). Then \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\phi,\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i-1}+2\phi_{i}+\cdots+2\phi_{l}\}\), for \(\lambda=\frac{\omega_{i-1}}{\langle\omega_{i-1},\phi_{i-1}\rangle}-\frac{2\omega_{1}}{\langle\omega_{1},\phi_{1}\rangle}\) if \(3\leq i\leq l\), and \(\lambda=-\omega_{1}\) if \(i=2\). If \(\xi=\phi_{1}+\cdots+\phi_{j-1}+2\phi_{j}+\cdots+2\phi_{l}\) for some \(2\leq j<i\), then \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\xi\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i-1}+2\phi_{i}+\cdots+2\phi_{l}\}\), for \(\lambda=\frac{\omega_{i-1}}{\langle\omega_{i-1},\phi_{i-1}\rangle}+\frac{\omega_{j}}{\langle\omega_{j},\phi_{j}\rangle}-\frac{3\omega_{1}}{\langle\omega_{1},\phi_{1}\rangle}\).
Hence the number of equivalence classes of irreducible unitary representations with non-zero \((\mathfrak{g},K)\)-cohomology for which \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i-1}+2\phi_{i}+\cdots+2\phi_{l}\}\), is \(i-1\) for all \(2\leq i\leq l\). Thus the number of equivalence classes of irreducible unitary representations with non-zero \((\mathfrak{g},K)\)-cohomology of the Lie group \(SO_{0}(2,2l-1)(l>1)\) is \(A=(l+1)+l+\cdots+2+l+1+\cdots+(l-1)=3l+(l-1)l/2+(l-1)l/2=l(l+2)\). \(\mathfrak{g}=\mathfrak{b}_{1}:\) In this case \(\Delta_{n}^{+}=\{\phi_{1}\}\). If \(\lambda=0\), then \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-}),\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})\) are empty. \(\lambda=\omega_{1}\) implies \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\phi,\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\phi_{1}\}\); and \(\lambda=-\omega_{1}\) implies \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\phi,\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\{-\phi_{1}\}\). Thus \(A=3=l(l+2)\).
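Each "for \(\lambda=\ldots\)" claim in this proof is a finite check of linear inequalities, so it can be verified mechanically. In the coordinates \(x_{i}=\langle\lambda,\phi_{i}\rangle\), one checks that \(\lambda=\sum_{i}c_{i}\omega_{i}/\langle\omega_{i},\phi_{i}\rangle\) corresponds to \(x=(c_{1},\ldots,c_{l})\). The following helper (ours) reuses the hypothetical `noncompact_positive_roots` from the sketch after Theorem 1.1:

```python
# Reuses noncompact_positive_roots(g, l) from the sketch after Theorem 1.1.
def pair_for(x, roots):
    """Given x_i = <lambda, phi_i>, return the pair
    (Delta(u_lambda ∩ p+), -Delta(u_lambda ∩ p-)) as lists of roots."""
    val = lambda r: sum(n * c for n, c in zip(r, x))
    return ([r for r in roots if val(r) > 0], [r for r in roots if val(r) < 0])

# Example: g = b_4, lambda = omega_2' + omega_3' - omega_1' (the case i = 1, j = 3):
roots = noncompact_positive_roots('b', 4)
pos, neg = pair_for((-1, 1, 1, 0), roots)
print(pos)   # the roots >= phi_1 + phi_2 + phi_3
print(neg)   # [(1, 0, 0, 0)], i.e. -Delta(u ∩ p-) = {phi_1}
```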
\(\mathfrak{g}=\mathfrak{d}_{l}(l\geq 3):\) Lemma 4.1(i) and the diagram of \(\Delta_{n}^{+}\) in Figure 2 show that \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})\) is either empty or a set of the form \(\{\beta\in\Delta_{n}^{+}:\beta\geq\xi\}\), or \(\{\beta\in\Delta_{n}^{+}:\beta\geq\xi_{1}\text{ or }\xi_{2}\}\), and \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})\) is either empty or a set of the form \(\{-\beta\in\Delta_{n}^{-}:-\beta\geq-\eta\}=-\{\beta\in\Delta_{n}^{+}:\beta\leq\eta\}\), or \(-\{\beta\in\Delta_{n}^{+}:\beta\leq\xi_{1}\text{ or }\xi_{2}\}\), where \(\xi,\eta\in\Delta_{n}^{+};\xi_{1}=\phi_{1}+\cdots+\phi_{l-2}+\phi_{l-1},\xi_{2}=\phi_{1}+\cdots+\phi_{l-2}+\phi_{l}\). Let \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})\) be empty. Then \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\xi\}\), where \(\xi\geq\phi_{1}+\phi_{2}+\cdots+\phi_{l}\) is not possible. For then \(\xi=\phi_{1}+\cdots+\phi_{l}\), or \(\xi=\phi_{1}+\cdots+\phi_{i-1}+2\phi_{i}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\), where \(2\leq i\leq l-2\). If \(\xi=\phi_{1}+\cdots+\phi_{l}\), then \(\langle\lambda,\phi_{l-1}\rangle>0,\langle\lambda,\phi_{l}\rangle>0\), by Lemma 4.1(ii). Again \(\langle\lambda,\phi_{1}+\cdots+\phi_{l-1}\rangle=0,\langle\lambda,\phi_{1}+\cdots+\phi_{l-2}+\phi_{l}\rangle=0,\langle\lambda,\phi_{1}+\cdots+\phi_{l-2}\rangle=0\). Thus \(\langle\lambda,\phi_{l-1}\rangle=0,\langle\lambda,\phi_{l}\rangle=0\), a contradiction. If \(\xi=\phi_{1}+\cdots+\phi_{i-1}+2\phi_{i}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}(2\leq i\leq l-2)\), then \(\langle\lambda,\phi_{i}\rangle>0\), by Lemma 4.1(ii). Again \(\langle\lambda,\phi_{1}+\phi_{2}+\cdots+\phi_{i}\rangle=0,\langle\lambda,\phi_{1}+\phi_{2}+\cdots+\phi_{i-1}\rangle=0\). Thus \(\langle\lambda,\phi_{i}\rangle=0\), a contradiction. If \(\xi<\phi_{1}+\phi_{2}+\cdots+\phi_{l}\), then \(\xi=\phi_{1}+\cdots+\phi_{i}\), for some \(1\leq i\leq l-1\), or \(\xi=\xi_{2}\), and \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\xi\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\phi\), for \(\lambda=\omega_{i},\omega_{l}\) respectively. Also \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\xi_{1}\text{ or }\xi_{2}\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\phi\), for \(\lambda=\omega_{l-1}+\omega_{l}\), and \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\phi,\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\phi\), for \(\lambda=0\). Thus the number of equivalence classes of irreducible unitary representations with non-zero \((\mathfrak{g},K)\)-cohomology for which \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\phi\), is \(l+2\). Let \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i}\}\), where \(1\leq i\leq l-3,l\geq 4\). Then \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\xi\}\), where \(\xi\geq\phi_{1}+\cdots+\phi_{l},\xi\neq\phi_{1}+\cdots+\phi_{i}+2\phi_{i+1}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\) is not possible. For if \(\xi=\phi_{1}+\cdots+\phi_{l}\), then \(\langle\lambda,\phi_{l-1}\rangle>0,\langle\lambda,\phi_{l}\rangle>0\) by Lemma 4.1(ii). Again \(\langle\lambda,\phi_{1}+\phi_{2}+\cdots+\phi_{l-2}\rangle=0,\langle\lambda,\phi_{1}+\phi_{2}+\cdots+\phi_{l-1}\rangle=0,\langle\lambda,\phi_{1}+\phi_{2}+\cdots+\phi_{l-2}+\phi_{l}\rangle=0\). Thus \(\langle\lambda,\phi_{l-1}\rangle=0,\langle\lambda,\phi_{l}\rangle=0\), a contradiction. Now \(\xi>\phi_{1}+\cdots+\phi_{l},\xi\neq\phi_{1}+\cdots+\phi_{i}+2\phi_{i+1}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\) implies \(\xi=\phi_{1}+\cdots+\phi_{j-1}+2\phi_{j}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\) for some \(2\leq j\leq l-2,j\neq i+1\). Then \(\langle\lambda,\phi_{i+1}\rangle>0,\langle\lambda,\phi_{j}\rangle>0\), by Lemma 4.1(ii). If \(2\leq j\leq i\), then \(\langle\lambda,\phi_{1}+\cdots+\phi_{i}+2\phi_{i+1}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\rangle=0,\langle\lambda,\phi_{1}+\cdots+\phi_{i+1}+2\phi_{i+2}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\rangle=0\). Thus \(\langle\lambda,\phi_{i+1}\rangle=0\), a contradiction. If \(i+2\leq j\leq l-2\), then \(\langle\lambda,\phi_{1}+\phi_{2}+\cdots+\phi_{j}\rangle=0,\langle\lambda,\phi_{1}+\phi_{2}+\cdots+\phi_{j-1}\rangle=0\). Thus \(\langle\lambda,\phi_{j}\rangle=0\), a contradiction.
If \(\xi<\phi_{1}+\phi_{2}+\cdots+\phi_{l}\), then \(\xi=\phi_{1}+\cdots+\phi_{j}\), for some \(i+1\leq j\leq l-1\), or \(\xi=\xi_{2}.\) Now \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{j}\}(i+1\leq j\leq l-1),\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i}\}\) for \(\lambda=\omega_{i+1}+\omega_{j}-\omega_{1};\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\xi_{2}\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i}\}\) for \(\lambda=\omega_{i+1}+\omega_{l}-\omega_{1};\) and \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{i}+2\phi_{i+1}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i}\}\) for \(\lambda=\omega_{i+1}-\omega_{1}.\) Also \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\xi_{1}\text{ or }\xi_{2}\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i}\}\) for \(\lambda=\omega_{i+1}+\omega_{l-1}+\omega_{l}-\omega_{1}.\) Again \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\phi\) is not possible, for \(\langle\lambda,\phi_{i+1}\rangle>0\), by Lemma 4.1(ii); and \(\langle\lambda,\phi_{1}+\cdots+\phi_{i}+2\phi_{i+1}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\rangle=0,\langle\lambda,\phi_{1}+\cdots+\phi_{i+1}+2\phi_{i+2}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\rangle=0\). Thus \(\langle\lambda,\phi_{i+1}\rangle=0\), a contradiction. Hence the number of equivalence classes of irreducible unitary representations with non-zero \((\mathfrak{g},K)\)-cohomology for which \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i}\}\), is \(l-i+2\) for all \(1\leq i\leq l-3\). Let \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}\}.\) Then \(\langle\lambda,\phi_{l-1}\rangle>0,\langle\lambda,\phi_{l}\rangle>0\) by Lemma 4.1(ii). Hence \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\phi\), or \(\{\beta\in\Delta_{n}^{+}:\beta\geq\xi\}\), where \(\xi>\phi_{1}+\cdots+\phi_{l}\) is not possible. For then \(\langle\lambda,\phi_{1}+\phi_{2}+\cdots+\phi_{l}\rangle=0,\langle\lambda,\phi_{1}+\phi_{2}+\cdots+\phi_{l-1}\rangle=0,\langle\lambda,\phi_{1}+\phi_{2}+\cdots+\phi_{l-2}+\phi_{l}\rangle=0.\) Thus \(\langle\lambda,\phi_{l}\rangle=0,\langle\lambda,\phi_{l-1}\rangle=0\), a contradiction.
Now \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\xi_{1}\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}\}\) for \(\lambda=2\omega_{l-1}+\omega_{l}-\omega_{1};\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\xi_{2}\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}\}\) for \(\lambda=\omega_{l-1}+2\omega_{l}-\omega_{1};\) and \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{l}\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}\}\) for \(\lambda=\omega_{l-1}+\omega_{l}-\omega_{1}.\) Also \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\xi_{1}\text{ or }\xi_{2}\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}\}\) for \(\lambda=2\omega_{l-1}+2\omega_{l}-\omega_{1}.\) Hence the number of equivalence classes of irreducible unitary representations with non-zero \((\mathfrak{g},K)\)-cohomology for which \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}\}\), is \(4\). Let \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}+\phi_{a}\}\), where \(a=l-1,l;\) and \(b=l\) if \(a=l-1,\) and \(b=l-1\) if \(a=l.\) Then \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{l-2}+\phi_{b}\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}+\phi_{a}\}\) for \(\lambda=2\omega_{b}-\omega_{1};\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{l-2}+\phi_{l-1}+\phi_{l}\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}+\phi_{a}\}\) for \(\lambda=\omega_{a}+2\omega_{b}-2\omega_{1};\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{j-1}+2\phi_{j}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\}(2\leq j\leq l-2),\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}+\phi_{a}\}\) for \(\lambda=\omega_{j}+\omega_{b}-2\omega_{1};\) and \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\phi,\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}+\phi_{a}\}\) for \(\lambda=\omega_{b}-\omega_{1}.\) Thus the number of equivalence classes of irreducible unitary representations with non-zero \((\mathfrak{g},K)\)-cohomology for which \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}+\phi_{a}\}\), is \(l\), for each of \(a=l-1,l.\) Let \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\xi_{1}\text{ or }\xi_{2}\}.\) Then \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{l-2}+\phi_{l-1}+\phi_{l}\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\xi_{1}\text{ or }\xi_{2}\}\) for \(\lambda=2\omega_{l-1}+2\omega_{l}-3\omega_{1};\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{j-1}+2\phi_{j}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\}(2\leq j\leq l-2),\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\xi_{1}\text{ or }\xi_{2}\}\) for \(\lambda=\omega_{j}+\omega_{l-1}+\omega_{l}-3\omega_{1};\) and \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\phi,\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\xi_{1}\text{ or }\xi_{2}\}\) for \(\lambda=\omega_{l-1}+\omega_{l}-2\omega_{1}.\) Thus the number of equivalence classes of irreducible unitary representations with non-zero \((\mathfrak{g},K)\)-cohomology for which \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\xi_{1}\text{ or }\xi_{2}\}\), is \(l-1.\) Let \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}+\phi_{l-1}+\phi_{l}\}.\) Then \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{j-1}+2\phi_{j}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\}(2\leq j\leq l-2),\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}+\phi_{l-1}+\phi_{l}\}\) for \(\lambda=\omega_{j}+\omega_{l-2}-3\omega_{1};\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\phi,\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}+\phi_{l-1}+\phi_{l}\}\) for \(\lambda=\omega_{l-2}-2\omega_{1}.\) Thus the number of equivalence classes of irreducible unitary representations with non-zero \((\mathfrak{g},K)\)-cohomology for which \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}+\phi_{l-1}+\phi_{l}\}\), is \(l-2.\) Let \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i-1}+2\phi_{i}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\}(2\leq i\leq l-2),l\geq 4.\) Then \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\phi,\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i-1}+2\phi_{i}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\}\) for \(\lambda=\omega_{i-1}-2\omega_{1}\) if \(3\leq i\leq l-2,\) and \(\lambda=-\omega_{1}\) if \(i=2.\) Also \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{j-1}+2\phi_{j}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\}(2\leq j\leq i-1),\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i-1}+2\phi_{i}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\}\) for \(\lambda=\omega_{i-1}+\omega_{j}-3\omega_{1}.\) Thus the number of equivalence classes of irreducible unitary representations with non-zero \((\mathfrak{g},K)\)-cohomology for which \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i-1}+2\phi_{i}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\}(2\leq i\leq l-2),\) is \(i-1.\) Hence \(A=(l+2)+(l+1)+\cdots+5+4+l+l+(l-1)+(l-2)+(l-3)+\cdots+1=l^{2}+4l-3.\) \(\mathfrak{g}=\mathfrak{d}_{2}:\) In this case \(\Delta_{n}^{+}=\{\phi_{1},\phi_{2}\}.\) If \(\lambda=0,\) then \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-}),\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})\) are empty.
\(\lambda=\omega_{i}(1\leq i\leq 2)\) implies \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\phi,\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\phi_{i}\};\lambda=\omega_{1}+\omega_{2}\) implies \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\phi,\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\phi_{1},\phi_{2}\};\lambda=-\omega_{i}(1\leq i\leq 2)\) implies \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\{-\phi_{i}\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\phi;\lambda=\omega_{1}-\omega_{2}\) implies \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\{-\phi_{2}\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\phi_{1}\};\lambda=\omega_{2}-\omega_{1}\) implies \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\{-\phi_{1}\},\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\phi_{2}\};\) and \(\lambda=-\omega_{1}-\omega_{2}\) implies \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\phi,\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\{-\phi_{1},-\phi_{2}\}.\) Thus \(A=9=l^{2}+4l-3.\) (ii) An irreducible unitary representation \(\pi\) of \(SO_{0}(2,m)\) with trivial infinitesimal character is a discrete series representation _if and only if_ \(\pi\) is unitarily equivalent to \(A_{\mathfrak{b}},\) where \(\mathfrak{b}\) is a Borel subalgebra of \(\mathfrak{g}\) containing \(\mathfrak{h}+\sum_{\alpha\in\Delta_{\mathfrak{t}}^{+}}\mathfrak{g}^{\alpha}.\) If \(\mathfrak{b}\) is a Borel subalgebra of \(\mathfrak{g}\) containing \(\mathfrak{h}+\sum_{\alpha\in\Delta_{\mathfrak{t}}^{+}}\mathfrak{g}^{\alpha},\) then \(\mathfrak{b}=\mathfrak{b}_{\lambda}=\mathfrak{h}\oplus\mathfrak{u}_{\lambda}\) for some linear function \(\lambda\) on \(\mathfrak{h}_{\mathbb{R}}\) with \(\langle\lambda,\alpha\rangle\neq 0\) for all \(\alpha\in\Delta,\) and \(\langle\lambda,\alpha\rangle>0\) for all \(\alpha\in\Delta_{\mathfrak{t}}^{+}.\) Since \(\langle\lambda,\beta\rangle\neq 0\) for all \(\beta\in\Delta_{n}^{+},\) for the irreducible unitary representation \(A_{\mathfrak{b}}\) we have \((-\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-}))\cup\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\Delta_{n}^{+}.\) Conversely, suppose that \(\lambda\) is a linear function on \(\mathfrak{h}_{\mathbb{R}}\) such that \(\langle\lambda,\alpha\rangle\geq 0\) for all \(\alpha\in\Delta_{\mathfrak{t}}^{+},\) and \((-\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-}))\cup\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\Delta_{n}^{+}.\) Since \(\langle\lambda,\alpha\rangle\geq 0\) for all \(\alpha\in\Delta_{\mathfrak{t}}^{+},\) and \(\langle\lambda,\beta\rangle\neq 0\) for all \(\beta\in\Delta_{n},\) we have \(\lambda=\sum_{1\leq i\leq l}\frac{c_{i}\omega_{i}}{\langle\omega_{i},\phi_{i}\rangle},\) where \(c_{1}\) is a non-zero real number and \(c_{i}\) is a non-negative real number for all \(2\leq i\leq l.\) If \(c_{1}>0,\) let \(d_{1}=c_{1};\) and \(d_{i}=c_{i}\) if \(c_{i}\neq 0,\) and \(d_{i}=1\) if \(c_{i}=0;\) for all \(2\leq i\leq l.\) If \(c_{1}<0,\) let \(\{i:2\leq i\leq l,c_{i}=0\}=\{i_{1},i_{2},\ldots,i_{k}\},\) and \(m_{j}=\max\{n_{\phi_{i_{j}}}(\beta):\beta\in\Delta_{n}^{+},\langle\lambda,\beta\rangle<0\}\) for all \(1\leq j\leq k.\) Assume that \(\mathfrak{g}\neq\mathfrak{d}_{2},\) and if \(\mathfrak{g}=\mathfrak{d}_{l}(l\geq 3),\) \(-\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})\neq\{\beta\in\Delta_{n}^{+}:\beta\leq\xi_{i}\},\) where \(i=1,2.\) Then for \(\beta\in-\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})\) and \(\beta^{\prime}\in\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+}),\) we have \(\beta<\beta^{\prime},\) and so
\(n_{\phi_{i_{j}}}(\beta)\leq n_{\phi_{i_{j}}}(\beta^{\prime})\) for all \(1\leq j\leq k.\) In this case, if \(c_{1}<0,\) let \(d_{1}=c_{1}-\sum_{1\leq j\leq k}m_{j};\) and \(d_{i}=c_{i}\) if \(c_{i}\neq 0,\) and \(d_{i}=1\) if \(c_{i}=0;\) for all \(2\leq i\leq l.\) Let \(\lambda^{\prime}=\sum_{1\leq i\leq l}\frac{d_{i}\omega_{i}}{\langle\omega_{i},\phi_{i}\rangle}.\) Then \(\langle\lambda^{\prime},\alpha\rangle>0\) for all \(\alpha\in\Delta_{\mathfrak{t}}^{+};\) and if \(\beta\in\Delta_{n}\), \(\langle\lambda,\beta\rangle<0\implies\langle\lambda^{\prime},\beta\rangle<0\), and \(\langle\lambda,\beta\rangle>0\implies\langle\lambda^{\prime},\beta\rangle>0\). So \(A_{\mathfrak{q}_{\lambda}}\) is unitarily equivalent to \(A_{\mathfrak{q}_{\lambda^{\prime}}}\), which is a discrete series representation with trivial infinitesimal character. Let \(\mathfrak{g}=\mathfrak{d}_{l}(l\geq 3)\), and \(-\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\{\beta\in\Delta_{n}^{+}:\beta\leq\xi_{i}\}\), where \(i=1,2\); that is \(-\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})=\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}+\phi_{a}\}(a=l-1,l)\). Then \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{l-2}+\phi_{b}\}\), where \(b=l\) if \(a=l-1\), and \(b=l-1\) if \(a=l\). Clearly \(\langle\lambda,\phi_{b}\rangle>0\), that is \(c_{b}>0\). If \(c_{a}=0\), let \(d_{1}=c_{1}-\sum_{1\leq j\leq k}m_{j};d_{a}=c_{a}+1;d_{b}=c_{b}+1\); and for all \(2\leq i\leq l-2\), \(d_{i}=c_{i}\) if \(c_{i}\neq 0\), and \(d_{i}=1\) if \(c_{i}=0\). If \(c_{a}\neq 0\), let \(d_{1}=c_{1}-\sum_{1\leq j\leq k}m_{j}\); and \(d_{i}=c_{i}\) if \(c_{i}\neq 0\), and \(d_{i}=1\) if \(c_{i}=0\); for all \(2\leq i\leq l\). Let \(\lambda^{\prime}=\sum_{1\leq i\leq l}\frac{d_{i}\omega_{i}}{\langle\omega_{i},\phi_{i}\rangle}\). Since \(\beta\in-\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-}),\beta^{\prime}\in\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})\implies n_{\phi_{i_{j}}}(\beta)\leq n_{\phi_{i_{j}}}(\beta^{\prime})\) for all \(1\leq j\leq k,i_{j}\neq l-1,l\); we have \(\langle\lambda^{\prime},\alpha\rangle>0\) for all \(\alpha\in\Delta_{\mathfrak{t}}^{+}\); and if \(\beta\in\Delta_{n}\), \(\langle\lambda,\beta\rangle<0\implies\langle\lambda^{\prime},\beta\rangle<0\), and \(\langle\lambda,\beta\rangle>0\implies\langle\lambda^{\prime},\beta\rangle>0\). As above \(A_{\mathfrak{q}_{\lambda}}\) is unitarily equivalent to \(A_{\mathfrak{q}_{\lambda^{\prime}}}\), which is a discrete series representation with trivial infinitesimal character. Let \(\mathfrak{g}=\mathfrak{d}_{2}\). Since \((-\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-}))\cup\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\Delta_{n}^{+}\), the candidates of \((-\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-}),\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+}))\) are \((\phi,\{\phi_{1},\phi_{2}\}),(\{\phi_{1}\},\{\phi_{2}\}),(\{\phi_{2}\},\{\phi_{1}\})\), and \((\{\phi_{1},\phi_{2}\},\phi)\). The corresponding \(\lambda^{\prime}\) are \(\omega_{1}+\omega_{2},-\omega_{1}+\omega_{2},\omega_{1}-\omega_{2},-\omega_{1}-\omega_{2}\) respectively. Then \(A_{\mathfrak{q}_{\lambda}}\) is unitarily equivalent to \(A_{\mathfrak{q}_{\lambda^{\prime}}}\), which is a discrete series representation with trivial infinitesimal character. The Blattner parameter of the discrete series representation \(A_{\mathfrak{b}_{\lambda}}\), where \(\mathfrak{b}_{\lambda}=\mathfrak{h}\oplus\mathfrak{u}_{\lambda}\) for some linear function \(\lambda\) on \(\mathfrak{h}_{\mathbb{R}}\) with \(\langle\lambda,\alpha\rangle\neq 0\) for all \(\alpha\in\Delta\), and \(\langle\lambda,\alpha\rangle>0\) for all \(\alpha\in\Delta_{\mathfrak{t}}^{+}\), is \(\sum_{\beta\in\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-})\cup\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})}\beta\).
If \(m\neq 2,\) \(\mathfrak{g}\) is simple, and in this case the discrete series representation \(A_{\mathfrak{b}_{\lambda}}\) is a holomorphic discrete series representation _if and only if_ the Blattner parameter is \(\sum_{\beta\in\Delta_{n}^{+}}\beta\), or \(\sum_{\beta\in\Delta_{n}^{-}}\beta\); that is \(\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})\) is either \(\Delta_{n}^{+}\) or empty. For since \(\mathfrak{g}\) is simple, the only Borel-de Siebenthal positive root systems containing \(\Delta_{\mathfrak{t}}^{+}\) are \(\Delta_{\mathfrak{t}}^{+}\cup\Delta_{n}^{+}\), and \(\Delta_{\mathfrak{t}}^{+}\cup\Delta_{n}^{-}\). Hence the number of equivalence classes of holomorphic discrete series representations of \(SO_{0}(2,m)(m\neq 2)\) with trivial infinitesimal character is \(2\). If \(\mathfrak{g}=\mathfrak{d}_{2}\), any positive root system is a Borel-de Siebenthal positive root system, and so any discrete series representation with trivial infinitesimal character is holomorphic. Let \(\mathfrak{g}=\mathfrak{b}_{l}(l\geq 2)\). The candidates of \((\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-}),\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+}))\) for which \((-\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-}))\cup\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\Delta_{n}^{+}\), are \((\phi,\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}\}),(-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i}\},\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{i+1}\})(1\leq i\leq l-1),(-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l}\},\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{l-1}+2\phi_{l}\}),(-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i-1}+2\phi_{i}+\cdots+2\phi_{l}\},\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{i-2}+2\phi_{i-1}+\cdots+2\phi_{l}\})(3\leq i\leq l)\), and \((-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+2\phi_{2}+\cdots+2\phi_{l}\},\phi)\). Thus the number of equivalence classes of discrete series representations of \(SO_{0}(2,m)\) with trivial infinitesimal character is \(2l\) if \(m=2l-1,l\geq 2\). If \(\mathfrak{g}=\mathfrak{b}_{1}\), the candidates of \((\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-}),\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+}))\) for which \((-\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-}))\cup\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\Delta_{n}^{+}\), are \((\phi,\{\phi_{1}\}),(\{-\phi_{1}\},\phi)\). Thus the number is \(2\). Let \(\mathfrak{g}=\mathfrak{d}_{l}(l\geq 3)\).
The candidates of \((\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-}),\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+}))\) for which \((-\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{-}))\cup\Delta(\mathfrak{u}_{\lambda}\cap\mathfrak{p}_{+})=\Delta_{n}^{+}\), are \((\phi,\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}\}),(-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i}\},\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{i+1}\})(1\leq i\leq l-3,l\geq 4),(-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}\},\{\beta\in\Delta_{n}^{+}:\beta\geq\xi_{1}\text{ or }\xi_{2}\}),(-\{\beta\in\Delta_{n}^{+}:\beta\leq\xi_{1}\},\{\beta\in\Delta_{n}^{+}:\beta\geq\xi_{2}\}),(-\{\beta\in\Delta_{n}^{+}:\beta\leq\xi_{2}\},\{\beta\in\Delta_{n}^{+}:\beta\geq\xi_{1}\}),(-\{\beta\in\Delta_{n}^{+}:\beta\leq\xi_{1}\text{ or }\xi_{2}\},\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{l-2}+\phi_{l-1}+\phi_{l}\}),(-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{l-2}+\phi_{l-1}+\phi_{l}\},\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\}),(-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+\cdots+\phi_{i-1}+2\phi_{i}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\},\{\beta\in\Delta_{n}^{+}:\beta\geq\phi_{1}+\cdots+\phi_{i-2}+2\phi_{i-1}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\})(3\leq i\leq l-2,l\geq 5)\), and \((-\{\beta\in\Delta_{n}^{+}:\beta\leq\phi_{1}+2\phi_{2}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_{l}\},\phi)\). Thus the number of equivalence classes of discrete series representations of \(SO_{0}(2,m)\) with trivial infinitesimal character is \(2l\) if \(m=2l-2,l\geq 3.\) If \(\mathfrak{g}=\mathfrak{d}_{2},\) then obviously the number is \(4.\)

**Remark 4.2**.: _Note that the set of all Hodge types \((R_{+}(\mathfrak{q}),R_{-}(\mathfrak{q}))\) of irreducible unitary representations \(A_{\mathfrak{q}}\) is given by_ \[\begin{cases}\{(i,j):i,j\in\mathbb{N}\cup\{0\},l\leq i+j\leq|\Delta_{n}^{+}|\}\cup\{(i,i):0\leq i\leq[\frac{|\Delta_{n}^{+}|}{2}]\}&\text{if }\mathfrak{g}=\mathfrak{b}_{l},\\ \{(i,j):i,j\in\mathbb{N}\cup\{0\},l-1\leq i+j\leq|\Delta_{n}^{+}|\}\cup\{(i,i):0\leq i\leq[\frac{|\Delta_{n}^{+}|}{2}]\}&\text{if }\mathfrak{g}=\mathfrak{d}_{l};\end{cases}\] _where \(|\Delta_{n}^{+}|\) is the number of roots in \(\Delta_{n}^{+}.\) The discrete series representations with trivial infinitesimal character correspond to the set \(\{(i,j):i,j\in\mathbb{N}\cup\{0\},i+j=|\Delta_{n}^{+}|\}.\)_
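Remark 4.2 can also be checked against the enumeration from the sketch after Theorem 1.1 for small \(l\): the Hodge type of a sign pattern is simply (number of positive entries, number of negative entries). A sketch of the comparison (ours; it inherits the grid-sufficiency assumption of `sign_patterns`):

```python
# Reuses sign_patterns(g, l) from the sketch after Theorem 1.1.
def hodge_types(g, l):
    """Hodge types (R+, R-) = (#positive, #negative entries) of the patterns."""
    return {(sum(s > 0 for s in p), sum(s < 0 for s in p))
            for p in sign_patterns(g, l)}

for g, l in [('b', 2), ('b', 3), ('d', 3), ('d', 4)]:
    n = 2 * l - 1 if g == 'b' else 2 * l - 2          # |Delta_n^+|
    lo = l if g == 'b' else l - 1
    expected = {(i, j) for i in range(n + 1) for j in range(n + 1)
                if lo <= i + j <= n} | {(i, i) for i in range(n // 2 + 1)}
    print(g, l, hodge_types(g, l) == expected)
```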
So there exists a positive root system \(\Delta_{\mathfrak{q}}^{+}\) of \(\Delta\) containing \(\Delta_{t}^{+}\), and a subset \(\Gamma\) of \(\Phi_{\mathfrak{q}}\), the set of all simple roots in \(\Delta_{\mathfrak{q}}^{+}\), such that \(\mathfrak{q}=\mathfrak{l}\oplus\mathfrak{u}\), where \(\mathfrak{l}=\mathfrak{h}\oplus\sum_{n_{\phi}(\alpha)=0\text{ for all }\phi\in\Gamma}\mathfrak{g}^{\alpha}\) is the Levi subalgebra of \(\mathfrak{q}\) and \(\mathfrak{u}=\sum_{n_{\phi}(\alpha)>0\text{ for some }\phi\in\Gamma}\mathfrak{g}^{\alpha}\) is the nilradical of \(\mathfrak{q}\); here \(\alpha=\sum_{\phi\in\Phi_{\mathfrak{q}}}n_{\phi}(\alpha)\phi\in\Delta\). Note that the Levi subalgebra \(\mathfrak{l}\) is the direct sum of an \(|\Gamma|\)-dimensional centre and a semisimple Lie algebra whose Dynkin diagram is the subdiagram of the Dynkin diagram of \(\mathfrak{g}\) consisting of the vertices \(\Phi_{\mathfrak{q}}\setminus\Gamma\). If \(\mathfrak{q}\) is a parabolic subalgebra which contains \(\mathfrak{h}\oplus\sum_{\alpha\in\Delta_{t}^{+}}\mathfrak{g}^{\alpha}\), there are many positive root systems of \(\Delta\) containing \(\Delta_{t}^{+}\cup\Delta(\mathfrak{u}\cap\mathfrak{p}_{-})\cup\Delta(\mathfrak{u}\cap\mathfrak{p}_{+})\). For example, \(\Delta_{t}^{+}\cup\Delta(\mathfrak{u}\cap\mathfrak{p}_{-})\cup(\Delta_{n}^{+}\setminus(-\Delta(\mathfrak{u}\cap\mathfrak{p}_{-})))\) is a positive root system of \(\Delta\), as we have seen in the proof of Th. 1.1(ii) that there exists a non-singular linear function \(\lambda^{\prime}\) on \(\mathfrak{h}_{\mathbb{R}}\) such that \(\lambda^{\prime}\) is dominant with respect to \(\Delta_{t}^{+}\cup\Delta(\mathfrak{u}\cap\mathfrak{p}_{-})\cup(\Delta_{n}^{+}\setminus(-\Delta(\mathfrak{u}\cap\mathfrak{p}_{-})))\). We define \(\Delta_{\mathfrak{q}}^{+}=\Delta_{t}^{+}\cup\Delta(\mathfrak{u}\cap\mathfrak{p}_{-})\cup(\Delta_{n}^{+}\setminus(-\Delta(\mathfrak{u}\cap\mathfrak{p}_{-})))\). In Tables 1 and 2, we have determined \(\Phi_{\mathfrak{q}}\), \(\Gamma\), and \(Y_{\mathfrak{q}}\) for each \(\theta\)-stable parabolic subalgebra containing \(\mathfrak{h}\oplus\sum_{\alpha\in\Delta_{t}^{+}}\mathfrak{g}^{\alpha}\). From Tables 1 and 2, we can see that \(Y_{\mathfrak{q}}\) is either a singleton, or \(\frac{SU(k)}{S(U(1)\times U(k-1))}\ (k\geq 2)\), or \(\frac{SO(2k+1)}{SO(2)\times SO(2k-1)}\ (k\geq 1)\), or \(\frac{SO(2k)}{SO(2)\times SO(2k-2)}\ (k\geq 2)\). We have \(P(\text{singleton},t)=1\), \(P(\frac{SU(k)}{S(U(1)\times U(k-1))},t)=1+t^{2}+t^{4}+\cdots+t^{2k-2}\) for all \(k\geq 2\), \(P(\frac{SO(2k+1)}{SO(2)\times SO(2k-1)},t)=1+t^{2}+t^{4}+\cdots+t^{4k-2}\) for all \(k\geq 1\), and \(P(\frac{SO(2k)}{SO(2)\times SO(2k-2)},t)=1+t^{2}+t^{4}+\cdots+t^{2k-4}+2t^{2k-2}+t^{2k}+\cdots+t^{4k-4}\) for all \(k\geq 2\). See [5].
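A remark that may help in reading what follows (a standard fact about these spaces, added here for orientation): each \(Y_{\mathfrak{q}}\) listed above is a compact Hermitian symmetric space, so its complex cohomology is entirely of Hodge type \((i,i)\). The one-variable polynomials just listed therefore already determine the full Hodge decomposition:

\[\dim H^{i,i}(Y_{\mathfrak{q}};\mathbb{C})=\big[\text{coefficient of }t^{2i}\text{ in }P(Y_{\mathfrak{q}},t)\big],\qquad H^{i,j}(Y_{\mathfrak{q}};\mathbb{C})=0\ \text{ for }i\neq j.\]

For example, \(\frac{SU(k)}{S(U(1)\times U(k-1))}\cong\mathbb{CP}^{k-1}\) has \(\dim H^{i,i}=1\) for \(0\leq i\leq k-1\), matching \(P=1+t^{2}+\cdots+t^{2k-2}\).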
Since \(H^{r}(\mathfrak{g},K;A_{\mathfrak{q},K})=H^{p,q}(\mathfrak{g},K;A_{\mathfrak{q},K})\cong H^{p-R_{+}(\mathfrak{q}),q-R_{-}(\mathfrak{q})}(Y_{\mathfrak{q}};\mathbb{C})\) for unique non-negative integers \(p,q\) with \(p+q=r\), \(p-q=R_{+}(\mathfrak{q})-R_{-}(\mathfrak{q})\), we write the two-variable Poincare polynomial \(P_{\mathfrak{q}}(x,t)\) for \(H^{*}(\mathfrak{g},K;A_{\mathfrak{q},K})\); the coefficient of the term \(x^{p}t^{q}\) in \(P_{\mathfrak{q}}(x,t)\) is \(\dim(H^{p,q}(\mathfrak{g},K;A_{\mathfrak{q},K}))\).

[Tables 1 and 2: \(\Phi_{\mathfrak{q}}\), \(\Gamma\), and \(Y_{\mathfrak{q}}\) for each \(\theta\)-stable parabolic subalgebra containing \(\mathfrak{h}\oplus\sum_{\alpha\in\Delta_{t}^{+}}\mathfrak{g}^{\alpha}\).]

## Acknowledgement

Both authors acknowledge the financial support from the Department of Science and Technology (DST), Govt. of India under the Scheme "Fund for Improvement of S&T Infrastructure (FIST)" [File No. SR/FST/MS-I/2019/41]. Ankita Pal acknowledges the financial support from the Council of Scientific and Industrial Research (CSIR) [File No. 08/155(0091)/2021-EMR-I].
2302.00075
Cooperation and the social brain hypothesis in primate social networks
The social brain hypothesis states that the relative size of the neocortex is larger for species with higher social complexity as a result of evolution. Various lines of empirical evidence have supported the social brain hypothesis, including evidence from the structure of social networks. Social complexity may itself positively impact cooperation among individuals, which occurs across different animal taxa and is a key behavior for successful group living. Theoretical research has shown that particular structures of social networks foster cooperation more easily than others. Therefore, we hypothesized that species with a relatively large neocortex tend to form social networks that better enable cooperation. In the present study, we combine data on brain and body mass, data on social networks, and theory on the evolution of cooperation on networks to test this hypothesis in primates. We have found a positive effect of brain size on cooperation in social networks even after controlling for the effect of other structural properties of networks that are known to promote cooperation.
Neil G. MacLaren, Lingqi Meng, Melissa Collier, Naoki Masuda
2023-01-31T20:15:06Z
http://arxiv.org/abs/2302.00075v2
# Cooperation and the social brain hypothesis in primate social networks ###### Abstract The social brain hypothesis states that the relative size of the neocortex is larger for species with higher social complexity as a result of evolution. Various lines of empirical evidence have supported the social brain hypothesis, including evidence from the structure of social networks. Social complexity may itself positively impact cooperation among individuals, which occurs across different animal taxa and is a key behavior for successful group living. Theoretical research has shown that particular structures of social networks foster cooperation more easily than others. Therefore, we hypothesized that species with a relatively large neocortex tend to form social networks that better enable cooperation. In the present study, we combine data on brain and body mass, data on social networks, and theory on the evolution of cooperation on networks to test this hypothesis in primates. We have found a positive effect of brain size on cooperation in social networks even after controlling for the effect of other structural properties of networks that are known to promote cooperation. ## I Introduction The social brain hypothesis posits that the relative size of the neocortex in the brain is positively correlated with social complexity [1]. The hypothesis has obtained quantitative support in terms of various indices of social complexity for primates and other taxa [2]. Social networks are a key aspect of social complexity [3; 4; 5]. In fact, various social network measures have been related to the social brain hypothesis. The original study in this area showed that group size is correlated with neocortex size in primates [6]. Further work demonstrated that the number of contacts an individual has, which is called the degree of the node in network analysis, is positively correlated with relative neocortex size in primate species [7] and the size of the amygdala and some other brain regions in human individuals [8; 9; 10]. Other network indices such as the so-called complexity of the network [8], edge connectivity and two-clans [11], and global efficiency [4] have also been shown to be correlated with the size of the neocortex [4; 11] or amygdala [8] of primates including humans. Social networks have functions. Focusing on the animal kingdom, the structure of social networks affects, for example, the speed of diffusion of information and diseases, mating behavior, predator avoidance, communication efficiency, and group movement [4; 12; 13; 14]. However, social networks also have costs. For example, network structure determines disease transmission potential and epidemic outcomes in populations, because a pathogen can only spread if the relevant form of contact exists between two individuals. Networks with high degree heterogeneity (i.e. high variation in the number of contacts among individuals) have increased transmission potential due to the presence of superspreaders which cause rapid, explosive outbreaks of disease in a population [15]. Animal social networks that we observe today may therefore be a result of evolutionary processes in which more advantageous network structures have proliferated at the expense of less advantageous structures under restrictions imposed by the environment and trade-offs between different objectives. One function for which social networks are particularly relevant is cooperation. 
Individuals of various animal species cooperate with each other, even cooperating with non-kin and in social dilemma situations in which non-cooperation is more lucrative than cooperation [16; 17; 18; 19] (but see [20]). Although cooperation under social dilemmas is an evolutionary puzzle, theoretical research has suggested various mechanisms enabling cooperation [21; 22]. Network reciprocity, or the effect of the network structure, is one mechanism to promote cooperation [23; 24; 25; 26]. Specifically, a relatively small node degree (i.e., the number of neighboring individuals per individual) [27; 28] and heterogeneity among individuals in the network in terms of the degree [29; 30] can both promote cooperation compared to well-mixed populations depending on the assumptions underlying the evolutionary process models. In the present study, we extend the social brain hypothesis to the domain of cooperation. We ask whether species with a large neocortex size form social networks that foster cooperation to a greater extent than networks for other species. While cooperation occurs in various animal taxa [17; 19], here we focus on primates because neocortex data are available for many primate species and various indices that correlate with neocortex size have been documented for primates, as we reviewed above. Recently developed mathematical theory enables us to quantify the ease of cooperation for networks with arbitrary structure [28]. We use this theory and look for significant determinants of cooperation as a function of the neocortex size and other key properties of the network structure including those which are known to affect cooperation. ## II Methods ### Threshold benefit-to-cost ratio for cooperation We assume a network with \(N\) nodes that is connected, undirected, and weighted. On the given network, we consider the gift-giving game, which is a simple variant of the prisoner's dilemma game. In the gift-giving game, one player, called the donor, decides whether or not to pay a cost \(c\)\((>0)\) to benefit another player, called the recipient. If the donor pays \(c\), which we refer to as cooperation, then the recipient gains \(b\)\((>c)\). If the donor does not, which we refer to as defection, then the recipient gains nothing. The donor receives a higher payoff by defecting than cooperating. However, the average payoff over the players in the network is larger when the donor cooperates rather than defects. We assume that the gift-giving game occurs twice on each edge of the network by swapping the roles of the donor and recipient in a single round of the evolutionary dynamics. The payoff of each player \(i\) in each round is given as the weighted average of the payoff that \(i\) obtains over all \(i\)'s neighbors, where the weight for averaging is proportional to the edge weight. For updating the strategy of the players in each round of evolutionary dynamics, we assume the death-birth process with selection on birth, which is known to foster cooperation under certain conditions [27; 28]. In the death-birth process with selection on birth, one first selects a node \(i\) to be replaced uniformly at random. Then, one selects a parent to reproduce with the probability proportional to its fitness among the neighbors of \(i\). The fitness of each player is assumed to be linearly but only weakly depending on the payoff, which is the regime called the weak selection. 
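To make the payoff accounting concrete, here is a minimal sketch (our illustration; function and variable names are ours, and it assumes a symmetric weight matrix with zero diagonal and no isolated nodes):

```python
import numpy as np

def round_payoffs(W, x, b, c):
    """Edge-weight-averaged payoff of each player after one round of the
    gift-giving game on a weighted network. W is the symmetric weight matrix
    (zero diagonal); x[i] = 1 if player i cooperates and 0 otherwise.
    Playing both roles on every edge, player i pays the cost c when donating
    and receives b from each cooperating neighbor j, weighted by w_ij / s_i."""
    s = W.sum(axis=1)           # node strengths (assumes no isolated nodes)
    P = W / s[:, None]          # weights for averaging over neighbors
    return P @ (b * x) - c * x  # payoff_i = sum_j p_ij (b x_j - c x_i)

# Example: a weighted triangle with a single cooperator (node 0)
W = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
print(round_payoffs(W, x=np.array([1.0, 0.0, 0.0]), b=3.0, c=1.0))
```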
Under this setting, without mutation, all players will eventually become either cooperators (i.e., fixation of cooperation) or defectors (i.e., fixation of defection). We use the recently developed random-walk-based theory that enables one to calculate the fixation probability for cooperation for arbitrary network structure [28]. We assume that just one node, selected uniformly at random, is initially a cooperator and the other \(N-1\) nodes are defectors. If the fixation probability for cooperation exceeds the baseline value \(1/N\), we say that natural selection favors cooperation [27; 28; 31]. Under the weak selection limit, Allen et al. showed the expression for the threshold benefit-to-cost ratio, denoted by \((b/c)^{*}\), such that the fixation probability for cooperation is larger than \(1/N\) when \(b/c>(b/c)^{*}\), i.e., Eq. (2) in Ref. [28]. If \((b/c)^{*}\) is smaller, the network supports cooperation more strongly because natural selection favors cooperation for relatively small values of \(b/c\). We calculated \((b/c)^{*}\) for each network using our in-house code in Python 3.10, which implements the procedures described in [28]. (A brief illustrative sketch of this computation is given below.)

### Data

The data for this study come from the Animal Social Network Repository (ASNR) [32; 33]. The ASNR contains 770 non-human social networks from eight animal classes and 69 species. For each network in this data set, each node represents an individual animal. Edges represent a specific type of contact between two animals, such as grooming in primates and trophallaxis in ants, as well as more general contact such as group membership and spatial proximity. There are 114 primate social networks in the ASNR, including 60 grooming networks, 31 spatial proximity networks, 10 mating networks, and 13 networks with other contact types. Most sampled populations are free-ranging (84), with some captive (18) and some semi-ranging (7) populations, as well as five populations for which the type was not recorded. There are 99 catarrhine primate networks, 13 platyrrhine networks, and 2 strepsirrhine networks. Sampling of the different contrasts represented in the ASNR is thus somewhat unbalanced but reflects the sampling effort present in the literature. To test our hypothesis we require that, to the best extent possible, the edges represent prosocial contacts between individuals. Other contact types, such as dominance or mating, may reflect motives that are not relevant to the spread of cooperative behaviors, and proximity-based networks may reflect individuals who are co-located by chance or interest in a common resource rather than for social interaction. We therefore used the ASNR networks with the interaction types labeled "grooming", "physical contact", and "overall mix"; the "overall mix" category captures one additional network that recorded grooming behavior. We thus obtained 67 possible networks, which we regarded as undirected weighted networks. Thirteen out of the 67 networks yielded negative \((b/c)^{*}\) values, which imply that spiteful behavior evolves instead of cooperation [28; 34]. We discarded these networks because we are interested in cooperation under social dilemma situations. Additionally, we discarded one network that was composed of two disconnected dyads and used the remaining 53 connected networks for our analysis. Most species had a single network in the repository. The exceptions were _P. cynocephalus_ (which had 23 networks), _M. fascicularis_ (2), _M. fuscata_ (4), _M. mulatta_ (9), and _M. radiata_ (2).
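Before continuing with the data reduction, here is the illustrative sketch of the \((b/c)^{*}\) computation referenced above (our minimal version, not the authors' in-house code). It implements the coalescence-time recipe behind Eq. (2) of Ref. [28] for a connected, undirected, weighted network:

```python
import numpy as np

def critical_ratio(W):
    """Threshold benefit-to-cost ratio (b/c)* of Eq. (2) in Allen et al. [28]
    for a connected, undirected, weighted network with weight matrix W."""
    N = W.shape[0]
    s = W.sum(axis=1)
    P = W / s[:, None]                 # random-walk step probabilities p_ij
    pi = s / s.sum()                   # stationary distribution of the walk

    # Coalescence (remeeting) times tau_ij, solving
    #   tau_ii = 0,  tau_ij = 1 + (1/2) sum_k (p_ik tau_kj + p_jk tau_ik),
    # written as one linear system over the N^2 pair variables.
    idx = lambda i, j: i * N + j
    A = np.eye(N * N)
    rhs = np.zeros(N * N)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue               # keeps tau_ii = 0
            r = idx(i, j)
            rhs[r] = 1.0
            for k in range(N):
                A[r, idx(k, j)] -= 0.5 * P[i, k]
                A[r, idx(i, k)] -= 0.5 * P[j, k]
    tau = np.linalg.solve(A, rhs).reshape(N, N)

    def t(n):                          # t_n = sum_i pi_i sum_j p^(n)_ij tau_ij
        Pn = np.linalg.matrix_power(P, n)
        return pi @ (Pn * tau).sum(axis=1)

    return t(2) / (t(3) - t(1))

# Example: a ring of 20 nodes with unit weights
N = 20
W = np.zeros((N, N))
for i in range(N):
    W[i, (i + 1) % N] = W[(i + 1) % N, i] = 1.0
print(critical_ratio(W))
```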
For the species with multiple networks, we took the median for \((b/c)^{*}\) and for the network-based explanatory variables explained in Section II.3 to prevent a few species, such as _P. cynocephalus_ and _M. mulatta_, from dominating the set of networks to be analyzed. In this manner, we reduced the 53 networks to observations on 17 species for further analysis. We used the species-level neocortex ratio (NCR) estimate from [7] for all but one species, _Colobus guereza_; a species-level NCR estimate was not available in [7], so we used the genus-level NCR estimate from [6]. Additionally, we used the total brain mass and body mass data from [35] for all species except _Papio papio_, for which the data are not present. For _Papio papio_, we used the data of the closely related species _P. cynocephalus_ [36]. We included brain and body size as simpler measures of species' anatomical and physiological complexity that may also correlate with sociality [37]. These three measures (i.e., brain mass, body mass, and NCR) are highly correlated with each other (see Section III).

### Analysis

Data analysis was conducted in R 4.2 [38]. We used the "MuMIn" package [39] to implement the model selection procedure described below. Code in R and Python to reproduce these analyses is available at [https://github.com/ngmaclaren/cooperation-threshold](https://github.com/ngmaclaren/cooperation-threshold). We used generalized linear models (GLMs) to test whether NCR and other variables were associated with the ease of cooperation, \((b/c)^{*}\). We considered eight explanatory variables: NCR, brain mass in grams, body mass in grams, and five network indices, some of which are known to influence \((b/c)^{*}\). The five network indices are the number of nodes in the network (denoted by \(N\)), the average degree over the \(N\) nodes (denoted by \(\langle k\rangle\)), the average node strength (i.e., the average of the weighted degree over the \(N\) nodes), denoted by \(\langle s\rangle\), the average clustering coefficient (denoted by \(C\)), i.e., the average over all nodes of the number of complete triangles divided by the number of possible triangles involving each node [40; 41], and the average weighted clustering coefficient (denoted by \(\tilde{C}_{\rm w}\)), which is calculated similarly to the unweighted version except that it uses the geometric mean of the edge weights instead of a count of edges [42]. Because brain mass, body mass, \(\langle s\rangle\), and \(\tilde{C}_{\rm w}\) are positive and obey right-skewed distributions, we used the natural logarithm transform of each of these variables. We began our modeling process from a position of relative ignorance, including these eight explanatory variables as predictors. By design, our outcome variable, \((b/c)^{*}\), is positive and continuous, suggesting a model with gamma-distributed errors. To test our choice, we built five different models, each with all eight explanatory variables, with different error models and link functions (i.e., gamma and Gaussian distributions with both inverse and log links, and a quasi-Poisson model) and calculated the deviances of each [43]. As expected, the gamma models fit well (\(\chi^{2}\) test with \(d.f.=8\); \(p=0.942\) and \(0.986\) for the inverse and log links, respectively), whereas the other models did not (\(p<0.001\) for each). The residual deviances associated with both gamma-based models are small (inverse link: 2.87, log link: 1.81), further suggesting good fit [43].
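As an illustration of this modeling setup, the following is a minimal Python sketch (ours; the authors' released workflow is in R with MuMIn, see the repository above) of a gamma GLM with the inverse link plus an exhaustive AICc comparison. The data and the AICc parameter-count convention are stand-ins, not the study's:

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-in for the 17-species table (illustrative values only).
n = 17
df = pd.DataFrame({
    "mean_degree": rng.uniform(2, 20, n),          # <k>
    "log_wcc": np.log(rng.uniform(0.05, 0.9, n)),  # ln(weighted clustering)
    "ncr": rng.uniform(1.5, 3.5, n),               # neocortex ratio
})
df["bc_star"] = rng.gamma(shape=8.0, scale=1.0, size=n)  # positive outcome

def aicc(res, n_obs):
    """AICc = AIC + 2k(k+1)/(n-k-1), counting slopes, intercept, and
    dispersion in k (one common convention for gamma GLMs)."""
    k = int(res.df_model) + 2
    aic = -2.0 * res.llf + 2.0 * k
    return aic + 2.0 * k * (k + 1) / (n_obs - k - 1)

candidates = ["mean_degree", "log_wcc", "ncr"]
fits = []
for size in range(len(candidates) + 1):
    for combo in itertools.combinations(candidates, size):
        X = sm.add_constant(df[list(combo)]) if combo else np.ones((n, 1))
        # Gamma errors; statsmodels' default Gamma link is the canonical
        # inverse link, the choice discussed in the text.
        res = sm.GLM(df["bc_star"], X, family=sm.families.Gamma()).fit()
        fits.append((aicc(res, n), combo))

for score, combo in sorted(fits, key=lambda f: f[0])[:5]:
    print(f"AICc = {score:7.2f}  predictors: {list(combo) or 'intercept only'}")
```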
We chose the inverse link because it is the canonical link function for the gamma distribution [43] and because it improves interpretability in this analysis. The number of explanatory variables (i.e., eight) is relatively large given the number of observations (i.e., 17). Therefore, we evaluated all possible models with up to five explanatory variables and calculated the AICc for each model. AICc is a modification of the Akaike Information Criterion (AIC) that is preferred for model selection when data sets are relatively small [44]. Because we use the inverse link function, a positive coefficient implies that the inverse of the outcome variable, \((b/c)^{*}\), increases as the explanatory variable increases. In other words, positive coefficients suggest that an increase in the explanatory variable decreases \((b/c)^{*}\) on average, promoting cooperation in the network.

## III Results

There were 206 models, or combinations of the explanatory variables, with five or fewer explanatory variables for predicting \((b/c)^{*}\). We show the sorted AICc values for these models in Fig. 1. The three best models in terms of the AICc have similar AICc values compared to the other models, with \(\Delta\)AICc \(<1\), where \(\Delta\)AICc is the difference between the AICc for a model and that for the model minimizing the AICc. The fourth and fifth best models have \(\Delta\)AICc \(\approx 2\). All the other models have \(\Delta\)AICc \(>3\), forming a series of models with few obvious cut-points until the poorest models. Therefore, we focus on the five models with \(\Delta\)AICc \(<3\); we show these five models in Table 1.

Figure 1: AICc for all possible GLMs with between zero and five explanatory variables.

Table 1 indicates that no model among the best five models has more than three explanatory variables. Therefore, adding more explanatory variables would not be useful in better explaining \((b/c)^{*}\) across the different networks. The two network features, i.e., the average degree and the weighted clustering coefficient, have consistently negative coefficients, with the average degree appearing in each of the best five models and the weighted clustering coefficient appearing in all but one model. The present result that a smaller average degree promotes cooperation is consistent with previous findings [27; 28].

\begin{table}
\begin{tabular}{l c c c}
\hline\hline
Model 1 & Coef. & SE & \(p\) \\
\hline
Intercept & \(-0.344\) & \(0.176\) & \(0.072\) \\
\(\langle k\rangle\) & \(-0.009\) & \(0.003\) & \(0.020\) \\
\(\ln(\tilde{C}_{\rm w})\) & \(-0.057\) & \(0.025\) & \(0.043\) \\
\(\ln(\text{Body mass})\) & \(0.039\) & \(0.020\) & \(0.075\) \\
\(\Delta\)AICc & \(0.00\) & & \\
\hline
Model 2 & Coef. & SE & \(p\) \\
\hline
Intercept & \(-0.295\) & \(0.163\) & \(0.093\) \\
\(\langle k\rangle\) & \(-0.008\) & \(0.003\) & \(0.020\) \\
\(\ln(\tilde{C}_{\rm w})\) & \(-0.059\) & \(0.025\) & \(0.036\) \\
\(\ln(\text{Brain mass})\) & \(0.064\) & \(0.036\) & \(0.095\) \\
\(\Delta\)AICc & \(0.13\) & & \\
\hline
Model 3 & Coef. & SE & \(p\) \\
\hline
Intercept & \(-0.201\) & \(0.127\) & \(0.137\) \\
\(\langle k\rangle\) & \(-0.010\) & \(0.004\) & \(0.019\) \\
\(\ln(\tilde{C}_{\rm w})\) & \(-0.047\) & \(0.028\) & \(0.115\) \\
NCR & \(0.097\) & \(0.059\) & \(0.121\) \\
\(\Delta\)AICc & \(0.69\) & & \\
\hline
Model 4 & Coef. & SE & \(p\) \\
\hline
Intercept & \(-0.021\) & \(0.105\) & \(0.845\) \\
\(\langle k\rangle\) & \(-0.009\) & \(0.004\) & \(0.055\) \\
\(\ln(\tilde{C}_{\rm w})\) & \(-0.067\) & \(0.040\) & \(0.114\) \\
\(\Delta\)AICc & \(1.83\) & & \\
\hline
Model 5 & Coef. & SE & \(p\) \\
\hline
Intercept & \(-0.182\) & \(0.124\) & \(0.165\) \\
\(\langle k\rangle\) & \(-0.009\) & \(0.003\) & \(0.015\) \\
NCR & \(0.138\) & \(0.054\) & \(0.024\) \\
\(\Delta\)AICc & \(1.95\) & & \\
\hline\hline
\end{tabular}
\end{table}

Table 1: The best five models, i.e., the models with \(\Delta\)AICc \(<3\). A positive coefficient suggests that the variable is associated with a smaller \((b/c)^{*}\), i.e., easier evolution of cooperation. SE stands for the standard error. We remind that \(\langle k\rangle\) and \(\tilde{C}_{\rm w}\) represent the average degree and the weighted clustering coefficient, respectively.

Body mass and brain mass, which are highly correlated with each other (with Pearson correlation coefficient \(r=0.953\)), are each the third predictor in the best two models, which are nearly indistinguishable in terms of the AICc; \(\Delta\)AICc for Model 2 is 0.13. Compared to these best two models, the third best model uses NCR in place of body or brain mass as the third explanatory variable. Note that NCR is also highly correlated with body mass (\(r=0.81\)) and brain mass (\(r=0.84\)), partly because we are only analyzing primates. The fourth best model only contains the average degree and the weighted clustering coefficient. We note that the number of nodes \(N\), the average weighted degree \(\langle s\rangle\), and the unweighted clustering coefficient \(C\) did not appear among the best five models. In each of the best five models in which a brain size variable (i.e., brain mass or NCR) appears (i.e., Models 2, 3, and 5), the coefficient for the brain size variable is positive. This result suggests that social networks for primates with a larger brain size tend to better accommodate cooperative behavior if the average degree and the clustering coefficient (i.e., abundance of triangles) are the same. We visualize this relationship in Fig. 2. However, formally, either brain mass or NCR only satisfies \(p<0.05\) for Model 5 (\(p=0.024\)); brain mass and NCR yield \(p=0.095\) in Model 2 and \(p=0.121\) in Model 3, respectively. We visualize in Fig. 3 the coefficient values and associated 95% confidence intervals for each explanatory variable for each of the best five models. In Fig. 3, a circle's position and the span of the line segment in the horizontal direction indicate the value of the estimated coefficient and its 95% confidence interval, respectively. The figure shows that, although some 95% confidence intervals for the brain size variables include zero, the value and sign of the coefficient estimates are consistent, and the confidence intervals only marginally cross zero. As expected, the coefficients for the average degree and weighted clustering coefficient are consistently negative. We thus conclude that our data analysis provides evidence in favor of our hypothesis: brain size, measured in two different ways, is positively associated with the ease with which cooperation spreads in primate social networks, albeit not strongly.

## IV Discussion

Consistent with network reciprocity, animal social networks foster cooperation in terms of fixation probability [45]. Advancing this finding one step further, we found positive support for the social brain hypothesis in terms of network reciprocity in primates.
In reaching our conclusion, we controlled for major network properties that affect cooperation, such as the average degree and the clustering coefficient, as well as the size of the entire group. We point out that the average degree, i.e., the average number of others one individual contacts, is also a major outcome variable that the social brain hypothesis aims to explain in terms of brain size. Therefore, the present results are orthogonal to these previous ones. Exploration of network properties that covary with both brain size and \((b/c)^{*}\) and finding generative mechanisms of such network properties warrant future work.

Figure 2: Threshold for cooperation, \((b/c)^{*}\), as a function of the NCR for Model 3. Each circle represents a primate species. The solid line represents the predicted \((b/c)^{*}\). The dotted lines indicate twice the standard error of prediction.

We found that the weighted clustering coefficient negatively contributes to the evolution of cooperation and that the unweighted clustering coefficient does not have an effect, at least for the best five models. This result is apparently inconsistent with spatial reciprocity, which dictates that high clustering in networks promotes cooperation [46; 47; 22]. In fact, these results have been derived for the fraction of cooperators in the quasi-stationary state of evolutionary dynamics in relatively large networks rather than the fixation probability for the cooperator strategy; we examined the latter quantity in this study. The effect of clustering on the fixation probability for cooperation is not systematically known. For example, some numerical simulations suggest that clustering, which is present in most empirical networks, does not facilitate the fixation of cooperation [48; 27]. Therefore, our results are in fact not contradictory to the known results for spatial reciprocity, and fixation of cooperation in clustered networks remains to be investigated. Cooperative group living is often advantageous in the animal kingdom because it can provide protection from predators and increase the efficiency of foraging tactics [49; 18]. However, one of the most commonly cited disadvantages of cooperative group living is the increase in disease transmission potential [49; 50]. In fact, previous work suggests that the average degree is the most important aspect of network structure in determining the transmission potential for pathogens on a network [51]. Our results show that the average degree is negatively associated with the evolution of cooperation, a finding supported by previous theoretical work [27]. Given that small average degrees are beneficial for both enhancing cooperation and reducing pathogen transmission opportunity, cooperation and protection against disease transmission potential might have coevolved through a decrease in the average degree of social networks. Maintaining contacts is also costly for individuals. However, a large average degree helps the robustness of networks against node and edge failures [14; 52].

Figure 3: Coefficient estimates for the best five models. The circles represent the coefficient values. The lines represent the 95% confidence intervals. NCR: neocortex ratio, \(\langle k\rangle\): average degree, \(\tilde{C}_{\text{w}}\): average weighted clustering coefficient.
We may be able to further the discussion of the evolution of network structure and the social brain hypothesis by simultaneously taking into account multiple functions of animal society, such as cooperation, protection against infection, robustness, and communication efficiency. The present work also opens avenues for further work to explore the intersection between the social brain hypothesis, networks, and cooperation. Investigation of social networks of species other than primates is worthwhile. Additional study of important contrasts within the primates--such as between catarrhine and platyrrhine primates, or between captive and wild groups--can also be informative. We are also aware that most of the social networks we used are grooming networks. Network structure may vary according to the type of prosocial contact even for the same group of animals [51], which is worthy of investigation. Although further comparative work along these lines is currently limited by available data [32], various technological and algorithmic developments for automatic data collection [14; 53] are expected to allow us to access more data and explore these topics in the near future.

###### Acknowledgements.

N. Masuda acknowledges support from AFOSR European Office (under Grant No. FA9550-19-1-7024), the Japan Science and Technology Agency (JST) Moonshot R&D (under Grant No. JPMJMS2021), and the National Science Foundation (under Grant No. 2052720). M. Collier acknowledges funding from the Morris Animal Foundation (under Grant No. D22ZO-059). We thank Shweta Bansal for her thoughtful comments on this work. We thank Pratha Sah, Jose Mendez, Grant Rosensteel, Elly Meng, and Sania Ali for their contributions to the development and growth of the Animal Social Network Repository.
2309.14414
Characterizing the line emission from molecular clouds. II. A comparative study of California, Perseus, and Orion A
$Aims.$ We characterize the molecular-line emission of three clouds whose star-formation rates span one order of magnitude: California, Perseus, and Orion A. $Methods.$ We use stratified random sampling to select positions representing the different column density regimes of each cloud and observe them with the IRAM-30m telescope. We cover the 3 mm wavelength band and focus our analysis on CO, HCN, CS, HCO+, HNC, and N2H+. $Results.$ We find that the line intensities depend most strongly on the H2 column density. A secondary effect, especially visible in Orion A, is a dependence of the line intensities on the gas temperature. We explored a method that corrects for temperature variations and show that, when it is applied, the emission from the three clouds behaves very similarly. CO intensities vary weakly with column density, while the intensity of traditional dense-gas tracers such as HCN, CS, and HCO+ varies almost linearly with column density. N2H+ differs from all other species in that it traces only cold dense gas. The intensity of the rare HCN and CS isotopologs reveals additional temperature-dependent abundance variations. Overall, the clouds have similar chemical compositions that, as the depth increases, are sequentially dominated by photodissociation, gas-phase reactions, molecular freeze-out, and stellar feedback in the densest parts of Orion A. Our observations also allowed us to calculate line luminosities for each cloud, and a comparison with literature values shows good agreement. We used our HCN data to explore the behavior of the HCN conversion factor, finding that it is dominated by the emission from the outermost cloud layers. It also depends strongly on the gas kinetic temperature. Finally, we show that the HCN/CO ratio provides a gas volume density estimate, and that its correlation with the column density resembles that found in extragalactic observations.
M. Tafalla, A. Usero, A. Hacar
2023-09-25T18:00:01Z
http://arxiv.org/abs/2309.14414v1
# Characterizing the line emission from molecular clouds ###### Abstract Context: Aims:We aim to characterize and compare the molecular-line emission of three clouds whose star-formation rates span one order of magnitude: California, Perseus, and Orion A. Methods:We used stratified random sampling to select positions representing the different column density regimes of each cloud and observed them with the IRAM 30 m telescope. We covered the 3 mm wavelength band and focused our analysis on CO, HCN, CS, HCO\({}^{+}\), HNC, and N\({}_{2}\)H\({}^{+}\). Results:We find that the line intensities depend most strongly on the H\({}_{2}\) column density, with which they are tightly correlated. A secondary effect, especially visible in Orion A, is a dependence of the line intensities on the gas temperature. We explored a method that corrects for temperature variations and show that, when it is applied, the emission from the three clouds behaves very similarly. CO intensities vary weakly with column density, while the intensity of traditional dense-gas tracers such as HCN, CS, and HCO\({}^{+}\) varies almost linearly with column density. N\({}_{2}\)H\({}^{+}\) differs from all other species in that it traces only cold dense gas. The intensity of the rare HCN and CS isotopologs reveals additional temperature-dependent abundance variations. Overall, the clouds have similar chemical compositions that, as the depth increases, are sequentially dominated by photodissociation, gas-phase reactions, molecular freeze-out, and stellar feedback in the densest parts of Orion A. Our observations also allowed us to calculate line luminosities for each cloud, and a comparison with literature values shows good agreement. We used our HCN(1-0) data to explore the behavior of the HCN conversion factor, finding that it is dominated by the emission from the outermost cloud layers. It also depends strongly on the gas kinetic temperature. Finally, we show that the HCN/CO ratio provides a gas volume density estimate, and that its correlation with the column density resembles that found in extragalactic observations. Conclusions: ## 1 Introduction Characterizing the large-scale emission of molecular clouds is necessary to determine their internal structure and star-formation properties and help connect galactic and extragalactic observations. It is, however, a challenging task due to the large size of the clouds and the limited bandwidth and pixel number of heterodyne receivers. The first efforts to characterize the molecular emission from full clouds focused on mapping the bright lines of CO and its isotopologs (e.g., Ungerechts and Thaddeus 1987; Dame et al. 2001; Ridge et al. 2006; Goldsmith et al. 2008). CO, however, is easily thermalized, so its emission is insensitive to the different density regimes of a cloud. In the past decade, a new generation of wide-band heterodyne receivers has made it possible to observe multiple lines simultaneously (Carter et al. 2012), and this has led to a new generation of multi-tracer studies of molecular clouds (Kauffmann et al. 2017; Pety et al. 2017; Shimajiri et al. 2017; Watanabe et al. 2017; Barnes et al. 2020). These studies have characterized cloud emission by making fully sampled maps, a technique that provides a very detailed picture of the gas distribution but often requires hundreds of hours of telescope time. For this reason, multiline studies have been restricted to single clouds (or parts of them), making it very difficult to compare the emission of different targets. 
While maps provide the detailed description required to characterize the distribution of cloud material into filaments and cores, or determine its velocity field, clouds are turbulent objects whose structure is expected to be mostly transient (Larson 1981; Heyer and Brunt 2004). It is therefore likely that many properties of a cloud do not depend on the small-scale details of its emission and can be captured without the need of making maps. Following this idea, Tafalla et al. (2021, hereafter Paper I) presented an alternative method for characterizing the emission of a molecular cloud by observing a limited number of positions selected using stratified random sampling. This technique is commonly used in polling (Cochran 1977) and selects the positions to be observed by first dividing the cloud into a number of column density bins and then choosing a set of random target positions from each bin. To apply this technique to the Perseus molecular cloud, Paper I used the column density map presented by Zari et al. (2016) and divided the cloud into ten logarithmically spaced H\({}_{2}\) column density bins. From each bin, a set of ten cloud positions were chosen at random, creating a sample of 100 target positions that were observed with the Institut de Radioastronomie Millimétrique (IRAM) 30 m telescope. An advantage of the stratified sampling technique is that it requires significantly less telescope time than mapping, allows deep integrations to be obtained at low column densities, and, as shown in Paper I, can accurately estimate basic emission properties of a cloud, such as the mean intensity and its dispersion inside each column density bin. These parameters can later be compared with the results from numerical simulations to test models of cloud formation (Priestley et al., 2023). In this paper we present the results obtained from sampling the emission of the California and Orion A clouds using the stratified random sampling technique, and we compare the results with those of the Perseus cloud already presented in Paper I. California and Orion A are highly complementary to Perseus because they are also nearby but are forming stars at very different rates. Distances to California, Perseus, and Orion A have been estimated as 470\(\pm\)24 pc, 294\(\pm\)15 pc, and 432\(\pm\)22 pc, respectively, by Zucker et al. (2019) using _Gaia_ Data Release 2 data, although these quantities should be considered as mean values given the complex 3D morphology of each cloud (Großschedl et al., 2018; Rezaei Kh. & Kainulainen, 2022). Concerning their star-formation activity, Lada et al. (2010) estimated star-formation rates that span one order of magnitude: 70 M\({}_{\odot}\) Myr\({}^{-1}\) for California, 150 M\({}_{\odot}\) Myr\({}^{-1}\) for Perseus, and 715 M\({}_{\odot}\) Myr\({}^{-1}\) for Orion A. Orion A represents the nearest high-mass star-forming cloud, and as a result, it has been the focus of intense observational efforts carried out at multiple wavelengths and spatial resolutions (Genzel & Stutzki, 1989). The large-scale distribution of CO and its isotopologs has been mapped repeatedly as radio telescopes improved in sensitivity and resolution (Kutner et al., 1977; Maddalena et al., 1986; Bally et al., 1987; Castets et al., 1990; Sakamoto et al., 1994; Nagahama et al., 1998; Wilson et al., 2005; Ripple et al., 2013; Nishimura et al., 2015; Kong et al., 2018). Additional molecular species have been mapped by Kauffmann et al.
(2017) using the Five College Radio Astronomy Observatory (FCRAO), Nakamura et al. (2019) using the Nobeyama 45 m radio telescope, and Yun et al. (2021) using the Taeduk Radio Astronomy Observatory (TRAO) 14 m antenna. More focused mapping of the so-called integral-shaped filament (ISF), where the formation of high-mass stars is taking place, has been done in C\({}^{18}\)O by Suri et al. (2019), H\({}^{13}\)CO\({}^{+}\) by Ikeda et al. (2007), N\({}_{2}\)H\({}^{+}\) and HC\({}_{3}\)N by Tatematsu et al. (2008, with further high resolution N\({}_{2}\)H\({}^{+}\) mapping carried out by Hacar et al. 2017a and Hacar et al. 2018), NH\({}_{3}\) by Friesen et al. (2017) as part of the Green Bank Ammonia Survey (GAS), and HCN-HNC by Hacar et al. (2020). In contrast with Orion A, the California molecular cloud has only recently been recognized as a distinct star-forming region (Lada et al., 2009), so its molecular emission has received less attention. Maps of most of its CO emission have been presented by Guo et al. (2021) and Lewis et al. (2021), while maps of other tracers have been restricted to the brightest regions of the cloud, namely L1478 (Chung et al., 2019, in CS, N\({}_{2}\)H\({}^{+}\), and HCO\({}^{+}\)) and L1482 (Alvarez-Gutierrez et al., 2021, in N\({}_{2}\)H\({}^{+}\), HCO\({}^{+}\), and HNC). Multiline observations of a selection of dense cores selected from _Herschel_ continuum data have been presented by Zhang et al. (2018).

## 2 Observations

### Sampling method

As mentioned in the Introduction and discussed in detail in Paper I, we used the stratified random sampling technique to select a representative list of cloud positions that will be subject to molecular-line observations. We used the H\({}_{2}\) column density as a proxy for the emission, an approach that was tested in Paper I and is consistent with the expectation from principal component analysis of different clouds, which shows that column density is the main predictor of the molecular line intensity (Ungerechts et al., 1997; Gratier et al., 2017). For California and Orion A, Lada et al. (2017) and Lombardi et al. (2014), respectively, have produced high-quality H\({}_{2}\) column density maps using far-IR continuum data obtained with the _Herschel_ Space Observatory (Pilbratt et al., 2010), and we relied on them for our application of the stratified random sampling. These maps are complementary to the column density map produced by the same group for Perseus (Zari et al., 2016) and used in Paper I. All these maps have been ultimately derived from maps of dust emission and absorption properties, so the \(N\)(H\({}_{2}\)) determination depends on assumptions about the gas-to-dust ratio and conversion between extinction bands. For this, we followed Lombardi et al. (2014), Zari et al. (2016), and Lada et al. (2017) and assumed the standard coefficients determined by Bohlin et al. (1978), Savage & Mathis (1979), and Rieke & Lebofsky (1985). Following Paper I, we divided the range of reliable H\({}_{2}\) column densities (\(\gtrsim 1.5\times 10^{21}\) cm\({}^{-2}\), Zari et al. 2016) into logarithmically spaced bins of 0.2 dex width. To reach the maximum column density measured for California and Orion A (\(\approx 5\times 10^{22}\) cm\({}^{-2}\) and \(\approx 3\times 10^{23}\) cm\({}^{-2}\), respectively), we required 8 and 12 column density bins.
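A schematic implementation of this bin construction and per-bin random draw (our sketch; function and parameter names are ours, assuming the column density map is available as a 2D array) might look as follows:

```python
import numpy as np

def stratified_sample(nh2_map, n_min=1.5e21, n_max=3e23, dex=0.2,
                      per_bin=10, seed=0):
    """Pick `per_bin` random map pixels from each logarithmic column-density
    bin of width `dex` (stratified random sampling)."""
    rng = np.random.default_rng(seed)
    # Logarithmically spaced bin edges, 0.2 dex wide as in the text.
    edges = 10 ** np.arange(np.log10(n_min), np.log10(n_max) + dex, dex)
    targets = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        iy, ix = np.where((nh2_map >= lo) & (nh2_map < hi))
        if iy.size == 0:
            continue  # bin not populated in this cloud
        pick = rng.choice(iy.size, size=min(per_bin, iy.size), replace=False)
        targets += [(int(iy[k]), int(ix[k]), lo, hi) for k in pick]
    return targets

# Example with a synthetic log-normal column-density field:
fake_map = 10 ** np.random.default_rng(1).normal(21.5, 0.6, size=(400, 400))
positions = stratified_sample(fake_map)
print(len(positions), "target pixels selected")
```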
Each of these bins was sampled by choosing 10 random positions, so as a result, a total of 80 positions were chosen to sample the California cloud and 120 positions were chosen for Orion A. The location of these positions is shown in Fig. A.1 and A.2 superposed on the H\({}_{2}\) column density maps of the clouds. Coordinates of the target positions, together with values of the column density and the main line intensities are provided in Tables B.1 and B.2. ### IRAM 30m telescope observations We observed our target positions in the California and Orion A clouds using the IRAM 30 m diameter telescope during three periods in 2018 November, 2019 July, and 2020 December. The setup was identical to that used in Paper I to sample Perseus: the 83.7-115.8 GHz frequency band was observed combining two tunings of the Eight MIxer Receiver (EMIR; Carter et al., 2012), which was followed by the fast Fourier Transform Spectrometer (FTS; Klein et al., 2012) to provide a frequency resolution of 200 kHz (\(\approx 0.6\) km s\({}^{-1}\)). For the brighter Orion A cloud, we also observed several frequency windows in the range 213.7-267.7 GHz selected for containing higher-J transitions of species observed at 3mm, such as CO(2-1), HCN(3-2), and CS(5-4). These higher frequency observations were also carried out with the EMIR receiver followed by the FTS spectrometer at a frequency resolution of 200 kHz (\(\approx 0.25\) km s\({}^{-1}\) at the operating frequency). In addition, selected positions of both California and Orion A were observed in HCO\({}^{+}\)(1-0), HCN(1-0), and C\({}^{18}\)O(2-1) with high velocity resolution (0.03-0.07 km s\({}^{-1}\)) using the VESPA auto-correlator to determine line shapes and check for self-absorption features. All survey positions were observed in frequency switching mode with throws of \(\pm\)7.7 MHz and total integration times of approximately 10 minutes after combining the two linear polarizations of the receiver. Calibration of the atmospheric attenuation was carried out observing the standard sequence of sky-ambient-cold loads every 10-15 min, pointing was corrected every two hours approximately by making cross scans of bright continuum sources, and focus was corrected using bright continuum sources at the beginning and several times during the observing session. The resulting spectra were folded, averaged, and baseline subtracted using the CLASS reduction program1. The data were also converted into the main beam brightness scale using the recommended telescope beam efficiencies2, which range from 0.81 at 86 GHz to 0.59 at 230 GHz. The use of a main beam brightness scale follows standard practice in single-dish calibration, although it may represent an overcorrection when applied to the emission from the outermost parts of the clouds, which can be extended over several degrees. An alternative calibration choice for this emission would be to include the contribution from the telescope error beam, which in the IRAM 30m telescope has three components with widths up to \(2,000^{\prime\prime}\), close to the size of the Moon. The coupling efficiency of this error beam is about 0.93 at 86 GHz and 0.84 at 230 GHz (Kramer et al. 2013), which represent an increase with respect to the main beam efficiency of 15 and 42%, respectively. Using these error beam efficiencies has therefore little effect on the 3 mm wavelength data, which constitute the bulk of our survey, although it may affect the 1 mm wavelength data if an accurate calibration is required. 
Even at 1 mm, however, the use of error beam efficiencies is probably justified only for the outer parts of the cloud, and its use could potentially introduce an artificial calibration discontinuity at the transition between the compact and extended emission regimes. For this reason, we preferred to use a single calibration scale based on the main beam brightness temperature, with the caveat that the 1 mm intensities likely have an increased level of uncertainty. In this main beam brightness scale, the typical rms level of the spectra is 7-14 mK per 0.6 km s\({}^{-1}\) channel at 100 GHz.

Footnote 1: [http://www.iram.fr/IRAMFR/GILDAS](http://www.iram.fr/IRAMFR/GILDAS)

Footnote 2: [https://publicwiki.iram.es/Iram30mEfficiencies](https://publicwiki.iram.es/Iram30mEfficiencies)

As in Paper I, our analysis of the emission relies on the velocity-integrated intensity of the different molecular lines, which hereafter will be referred to as \(I\). In cases of no detection, we integrated the emission inside the velocity range at which \({}^{13}\)CO was detected, since this species was detected in all the bins and its velocity range always agreed with that of the weaker tracers in case of simultaneous detection. To simplify the velocity integration, we first re-centered all the spectra to zero velocity using the centroid of the \({}^{13}\)CO line as a reference, and then we integrated the emission in a common velocity range. For spectra with multiple hyperfine components, such as HCN(1-0) and N\({}_{2}\)H\({}^{+}\)(1-0), we added together the contribution of all the components to derive a single intensity value. The uncertainty of the integrated intensity was estimated by propagating the contribution of the rms noise in the spectrum over the window of integration, although we found that for weak and undetected lines, the dominant source of uncertainty was the presence of small-level ripples in the spectrum baseline, which often exceeded the propagation of the thermal noise by a factor of a few. Additional sources of uncertainty that likely dominate the intensity of the brightest lines are calibration errors and errors in the telescope efficiencies. Following Paper I, we modeled these contributions by adding in quadrature a 10% error to the integrated intensities. The resulting values for all the lines discussed in this paper are presented in Tables 1 and 2.

## 3 Results

### CO intensity versus H\({}_{2}\) column density

We started our analysis by comparing the emission of the different CO isotopologs in our three target clouds. Figure 1 presents velocity-integrated intensities for the \(J\)=1-0 and \(J\)=2-1 transitions of \({}^{12}\)CO, \({}^{13}\)CO, and C\({}^{18}\)O as a function of H\({}_{2}\) column density (no \(J\)=2-1 data were taken for the California cloud apart from a small number of C\({}^{18}\)O spectra). As the plots show, the intensity of each transition correlates significantly with \(N\)(H\({}_{2}\)) over the approximately two orders of magnitude spanned by this parameter (approximately from \(10^{21}\) to \(10^{23}\) cm\({}^{-2}\)). Since each observed position was selected randomly among all cloud positions in the same column density bin, the correlation indicates that in each cloud, the value of \(N\)(H\({}_{2}\)) is by itself a good predictor of the CO line intensity irrespective of the spatial location of the position.
As discussed in Paper I, our correlations between \({}^{12}\)CO(1-0) and \({}^{13}\)CO(1-0) intensities with \(N\)(H\({}_{2}\)) in Perseus match well the correlations between the same parameters derived from the full maps of Ridge et al. (2006). For the Orion A cloud, Yun et al. (2021) have recently presented intensity correlations for \({}^{13}\)CO(1-0), C\({}^{18}\)O(1-0), and several dense-gas tracers, all based on full maps of the cloud. As shown with detail in Appendix C, these full-map correlations match well the results from our sampling observations, reinforcing the idea that the correlations are not an artifact of the sampling technique, but a common property of the clouds. The plots in Fig. 1 also show that the correlation between the different intensities and the H\({}_{2}\) column density is similar in the three clouds, with the caveat that California spans a narrower range of \(N\)(H\({}_{2}\)) than Perseus and Orion A. There are indeed noticeable differences between the clouds, and they are further discussed below, but the impression from the panels of Fig. 1 is that the main trends in the intensity versus \(N\)(H\({}_{2}\)) correlation are common to the three clouds. In all three clouds, for example, the intensities of the \({}^{12}\)CO and \({}^{13}\)CO lines drop abruptly below an \(N\)(H\({}_{2}\)) of about 2\(\times 10^{21}\) cm\({}^{-2}\), which corresponds to \(A_{\rm V}\sim 2\) mag. (A similar drop may occur in C\({}^{18}\)O but is not noticeable due to the lower signal to noise of the lines.) This sharp drop likely results from the photodissociation of the CO molecules by the interstellar radiation field in the cloud outer layers, as modeled in Paper I for the case of Perseus. A similar drop has been observed in other clouds, like Taurus (Pineda et al. 2008), and is predicted by photodissociation models (van Dishoeck & Black 1988; Wolfire et al. 2010). Interior to the photodissociation boundary, the intensity of the \({}^{12}\)CO(1-0) and \({}^{13}\)CO(1-0) lines has a flatter-than-linear dependence on \(N\)(H\({}_{2}\)), an effect that likely results from the combined high optical depth and thermalization of these lines. For the thinner C\({}^{18}\)O lines, the flattening at \(N\)(H\({}_{2}\)) \(>10^{22}\) cm\({}^{-2}\) likely results from the freeze-out of the CO molecules at the high densities characteristic of the high N(H\({}_{2}\)), as discussed in detail in Paper I for the case of Perseus. ### Temperature effects and temperature-corrected CO intensities In addition to similarities, Fig. 1 shows systematic differences between the distribution of the CO intensity in the different clouds. The most noticeable one is the enhanced scatter and slightly steeper slope of the Orion A data at high column densities, an effect that is especially prominent in the J=2-1 lines of \({}^{12}\)CO and \({}^{13}\)CO. Since CO is thermalized and therefore sensitive to temperature, it is tempting to attribute the observed differences in the CO emission to differences in the distribution of the gas temperature across each cloud. These differences are known to exist in Orion A (e.g., Nagahama et al. 1998; Nishimura et al. 2015; Friesen et al. 2017), and they are likely larger than in California and Perseus due to its higher star-formation rate (Lada et al. 2010). To further investigate the effect of temperature variations in the CO emission, we needed to first estimate the gas temperature at each cloud position. 
In Paper I we used the C\({}^{18}\)O(2-1)/C\({}^{18}\)O(1-0) ratio to estimate a relatively constant gas temperature of around 11 K in Perseus, and a similar method can be used to estimate the temperature in Orion A. For California, however, our C\({}^{18}\)O(2-1) observations only covered a minority of the cloud positions due to time constraints, so we could not use the C\({}^{18}\)O line ratio to derive the temperature across the cloud. As an alternative, we explored several other methods for determining the gas temperature using data available for the three clouds. These methods include using the peak intensity of the optically thick \({}^{12}\)CO(1-0) line, using the HCN/HNC ratio as recently proposed by Hacar et al. (2020), and using an empirical scaled version of the dust temperature that has been calibrated to match the C\({}^{18}\)O line ratio predictions for the positions where it is available. Appendix D.1 describes the details of each method and evaluates its quality by comparing the results with those of the C\({}^{18}\)O(2-1)/C\({}^{18}\)O(1-0) line ratio, which we consider the best available temperature indicator. As can be seen there, the dust temperature method supplemented with the NH\({}_{3}\)-derived temperature estimates for the central part of Orion A from Friesen et al. (2017) provides the best results, and for this reason, it is the method of choice for carrying out the temperature corrections described below. Once we had an estimate of the gas temperature at each cloud position, we used its value to correct the different line intensities for temperature variations across the cloud. Calculating a temperature correction factor for each position required determining how the intensity of the emerging lines changes as a function of temperature, which is a nontrivial problem given the complex dependence of the intensity on multiple physical parameters. After exploring several options, we found that the best solution was to use a radiative transfer model, like the one presented in Paper I, and to predict the dependence of the different line intensities on temperature under realistic cloud conditions. A full description of the method is presented in Appendix D.2, while here we summarize its most relevant aspects. The Perseus model presented in Paper I assumes isothermal gas at 11 K and a simple parameterization of the physical and chemical structure of the cloud, and is able to reproduce simultaneously the emission from the different observed lines. To use this model to determine how the intensity of the different lines varies as a function of the gas temperature, we reran it using a grid of temperatures that ranges from 8 K to 100 K. Dividing each model intensity by the intensity obtained at a reference temperature of 10 K, we derived a series of correction factors \(f_{10\rm K}(T_{\rm k})\). Using these correction factors, we can convert any observed intensity into the expected intensity that the gas would emit if it were at 10 K. We define this "corrected" intensity as \[I_{10\rm K}\equiv\frac{I}{f_{10\rm K}(T_{\rm k})}, \tag{1}\] where \(I\) is the observed intensity and \(f_{10\rm K}(T_{\rm k})\) is the model correction factor. Values for these factors for each transition observed in our survey are presented in Tables 1 and 2. Using the above factors, we converted the intensities of the CO isotopologs into their expected values for gas at 10 K, and present the results as a function of \(N\)(H\({}_{2}\)) in Fig. 2.
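In practice, applying Eq. (1) reduces to interpolating the tabulated correction factors at each position's temperature. A minimal sketch (ours; the factor values below are placeholders, not the actual entries of Tables 1 and 2):

```python
import numpy as np

# Hypothetical tabulation of the correction factor f_10K(T_k) for one line,
# on the 8-100 K model grid; the numbers are purely illustrative.
T_grid = np.array([8.0, 10.0, 15.0, 20.0, 30.0, 50.0, 100.0])  # K
f_grid = np.array([0.9, 1.0, 1.4, 1.9, 2.6, 3.8, 6.0])         # illustrative

def correct_to_10K(I_obs, T_kin):
    """Apply Eq. (1): I_10K = I / f_10K(T_k), interpolating the factor."""
    return I_obs / np.interp(T_kin, T_grid, f_grid)

# Example: an intensity of 4.2 K km/s observed toward gas at 25 K
print(correct_to_10K(4.2, 25.0))
```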
Compared to the uncorrected intensities, the Orion A corrected intensities show a significant decrease in dispersion, by a factor of 1.6 for the main isotopolog lines. As a result, the rms dispersion of the Orion A CO intensities is typically 0.2 dex, which is similar to the rms estimated for Perseus and California. These two clouds have also been temperature corrected, but the effect of the correction is minimal due to their almost-constant temperatures. In addition to decreasing the dispersion, the temperature correction helps equalize the line intensities of the three clouds. For all transitions, the temperature-corrected intensities of Perseus and Orion A are practically indistinguishable, apart from the spike of points near \(2\times 10^{23}\) cm\({}^{-2}\) in Orion A caused by the wings of the Orion Kleinmann-Low (Orion-KL) outflow. The line intensities of the California cloud are also close to those of Perseus and Orion A, although they cluster at the lower end of the range spanned by the other two clouds.

Figure 1: Velocity-integrated intensities of the \(J\)=1–0 (top) and \(J\)=2–1 (bottom) lines of the main CO isotopologs as a function of H\({}_{2}\) column density. The data are color-coded by cloud: blue circles for California, green for Perseus, and red for Orion A. The dashed line in the \({}^{13}\)CO(2–1) panel indicates the slope of a linear trend. No \(J\)=2–1 data were taken for the California cloud apart from 13 C\({}^{18}\)O spectra from high column density positions. The lowest value of the intensity scale (0.1 K km s\({}^{-1}\)) approximately corresponds to the line detection limit.

This equalization of the temperature-corrected intensities strongly suggests that most differences seen between the uncorrected intensities are due to differences in the gas temperature, and that once the temperature has been equalized (even using an approximate method like ours) a common pattern of emission emerges from the data. The existence of this common emission pattern indicates that, after the H\({}_{2}\) column density, the gas kinetic temperature is the next physical parameter that controls the intensity of the CO lines, and that when it is adjusted, the intensity of the CO isotopolog lines can be predicted for each \(N\)(H\({}_{2}\)) value with a precision close to 0.2 dex. The only peculiar emission feature in Fig. 2 that remains unaffected by the temperature correction is the sudden increase in the intensity of the Orion A lines at column densities close to \(2\times 10^{23}\) cm\({}^{-2}\). As mentioned before, this intensity increase is associated with the Orion-KL molecular outflow, and is accompanied by the appearance of prominent high-velocity wings in the CO spectra. For the optically thick CO lines, the appearance of the wings increases the velocity range available for the CO emission to escape, and this effect contributes to the large increase seen in intensity. The intensity increase, however, can also be seen in the optically thin C\({}^{18}\)O emission, so it must be accompanied by a local increase in the CO abundance that likely results from the action of outflow shocks and from dust heating caused by the feedback of the massive stars in Orion-KL. The very extended N\({}_{2}\)H\({}^{+}\) emission seen toward the ISF (Hacar et al. 2018) indicates the presence of large-scale CO freeze-out, and this process is likely being reversed in the vicinity of Orion-KL by the action of the stellar feedback.
While smaller abundance enhancements occur toward individual low-mass stars (such as in the L1448 region in Perseus), the extreme effect of the Orion-KL outflow is unique to the Orion A cloud, and seems to represent a different regime of chemical abundance driven by high-mass star formation. This regime coincides with the small fraction of gas having column densities higher than 2\(\times 10^{23}\) cm\({}^{-2}\), which corresponds to a mass surface density of 0.94 g cm\({}^{-2}\), a value practically equal to the 1 g cm\({}^{-2}\) threshold for star formation proposed by Krumholz & McKee (2008). This different chemical regime is therefore likely to occur in other regions of high-mass star formation, although further observations are required to reach a firm conclusion.

### Quantifying the intensity comparison

So far, our comparison of the intensity profiles in the different clouds has been purely qualitative. To quantify it, we need a statistical tool that tests whether two distributions of points are equal. A convenient choice for this is the test proposed by Fasano & Franceschini (1987, the FF test hereafter), which generalizes the classical Kolmogorov-Smirnov test to multidimensional distributions of data. Like the Kolmogorov-Smirnov test, the FF test determines the probability (p-value) that any two 2D samples arise from the same underlying distribution. To apply the test, it is customary to choose a threshold probability \(\alpha\) (typically 0.05) so that if the p-value is lower than \(\alpha\), the hypothesis that the two samples arise from the same distribution (the null hypothesis) can be considered rejected. As often stressed in the literature (e.g., Press et al. 1992), p-values larger than \(\alpha\) do not guarantee that the two samples arise from exactly the same distribution, but only that they are not different enough to reach a definitive conclusion. Given this caveat, and the arbitrary choice of the threshold value \(\alpha\), our use of the FF test does not aim to prove or disprove mathematically that any two intensity distributions are completely equivalent, but to explore how significant any differences may be. To apply the FF test to our data, we used its implementation in the R statistical program (R Core Team 2018) as presented by Puritz et al. (2021), which provides a fast and straightforward evaluation of the test p-value. Since the FF test compares distributions in pairs, we applied the test to each pair of clouds and to each CO transition observed in both clouds. To avoid any bias caused by the different \(N\)(H\({}_{2}\)) extent of the different clouds, we restricted the comparison between clouds to the intensity values inside a common range of \(N\)(H\({}_{2}\)), which for comparisons involving the California cloud is 2\(\times\)10\({}^{21}\)-4.8\(\times\)10\({}^{22}\) cm\({}^{-2}\), and for comparisons between Perseus and Orion A is 2\(\times\)10\({}^{21}\)-1.5\(\times\)10\({}^{23}\) cm\({}^{-2}\). Finally, to study the effect of the temperature correction, we ran the FF test before and after applying the correction.

Figure 2: Temperature-corrected intensities of the \(J\)=1–0 (top) and \(J\)=2–1 (bottom) lines of the main CO isotopologs as a function of H\({}_{2}\) column density in California, Perseus, and Orion A (blue, green, and red circles, respectively). Note the lower level of scatter and the better inter-cloud agreement compared to the uncorrected intensities shown in Fig. 1. The dashed line in the \({}^{13}\)CO(2–1) panel indicates the slope of a linear trend.
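Our analysis used the R implementation presented by Puritz et al. (2021); purely to illustrate what the FF statistic computes, a minimal NumPy sketch (with a brute-force permutation p-value, and function names of our own choosing) could look as follows:

```python
import numpy as np

def ff_statistic(a, b):
    """Two-sample Fasano & Franceschini (1987) statistic for 2D data,
    e.g., (log N(H2), log I) points restricted to the common column
    density range of two clouds. a and b have shape (n, 2). For each
    data point, the fractions of each sample falling in the four
    quadrants around it are compared, and the maximum difference is
    kept; the statistics centered on each sample are then averaged."""
    def max_diff(centers):
        d = 0.0
        for x, y in centers:
            for sx in (1.0, -1.0):
                for sy in (1.0, -1.0):
                    fa = np.mean((sx * (a[:, 0] - x) > 0) & (sy * (a[:, 1] - y) > 0))
                    fb = np.mean((sx * (b[:, 0] - x) > 0) & (sy * (b[:, 1] - y) > 0))
                    d = max(d, abs(fa - fb))
        return d
    return 0.5 * (max_diff(a) + max_diff(b))

def ff_test(a, b, n_perm=999, seed=0):
    """Permutation p-value for the FF statistic."""
    rng = np.random.default_rng(seed)
    d_obs = ff_statistic(a, b)
    pooled = np.vstack([a, b])
    n = len(a)
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if ff_statistic(pooled[idx[:n]], pooled[idx[n:]]) >= d_obs:
            exceed += 1
    return d_obs, (exceed + 1) / (n_perm + 1)
```

Each statistic evaluation is O(\(n^{2}\)), so for repeated permutations the compiled R package is preferable in practice; the sketch is only meant to show the quadrant-counting logic behind the test.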
Table 1 summarizes the p-values derived from the FF test for each cloud pair and transition for which data are available. Values in bold face indicate probabilities that exceed the standard 0.05 threshold value and are therefore consistent with the two intensity distributions being equivalent. As can be seen, every pair comparison involving a C\({}^{18}\)O transition returns a p-value that exceeds the 0.05 threshold irrespective of whether the temperature correction has been applied or not, although the temperature-corrected p-values are slightly larger and therefore suggest a better match. This low sensitivity of the FF test to the temperature correction results from the small value of the correction for the optically thin C\({}^{18}\)O lines (Fig. D.3), and has the advantage of making the C\({}^{18}\)O line comparison almost independent of the temperature estimate. Since the C\({}^{18}\)O emission is optically thin, the similarity between the intensity distributions of the three clouds suggests that the clouds share a similar CO abundance distribution. Another conclusion that can be derived from the data in Table 1 is that, in contrast with the C\({}^{18}\)O results, the comparison of the CO and \({}^{13}\)CO emission from California with both Perseus and Orion A returns p-values significantly lower than the 0.05 threshold irrespective of the use of the temperature correction. If we interpret the C\({}^{18}\)O comparison as indicative that the three clouds have similar CO abundance profiles, the low p-values of the CO and \({}^{13}\)CO comparison suggest that either the excitation or the optical depth in California is different from that in Perseus and Orion A. Figure 2 suggests that the CO and \({}^{13}\)CO intensities in California are lower than in Perseus and Orion A, and an experiment of multiplying the California intensities by a factor of 1.5 brings the clouds into better agreement. This result suggests that we are either overcorrecting the California intensities or that the California intensities are intrinsically lower due to optical depth effects, such as self-absorption caused by the narrower lines. Further observations of these isotopologs are needed to reach a firm conclusion. Finally, the p-values in Table 1 confirm the success of the temperature correction in equalizing the intensity of the main CO isotopolog in Perseus and Orion A. Before applying the temperature correction, the FF test for both CO(1-0) and CO(2-1) returns p-values below the 0.05 threshold, while after the correction the p-values increase to 0.49 and 0.67 for \(J\)=1-0 and 2-1, respectively. This large change in the p-value reflects the large size of the temperature correction for the very optically thick lines of the main isotopolog, and illustrates the need to consider temperature variations when comparing the CO emission from different clouds. Somewhat surprisingly, the temperature correction does not bring the p-value of the \({}^{13}\)CO intensities over the 0.05 threshold, although it increases the p-value of the 2-1 transition by one order of magnitude. This failure may reflect an intrinsic difference between the clouds, possibly caused by different isotopic fractionation (Langer et al. 1980; Ishii et al. 2019), or simply a shortcoming of our temperature correction caused by an underestimate of the \({}^{13}\)CO optical depth. To summarize, our analysis shows that the FF test is a useful tool for quantitatively comparing the distribution of intensities between clouds.
When applied to the C\({}^{18}\)O emission, the FF test shows that the emission from the three clouds is statistically indistinguishable, and since this emission is optically thin, it likely indicates that the clouds have similar CO abundance distributions. The observed differences between the distributions of the optically thick \({}^{12}\)CO and \({}^{13}\)CO emission likely arise from differences in the internal temperature structure of the clouds.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline\hline
\multicolumn{7}{c}{No temperature correction} \\
\hline
Clouds & CO(1–0) & \({}^{13}\)CO(1–0) & C\({}^{18}\)O(1–0) & CO(2–1) & \({}^{13}\)CO(2–1) & C\({}^{18}\)O(2–1) \\
\hline
California\({}^{b}\) - Perseus & 4.9 10\({}^{-9}\) & 4.8 10\({}^{-6}\) & **5.2 10\({}^{-1}\)** & – & – & – \\
California - Orion A & 7.8 10\({}^{-10}\) & 2.9 10\({}^{-9}\) & **9.4 10\({}^{-2}\)** & – & – & – \\
Perseus - Orion A & 2.8 10\({}^{-2}\) & 1.7 10\({}^{-2}\) & **2.2 10\({}^{-1}\)** & 4.7 10\({}^{-3}\) & 3.9 10\({}^{-3}\) & **8.6 10\({}^{-2}\)** \\
\hline
\multicolumn{7}{c}{With temperature correction} \\
\hline
Clouds & CO(1–0) & \({}^{13}\)CO(1–0) & C\({}^{18}\)O(1–0) & CO(2–1) & \({}^{13}\)CO(2–1) & C\({}^{18}\)O(2–1) \\
\hline
California - Perseus & 2.5 10\({}^{-6}\) & 5.0 10\({}^{-6}\) & **5.2 10\({}^{-1}\)** & – & – & – \\
California - Orion A & 8.4 10\({}^{-9}\) & 2.0 10\({}^{-5}\) & **9.7 10\({}^{-2}\)** & – & – & – \\
Perseus - Orion A & **4.9 10\({}^{-1}\)** & 2.1 10\({}^{-2}\) & **3.1 10\({}^{-1}\)** & **6.7 10\({}^{-1}\)** & 4.4 10\({}^{-2}\) & **3.0 10\({}^{-1}\)** \\
\hline
\end{tabular}
\end{table}
Table 1: FF test p-values for a comparison between the emission of the different CO isotopologs in each pair of clouds.\({}^{a}\)

### Traditional dense-gas tracers

We now turn our attention to several molecular species that combine a high dipole moment with bright 3 mm wavelength lines: HCN, CS, HCO\({}^{+}\), and HNC. Following Paper I, we collectively refer to these species as "traditional dense-gas tracers" since they have been used in the past as indicators of high-density gas in molecular clouds (e.g., Evans 1999). Recent research shows that these tracers are less selective of dense material than initially thought (Kauffmann et al. 2017; Pety et al. 2017; Watanabe et al. 2017; Shimajiri et al. 2017; Evans et al. 2020; Tafalla et al. 2021; Dame & Lada 2023), although they are still widely used by the extragalactic community due to their bright emission lines (Gao & Solomon 2004; Usero et al. 2015; Gallagher et al. 2018; Jimenez-Donaire et al. 2019).

Since the traditional dense-gas tracers combine a large abundance and a high dipole moment, their low-\(J\) lines are expected to be optically thick over a significant fraction of any cloud. For our selected lines, this expectation is confirmed by the value of the intensity ratio between the main and a rare isotopolog (H\({}^{13}\)CN, C\({}^{34}\)S, H\({}^{13}\)CO\({}^{+}\), and HN\({}^{13}\)C). As shown in Fig. E.2, all these ratios systematically lie below the expected abundance ratio by a large margin, indicating that the main isotopolog lines must be highly saturated over most of the cloud positions. As with CO, the high optical depth of the traditional dense-gas tracer lines makes their emission potentially sensitive to temperature variations. To evaluate this effect, we present in Fig. 3 plots of the integrated intensity of HCN(1-0), CS(2-1), HNC(1-0), and HCO\({}^{+}\)(1-0) as a function of H\({}_{2}\) column density for two cases: uncorrected data (left panels) and data corrected for temperature variations using the factors described in Appendix D.2 (right panels).
As can be seen, the temperature-corrected data present a lower degree of scatter and a better agreement between the emission from the three clouds compared to the uncorrected data. We interpreted this equalizing effect of the temperature correction as an indication that most of the inter- and intra-cloud differences in the emission seen in the uncorrected data arise from variations in the gas kinetic temperature. Exceptions to this equalizing effect are the large increases in the HCN(1-0) and CS(2-1) intensities at column densities around \(2\times 10^{23}\) cm\({}^{-2}\) in the Orion A cloud. As we will see in the next section using the more abundance-sensitive rare isotopologs, these intensity increases are likely the result of abundance enhancements caused by high-mass star-formation feedback. Since our goal was to investigate how the molecular emission is generated inside the clouds, we focused on the temperature-corrected data and searched for additional features in the intensity profiles. A first feature to notice is the tight correlation between the intensity of all traditional dense-gas tracers and the H\({}_{2}\) column density. The correlation is strongest for HCN(1-0), CS(2-1), and HNC(1-0), and is characterized by values of the Pearson's \(r\) coefficient in the range 0.87-0.90 (Table 2). The HCO\({}^{+}\)(1-0) intensities, on the other hand, present a significantly higher degree of scatter, which is mostly caused by the California data lying significantly below the Orion A and Perseus data. The Pearson's \(r\) coefficient in this case is 0.80, which still indicates a strong level of correlation between the HCO\({}^{+}\) emission and the H\({}_{2}\) column density. Paper I already noted that the HCO\({}^{+}\)(1-0) intensities in Perseus presented a higher degree of scatter than the other lines, and the new California data show an even larger dispersion. As shown in the next section, the intensity of the H\({}^{13}\)CO\({}^{+}\) rare isotopolog presents little scatter and a tight correlation with \(N\)(H\({}_{2}\)), suggesting that the larger scatter of the main isotopolog lines results from optical depth effects.

Figure 3: Velocity-integrated intensities of the traditional dense-gas tracers HCN(1–0), CS(2–1), HNC(1–0), and HCO\({}^{+}\)(1–0) as a function of H\({}_{2}\) column density. _Left panels_: Original uncorrected data. _Right panels_: Data after applying the correction factors described in Appendix D.2 to simulate emission at a constant temperature of 10 K. Note the decrease in dispersion and the better agreement between clouds after the temperature correction. The data are color-coded as in previous figures: blue for California, green for Perseus, and red for Orion A. The dashed line in the top panels represents a linear trend for comparison.

An inspection of high velocity resolution HCO\({}^{+}\)(1-0) spectra taken toward selected positions reveals the presence of significant self-absorption features in some spectra that artificially truncate the emission and therefore lower the intensity. In addition to a strong correlation with \(N\)(H\({}_{2}\)), Fig. 3 shows that the temperature-corrected intensity of the traditional dense-gas tracers has an approximately linear dependence on column density (indicated by the dashed lines in the top panels).
The only significant deviation from this trend occurs near \(N\)(H\({}_{2}\)) = 2\(\times\)10\({}^{23}\) cm\({}^{-2}\), where several tracers present a sudden intensity increase coincident with the position of the Orion Nebula Cluster (ONC) and the Orion-KL outflow. The origin of this increase seems to be a combination of abundance variations in some species and a drop in the optical depth due to the wide line wings caused by the outflow (as seen in the isotopic ratios of Fig. E.2). In the next section we use the intensity distribution of the rare isotopologs to disentangle these two contributions. For the current analysis, we focused on the slope of the distribution as determined from a least-squares fit to the combined data of the three clouds (including the anomalously bright positions at high column densities). The fit results, summarized in the third column of Table 2, show that the slopes lie in a narrow range (0.97-1.16) and are therefore very close to unity, as expected from the inspection of Fig. 3. This quasi-linear slope can be followed without significant changes until the emission reaches the detection limit (\(\approx\) 0.1 K km s\({}^{-1}\)), indicating the absence of breaks until the H\({}_{2}\) column density reaches its lowest values (\(\approx 10^{21}\) cm\({}^{-2}\)). The radiative transfer model presented in Paper I showed that the gas volume density approximately follows the column density (Sect. 5.1), so the continuity of the slope in \(N\)(H\({}_{2}\)) indicates that there is no particular density at which the emission of the traditional dense-gas tracers suddenly changes behavior or disappears. We can therefore say that the tracers are sensitive to the gas density, since their intensity gradually increases with this parameter, but that they are not selective of any particular density value, since the emission depends continuously on this parameter. In addition, molecular clouds present probability distribution functions of column density that increase nonlinearly toward low column densities (Lombardi et al. 2015). As a result, the cloud-integrated intensity of any traditional dense-gas tracer is expected to be dominated by the contribution of the cloud low-density regions, as previously shown by different authors (Kauffmann et al. 2017; Pety et al. 2017; Watanabe et al. 2017; Shimajiri et al. 2017; Evans et al. 2020; Tafalla et al. 2021; Jones et al. 2023). To finish our analysis, we again used the FF test to quantify the similarities between the emission distributions in the three clouds. Table 3 presents the p-values derived using the FF test both without and with the temperature correction. In agreement with the expectation from Fig. 3, applying the temperature correction improves the agreement between the clouds, reinforcing the idea that, despite its simplicity, the correction partially compensates for differences in the gas temperature between the clouds. The better agreement of the temperature-corrected intensities also suggests that there are important similarities between the emission of the three clouds. The largest differences occur again when comparing California with Perseus and Orion A, especially for the case of HCO\({}^{+}\)(1-0).
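The slope and Pearson's \(r\) values quoted in Table 2 come from fits of this kind, done in log-log space; a minimal sketch of such a fit (the function name and the synthetic test data are ours) is:

```python
import numpy as np
from scipy import stats

def loglog_correlation(n_h2, intensity):
    """Pearson r and power-law slope of the (logarithmic) correlation
    between line intensity and H2 column density, i.e., the quantities
    listed in Table 2, from a least-squares fit of log I vs. log N(H2)."""
    x, y = np.log10(n_h2), np.log10(intensity)
    fit = stats.linregress(x, y)
    return fit.rvalue, fit.slope, fit.stderr

# Quick check with synthetic, roughly linear data (true slope = 1)
rng = np.random.default_rng(1)
n_h2 = 10**rng.uniform(21.3, 23.0, 500)                  # cm^-2
i_sim = 1e-21 * n_h2 * 10**rng.normal(0.0, 0.2, 500)     # 0.2 dex scatter
print(loglog_correlation(n_h2, i_sim))                   # r ~ 0.9, slope ~ 1
```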
For the other lines, the FF test produces a mix of p-values slightly larger or smaller than the 0.05 threshold, confirming that there are noticeable similarities between the emission of the clouds, but not necessarily to the point of making them indistinguishable. In contrast with the peculiar behavior of the California cloud, a comparison between the temperature-corrected intensities in Perseus and Orion A produces p-values that are always larger than the 0.05 threshold for the four observed lines (although only marginally for CS(2-1)). The emission of Perseus and Orion A therefore appears indistinguishable when comparing positions in their common range of H\({}_{2}\) column density.

\begin{table}
\begin{tabular}{l c c}
\hline\hline
Transition & \(r\)-Pearson & Slope \\
\hline
HCN(1–0) & 0.87 & \(1.12\pm 0.04\) \\
CS(2–1) & 0.88 & \(1.07\pm 0.03\) \\
HCO\({}^{+}\)(1–0) & 0.80 & \(0.97\pm 0.04\) \\
HNC(1–0) & 0.90 & \(1.16\pm 0.03\) \\
\hline
\end{tabular}
\end{table}
Table 2: Statistics of the (logarithmic) correlation between the temperature-corrected intensity of the traditional dense-gas tracers and the H\({}_{2}\) column density.

### Rare isotopologs of the traditional dense-gas tracers

The lines of the traditional dense-gas tracers discussed in the previous section are optically thick and therefore relatively insensitive to possible abundance variations. To investigate these variations, we had to rely on less abundant isotopologs, such as H\({}^{13}\)CN, C\({}^{34}\)S, HN\({}^{13}\)C, and H\({}^{13}\)CO\({}^{+}\), whose lines are likely to be optically thin, as suggested by the relative intensities of the hyperfine components of H\({}^{13}\)CN(1-0). Figure 4 presents the intensity distributions of the most abundant rare isotopologs of the traditional dense-gas tracers studied in the previous section. The left and middle panels show their emission as a function of H\({}_{2}\) column density, both using uncorrected intensities (left panels) and intensities corrected for temperature variations using the prescriptions detailed in Appendix D.2 (middle panels). As can be seen, the rare-isotopolog lines are weaker than the main-species lines by about one order of magnitude, and as a result, their detection in our survey is limited to H\({}_{2}\) column densities larger than approximately 10\({}^{22}\) cm\({}^{-2}\). In this detection range, the intensity of most species is very similar in the three clouds. For H\({}^{13}\)CO\({}^{+}\)(1-0), this is in contrast with the behavior of the main isotopolog, which, as we saw in Fig. 3, presents strong excursions toward low intensities in California. The lack of similar excursions in the H\({}^{13}\)CO\({}^{+}\)(1-0) intensity supports the interpretation that HCO\({}^{+}\)(1-0) suffers from optical depth effects,
most likely from the strong self-absorption known to affect the narrower California lines.

\begin{table}
\begin{tabular}{l c c c c}
\hline\hline
\multicolumn{5}{c}{No temperature correction} \\
\hline
Clouds & HCN(1–0) & CS(2–1) & HNC(1–0) & HCO\({}^{+}\)(1–0) \\
\hline
Cal-Pers & \(2.3\ 10^{-2}\) & \(3.7\ 10^{-2}\) & \(\mathbf{7.2\ 10^{-2}}\) & \(1.0\ 10^{-6}\) \\
Cal-Ori & \(4.1\ 10^{-2}\) & \(\mathbf{1.1\ 10^{-1}}\) & \(1.7\ 10^{-2}\) & \(2.2\ 10^{-4}\) \\
Pers-Ori & \(3.0\ 10^{-2}\) & \(\mathbf{3.5\ 10^{-1}}\) & \(\mathbf{2.5\ 10^{-1}}\) & \(3.0\ 10^{-2}\) \\
\hline
\multicolumn{5}{c}{With temperature correction} \\
\hline
Clouds & HCN(1–0) & CS(2–1) & HNC(1–0) & HCO\({}^{+}\)(1–0) \\
\hline
Cal-Pers & \(\mathbf{1.4\ 10^{-1}}\) & \(2.8\ 10^{-2}\) & \(1.7\ 10^{-2}\) & \(7.2\ 10^{-5}\) \\
Cal-Ori & \(3.7\ 10^{-2}\) & \(\mathbf{2.7\ 10^{-1}}\) & \(1.9\ 10^{-2}\) & \(1.9\ 10^{-5}\) \\
Pers-Ori & \(\mathbf{2.8\ 10^{-1}}\) & \(\mathbf{5.9\ 10^{-2}}\) & \(\mathbf{5.1\ 10^{-1}}\) & \(\mathbf{3.3\ 10^{-1}}\) \\
\hline
\end{tabular}
\end{table}
Table 3: FF test p-values for the traditional dense-gas tracers.

If we compare the left and middle panels of Fig. 4, we notice that applying the temperature correction decreases some of the intensity excursions seen in H\({}^{13}\)CN(1-0) and C\({}^{34}\)S(2-1), but has otherwise little effect on the data. This is due to the small value of the correction, which is typically less than a factor of 2 (Fig. D.3), and which, as a result, cannot remove the strong intensity increase seen in H\({}^{13}\)CN(1-0) and C\({}^{34}\)S(2-1) toward \(N(\mathrm{H_{2}})\approx 2\times 10^{23}\) cm\({}^{-2}\). To investigate the intensity increase of H\({}^{13}\)CN(1-0) and C\({}^{34}\)S(2-1) in Orion A, the right panels of Fig. 4 present plots of the ratio between the temperature-corrected intensity and the H\({}_{2}\) column density as a function of the gas kinetic temperature for the four rare isotopologs. Positions with gas temperatures below 20 K are present in all three clouds, while all warmer positions are located in the ISF of Orion A, especially in the vicinity of Orion-KL and the ONC. As can be seen, the ratios for HN\({}^{13}\)C and H\({}^{13}\)CO\({}^{+}\) (bottom panels) remain approximately constant, with a possible slight decrease at intermediate temperatures, despite a factor of 10 variation in the gas temperature. The ratios for H\({}^{13}\)CN and C\({}^{34}\)S (top panels), on the other hand, present significant correlations with the gas temperature for values larger than approximately 20 K. In the 20-100 K temperature range, the ratio for H\({}^{13}\)CN increases by about one order of magnitude, while the ratio for C\({}^{34}\)S increases by about a factor of 5. Since the correlations seen in the right panels of Fig. 4 involve intensities already corrected for temperature variations, their most likely origin must be differences in the abundance of the species as a function of the gas temperature. If this is the case, the approximately constant intensity/column density ratios of H\({}^{13}\)CO\({}^{+}\) and HN\({}^{13}\)C suggest that these two species are relatively immune to temperature-related abundance variations, which makes them stable tracers of the column density. The increase in the intensity/column density ratios of H\({}^{13}\)CN and C\({}^{34}\)S, on the other hand, strongly indicates that the abundance of these two species depends sensitively on temperature once this parameter exceeds a threshold value of around 20 K.
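The diagnostic plotted in the right panels of Fig. 4 is simple to reproduce; as a sketch (names and the 20 K split are illustrative, following the threshold just discussed):

```python
import numpy as np

def ratio_vs_temperature(i_10k, n_h2, t_kin, t_split=20.0):
    """Ratio between temperature-corrected intensity and N(H2) (in units
    of 1e22 cm^-2), as in the right panels of Fig. 4, plus the median
    warm/cold contrast around a fiducial 20 K threshold. A contrast well
    above unity suggests a temperature-driven abundance enhancement."""
    ratio = i_10k / (n_h2 / 1e22)
    cold = np.median(ratio[t_kin < t_split])
    warm = np.median(ratio[t_kin >= t_split])
    return ratio, warm / cold
```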
The larger increase of the H\({}^{13}\)CN intensity/column density ratio indicates that this species is more sensitive to temperature than C\({}^{34}\)S, and that its abundance is expected to vary as a result of gas temperature increases caused by the action of star formation. Our finding of an HCN abundance enhancement at high temperatures is in good agreement with previous research on the chemistry of Orion A, which has found a significant increase of the HCN/HNC ratio with the gas kinetic temperature (Goldsmith et al. 1981; Schilke et al. 1992; Hacar et al. 2020). Although not fully understood, this increase likely results from the activation of temperature-sensitive neutral-neutral reactions that alter the total abundance of the two species and their abundance ratio (Herbst et al. 2000; Graninger et al. 2014). Less work has been carried out on the possible temperature dependence of the abundance of CS. We note, however, that most positions with high CS abundance in Orion A present evidence for high-velocity wings caused by the Orion-KL outflow, and that chemical surveys of bipolar outflow gas often show significant abundance enhancements of both HCN and CS (but little or no enhancement of HNC and HCO\({}^{+}\); see Bachiller & Perez Gutierrez 1997, Tafalla et al. 2010, and Lefloch et al. 2021). Clearly more work is needed to understand the different contributions to the abundance behavior of HCN, CS, HNC, and HCO\({}^{+}\) at high temperatures. For the purposes of our study, the main conclusion is that the temperature enhancement resulting from star-formation feedback introduces a new chemical regime in the cloud gas that seems to coincide with the onset of high-mass star formation.

Figure 4: Intensity distributions of the rare isotopologs of the traditional dense-gas tracers represented in Fig. 3. _Left panels_: Original uncorrected data. _Middle panels_: Data after applying the correction factors described in Appendix D.2 to simulate emission at a constant temperature of 10 K. _Right panels_: Ratio between the temperature-corrected intensity and the H\({}_{2}\) column density (in units of \(10^{22}\) cm\({}^{-2}\)) as a function of gas temperature. All data are color-coded as in previous figures.

To conclude our analysis of the rare isotopologs, we present in Table 4 the results of the FF test for all combinations of the lines and pairs of clouds, both before and after applying the temperature correction. Since the isotopologs are not detected at H\({}_{2}\) column densities lower than about \(10^{22}\) cm\({}^{-2}\), only values larger than this threshold have been considered. As in previous FF tests, the California data have been compared with Perseus and Orion A data having column densities smaller than or equal to \(4.8\times 10^{22}\) cm\({}^{-2}\), and the Perseus data have been compared with Orion A data up to a column density of \(1.5\times 10^{23}\) cm\({}^{-2}\). As can be seen in the table, all p-values for the uncorrected comparison exceed the 0.05 threshold except for the comparison of HN\({}^{13}\)C(1-0) between California and Orion A, which returns a value of 0.04. After applying the temperature correction, most p-values remain larger than the 0.05 threshold, the exceptions being the already mentioned HN\({}^{13}\)C(1-0) and the comparison of C\({}^{34}\)S(2-1) between Perseus and Orion, whose p-value drops by one order of magnitude with respect to the uncorrected comparison. This drop seems counterintuitive in view of the plots of Fig. 4, and seems to occur because the temperature correction decreases the dispersion of the data, so any slight difference between the emission of the clouds becomes more significant.
Still, the fact that the majority of p-values exceed the 0.05 threshold is an indication that, while small differences may exist, the emission from the three clouds presents strong similarities when similar column densities are compared. The main differences between the clouds seem therefore to arise from the fact that they reach very different peak column densities.

### N\({}_{2}\)H\({}^{+}\) and the onset of molecular freeze-out

In contrast with the traditional dense-gas tracers, which freeze out onto the cold dust grains at high densities and low temperatures, N\({}_{2}\)H\({}^{+}\) remains in the gas phase, and its abundance is enhanced at high densities, likely as a result of the freezing out of CO (Kuiper et al. 1996; Caselli et al. 1999; Aikawa et al. 2001; Tafalla et al. 2002; Lee et al. 2004). Since N\({}_{2}\)H\({}^{+}\) also has a high dipole moment (Havenith et al. 1990), it has become a tracer of choice for identifying the dense and cold condensations responsible for star formation in clouds (Bergin & Tafalla 2007). To illustrate the behavior of the N\({}_{2}\)H\({}^{+}\) emission in the three clouds of our sample, we present its distribution in Fig. 5. As in previous figures, the left and middle panels show the distribution of intensity as a function of H\({}_{2}\) column density, both without temperature correction and after applying the temperature correction factors described in Appendix D.2. Since the temperature correction for N\({}_{2}\)H\({}^{+}\) never exceeds a factor of 2 (Fig. D.3), the two distributions in the figure look very similar. As can be seen from Fig. 5, the distribution of N\({}_{2}\)H\({}^{+}\)(1-0) intensity differs significantly from that of the traditional dense-gas tracers. It follows an almost linear correlation with \(N\)(H\({}_{2}\)) at high column densities, but below \(2\times 10^{22}\) cm\({}^{-2}\) it drops nonlinearly with column density, and it remains undetected at values below \(10^{22}\) cm\({}^{-2}\). This sudden change in the emission is unique to N\({}_{2}\)H\({}^{+}\)(1-0), and makes this species a highly selective tracer of the dense gas in a cloud. As Fig. 5 shows, the distribution of N\({}_{2}\)H\({}^{+}\)(1-0) emission in the three clouds is very similar independent of whether a temperature correction has been applied or not. This good agreement is confirmed by the FF test results reported in Table 5, which show that the p-value in all comparisons exceeds the 0.05 threshold both with and without temperature correction. The good match between the clouds in the nonlinear region between \(10^{22}\) and \(2\times 10^{22}\) cm\({}^{-2}\) is especially remarkable because the nonlinear change is likely caused by the onset of CO freeze-out, a process that is also sensitive to volume density because the freeze-out time depends on the collision time between molecules and dust grains (Leger 1983). The similar location of the N\({}_{2}\)H\({}^{+}\)(1-0) change in the three clouds suggests that the critical density for CO freeze-out is reached at a similar column density in all of them despite their very different peak H\({}_{2}\) column densities and star-formation rates.
Also noticeable in the distribution of N\({}_{2}\)H\({}^{+}\)(1-0) is the sharp drop toward the highest column densities reached in Orion A, at about \(10^{23}\) cm\({}^{-2}\), coinciding with the ONC and Orion BN/KL. This drop has been previously noticed by a number of authors, including Tatematsu et al. (2008), Kauffmann et al. (2017), Hacar et al. (2018), and Yun et al. (2021), and occurs at the same column densities at which HCN and CS present their abundance increase associated with the high-temperature gas. To investigate the effect of temperature on the N\({}_{2}\)H\({}^{+}\) abundance, we again calculated the ratio between the N\({}_{2}\)H\({}^{+}\) intensity and the H\({}_{2}\) column density, and present the result as a function of the gas temperature in the right panel of Fig. 5. To make this plot, we restricted the comparison to H\({}_{2}\) column densities larger than 2\(\times 10^{22}\) cm\({}^{-2}\), since this is the approximate range over which the N\({}_{2}\)H\({}^{+}\) intensity depends quasi-linearly on \(N\)(H\({}_{2}\)). As can be seen, the intensity-to-column density ratio gradually decreases in gas hotter than about 30 K, and by 100 K it has decreased by about one order of magnitude with respect to its low-temperature value. Although the N\({}_{2}\)H\({}^{+}\) drop presents more scatter than the increases in H\({}^{13}\)CN and C\({}^{34}\)S, the trend points again to a change in the gas chemical composition triggered by the feedback from star formation. A drop in the N\({}_{2}\)H\({}^{+}\) abundance is indeed expected as a result of the release of CO from the dust grains due to protostellar heating, and has been previously observed toward individual star-forming regions (Jorgensen 2004; Caselli & Ceccarelli 2012; Jorgensen et al. 2020). Observations of additional high-mass star-forming regions are necessary to confirm this interpretation.

### Line luminosity estimates from sampling observations and comparison with mapping results

So far we have only used the sampling data to study the distribution of line intensities as a function of H\({}_{2}\) column density. This type of distribution represents the most immediate output from the sampling observations, and, as we have seen, provides a detailed description of the emission properties of a cloud. The sampling data can also be used to estimate other emission properties, such as the line luminosity of a cloud, which corresponds to the integral of the line intensity over the cloud surface area. This luminosity can be calculated from the sampling data by adding, over the column density bins, the product of the mean line intensity (\(I_{n}\)) times the surface area subtended by each bin (\(A_{n}\)). In other words,

\[L=\sum_{n=1}^{m}I_{n}\ A_{n}, \tag{2}\]

where in our case \(n\) runs from 1 to \(m\) = 8, 10, and 12 for California, Perseus, and Orion A, respectively. In practice, the mean line intensity \(I_{n}\) can be estimated by averaging all the spectra observed toward a given column density bin (ten positions in our survey) and integrating the emission over the full velocity range. The surface area \(A_{n}\) subtended by the bin can be estimated from the available extinction maps (Lombardi et al. 2014; Zari et al. 2016; Lada et al. 2017) by counting the number of pixels belonging to the bin and multiplying the result by the pixel area assuming an appropriate cloud distance. Combining these two quantities, it is straightforward to use the sampling data to estimate the luminosity of any line emitted by a cloud.
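A minimal sketch of Eq. (2) in code form, assuming a column density map and bin-averaged intensities are available (function and variable names are ours):

```python
import numpy as np

def bin_areas(n_h2_map, bin_edges, pixel_area_pc2):
    """Surface area A_n subtended by each column density bin, from pixel
    counts in an extinction-derived N(H2) map. pixel_area_pc2 encodes the
    assumed cloud distance (area = (distance * pixel size)^2)."""
    vals = n_h2_map[np.isfinite(n_h2_map)]
    counts, _ = np.histogram(vals, bins=bin_edges)
    return counts * pixel_area_pc2

def line_luminosity(mean_bin_intensities, areas_pc2):
    """Eq. (2): L = sum_n I_n A_n, with I_n in K km/s (the integral of the
    bin-averaged spectrum) and A_n in pc^2, giving L in K km/s pc^2."""
    return float(np.sum(np.asarray(mean_bin_intensities) * np.asarray(areas_pc2)))
```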
To test whether the above method provides accurate estimates of the line luminosities, we searched the literature for line-luminosity determinations based on the standard mapping technique in any of our three target clouds, and we calculated equivalent luminosity estimates using our sampling data. As expected, most available luminosity determinations involve CO transitions, since this molecule has been the tracer of choice for large-scale mapping. The most complete set of luminosity determinations that can be used for comparison with our sampling estimates is that of Lewis et al. (2022), who determined \({}^{12}\)CO(1-0) luminosities for California, Perseus, and Orion A as part of their study of 12 nearby molecular clouds. These authors used data from the Milky Way survey of Dame et al. (2001), and while they do not explicitly provide the resulting luminosities, those can be trivially derived from the \(\alpha_{\rm CO}\) conversion factors and the cloud masses presented in their Table 1. Due to the large scale of the maps used by Lewis et al. (2022) (see their Figs. 12 and 13), the resulting luminosities should be compared with sampling estimates using the full extent of the clouds. Additional CO luminosities for the Orion A cloud have been presented by Nishimura et al. (2015), who estimated values for both the \(J\)=1-0 and 2-1 transitions of \({}^{12}\)CO, \({}^{13}\)CO, and C\({}^{18}\)O. These luminosities, however, were estimated after applying to the data a noise-reduction mask that assigns "zero values at the emission free pixels" (their Sect. 2.1), and as a result, this set of luminosity estimates neglects the contribution from the outer parts of the cloud, which according to our sampling estimates is significant. To properly compare luminosities estimated using sampling with the results from Nishimura et al. (2015), it is therefore critical to exclude the fraction of the cloud that these authors have masked out.

\begin{table}
\begin{tabular}{l c c c c}
\hline\hline
\multicolumn{5}{c}{No temperature correction} \\
\hline
Clouds & H\({}^{13}\)CN(1–0) & C\({}^{34}\)S(2–1) & HN\({}^{13}\)C(1–0) & H\({}^{13}\)CO\({}^{+}\)(1–0) \\
\hline
Cal-Pers & \(\mathbf{6.3\ 10^{-1}}\) & \(\mathbf{4.1\ 10^{-1}}\) & \(\mathbf{1.5\ 10^{-1}}\) & \(\mathbf{8.3\ 10^{-1}}\) \\
Cal-Ori & \(\mathbf{5.3\ 10^{-1}}\) & \(\mathbf{4.0\ 10^{-1}}\) & \(4.3\ 10^{-2}\) & \(\mathbf{8.0\ 10^{-1}}\) \\
Pers-Ori & \(\mathbf{3.3\ 10^{-1}}\) & \(\mathbf{4.6\ 10^{-1}}\) & \(\mathbf{6.6\ 10^{-1}}\) & \(\mathbf{3.3\ 10^{-1}}\) \\
\hline
\multicolumn{5}{c}{With temperature correction} \\
\hline
Clouds & H\({}^{13}\)CN(1–0) & C\({}^{34}\)S(2–1) & HN\({}^{13}\)C(1–0) & H\({}^{13}\)CO\({}^{+}\)(1–0) \\
\hline
Cal-Pers & \(\mathbf{3.1\ 10^{-1}}\) & \(\mathbf{4.5\ 10^{-1}}\) & \(\mathbf{1.2\ 10^{-1}}\) & \(\mathbf{4.8\ 10^{-1}}\) \\
Cal-Ori & \(\mathbf{1.1\ 10^{-1}}\) & \(\mathbf{7.3\ 10^{-1}}\) & \(2.7\ 10^{-3}\) & \(\mathbf{3.6\ 10^{-1}}\) \\
Pers-Ori & \(\mathbf{2.2\ 10^{-1}}\) & \(4.3\ 10^{-2}\) & \(\mathbf{4.1\ 10^{-1}}\) & \(\mathbf{3.4\ 10^{-1}}\) \\
\hline
\end{tabular}
\end{table}
Table 4: FF test p-values for the rare isotopologs of the traditional dense-gas tracers.

Figure 5: Distributions of N\({}_{2}\)H\({}^{+}\)(1–0) integrated intensity. _Left panel_: Original uncorrected data. _Middle panel_: Data after applying the correction factors described in Appendix D.2 to simulate emission at a constant temperature of 10 K.
_Right panel_: Ratio between the temperature-corrected intensity and the H\({}_{2}\) column density (in units of 10\({}^{22}\) cm\({}^{-2}\)) as a function of gas temperature. All data are color-coded as in previous figures.

\begin{table}
\begin{tabular}{l c c}
\hline\hline
Clouds & No \(T_{\rm k}\) corr. & With \(T_{\rm k}\) corr. \\
\hline
Cal-Pers & \(\mathbf{8.8\ 10^{-1}}\) & \(\mathbf{6.1\ 10^{-1}}\) \\
Cal-Ori & \(\mathbf{3.7\ 10^{-1}}\) & \(\mathbf{2.0\ 10^{-1}}\) \\
Pers-Ori & \(\mathbf{3.1\ 10^{-1}}\) & \(\mathbf{2.1\ 10^{-1}}\) \\
\hline
\end{tabular}
\end{table}
Table 5: FF test p-values for N\({}_{2}\)H\({}^{+}\)(1–0).

This information is only available for the \(J\)=2-1 lines, whose (masked) maps are publicly available (Sect. 6 in Nishimura et al. 2015), so our comparison with the sampling method has to be restricted to the \(J\)=2-1 transitions. For them, we downloaded the maps from the repository, verified the luminosity values given by Nishimura et al. (2015) in their Table 1, and estimated the amount of surface area in each of our column density bins that was left unmasked. Using these values, we applied Eq. 2 to derive sampling-based luminosities that can be properly compared with the values presented by Nishimura et al. (2015). A final set of CO luminosities based on mapping observations can be derived from the publicly available maps of the Coordinated Molecular Probe Line Extinction and Thermal Emission (COMPLETE) survey of Perseus presented by Ridge et al. (2006). These \({}^{12}\)CO(1-0) and \({}^{13}\)CO(1-0) maps contain more than \(10^{5}\) pixels each, and we spatially integrated them to estimate cloud luminosities. Since the COMPLETE survey did not cover the full extent of the Perseus cloud (see the coverage in Fig. 2 of Ridge et al. 2006), we estimated the amount of area of each bin covered by the COMPLETE maps and used these values to calculate equivalent sampling-based estimates. Apart from CO, the only species whose line luminosity has been estimated in any of our three target clouds is HCN. Dame & Lada (2023) have recently presented an estimate of the HCN(1-0) luminosity of Perseus using a map made with the Center for Astrophysics (CfA) 1.2 m telescope that attempts to cover the full extent of the cloud emission. While this map only extends to a column density equivalent to our second bin (based on its CO cutoff), these authors used their larger CO map to estimate the contribution from the remaining "weak, unobserved HCN," so we used this corrected luminosity to compare with our sampling estimate for the full cloud. Fig. 6 summarizes the comparison between the mapping and sampling luminosity estimates by plotting one quantity against the other (numerical values are given in Table F.1). The diagonal dashed line indicates the locus of equal luminosity estimates, and the parallel dotted lines delimit the region where the mapping and sampling estimates agree at the 50% level. As can be seen, the luminosity estimates span almost three orders of magnitude and systematically cluster along the equal-value dashed line, indicating an overall good agreement between the two methods used to estimate luminosities. As the figure indicates, the level of agreement between the mapping and sampling estimates seems to vary slightly between the different data sets, although there is no evidence for significant variations with the choice of cloud or tracer.
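The footprint-restricted area bookkeeping described above (used for the Nishimura et al. 2015 and COMPLETE comparisons) can be sketched as one small variation of the area calculation shown earlier, again with illustrative names:

```python
import numpy as np

def bin_areas_in_footprint(n_h2_map, footprint, bin_edges, pixel_area_pc2):
    """Surface area per column density bin restricted to the footprint
    (unmasked region) of a literature map, so that a sampling-based
    luminosity can be compared on equal terms with a masked or partially
    covered map. footprint is a boolean array of the same shape as the
    N(H2) map."""
    vals = n_h2_map[footprint & np.isfinite(n_h2_map)]
    counts, _ = np.histogram(vals, bins=bin_edges)
    return counts * pixel_area_pc2
```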
The Lewis et al. (2022) estimates (blue circles), which are the only ones that simultaneously include our three target clouds, present differences with our sampling results that are only at the level of 30% or less. This is despite the use of very different telescopes: the beam solid angle of the CfA 1.2 m telescope is 625 times larger than that of the IRAM 30 m telescope used for our sampling observations. A slightly worse level of agreement is seen in the comparison between the Orion A sampling results and the estimates of Nishimura et al. (2015), which are represented in the figure by three red circles (corresponding, in decreasing order, to the \(J\)=2-1 transitions of \({}^{12}\)CO, \({}^{13}\)CO, and C\({}^{18}\)O). The differences between the two data sets are at the 50% level, which is the largest value in all our comparisons. While we can only speculate as to why the sampling luminosities are significantly larger than the mapping ones in this case, we note that this comparison is the only one involving 1 mm wavelength data. As mentioned in Sect. 2.2, our use of the main-beam brightness scale at 1 mm may overcalibrate the IRAM 30 m data by about 40% in the case of very extended emission. The data from Nishimura et al. (2015), on the other hand, were taken with the Osaka 1.85 m telescope, which has a low sidelobe level, and whose \(T_{\rm g}^{*}\) scale is more appropriate for extended emission (Onishi et al. 2013; Nishimura et al. 2015). A difference in the calibration scheme used to reduce the two data sets may therefore be responsible for part of the disagreement between the estimated luminosities. A better level of agreement is seen in the comparison with the Perseus data of Ridge et al. (2006) (green circles), where the differences between the mapping and sampling luminosities are 15% or less. This good agreement is consistent with the results of the comparison between the distribution of line intensities as a function of \(N\)(H\({}_{2}\)) for this data set and our Perseus observations carried out in Paper I. A similar level of agreement is seen for the HCN(1-0) luminosity estimate presented by Dame & Lada (2023) for Perseus (orange circle), suggesting that the ability of the sampling method to estimate line luminosities is not limited to observations of the CO transitions. In addition to showing that sampling observations can provide accurate estimates of the line luminosities, the comparison with the mapping data shows that our choice of column density bins captures the bulk of the emission even in the case of the very extended CO lines. This is supported by the good match with the luminosities from Lewis et al. (2022). Since these authors extracted their maps from a Milky Way survey, we can safely assume that their maps were not artificially limited by mapping coverage, but by the natural extent of the clouds. Our sampling luminosities agree with those of Lewis et al. (2022) to better than 30%, so any emission coming from regions outside the lowest column density bin of our sampling must contribute negligibly to the total cloud output.

Figure 6: Comparison between line-luminosity estimates based on mapping data from the literature and equivalent estimates using sampling observations. The blue circles represent the CO(1–0) luminosities of California, Perseus, and Orion A from Lewis et al. (2022), the red circles the CO(2–1), \({}^{13}\)CO(2–1), and C\({}^{18}\)O(2–1) luminosities of Orion A from Nishimura et al. (2015), the green circles the CO(1–0) and \({}^{13}\)CO(1–0) luminosities of Perseus from Ridge et al. (2006), and the yellow circle the HCN(1–0) luminosity of Perseus from Dame & Lada (2023). The dashed line marks the locus of equal estimates, and the dotted lines correspond to differences of 50%. All estimates assume the cloud distances given in Sect. 1.
This result was expected given the sharp drops found in the CO intensity toward the lowest column density bins of all the clouds, which we interpreted as resulting from molecular photodissociation caused by the external UV radiation field. It suggests that any molecular gas outside the lowest bin in our sampling is likely to be CO-dark. To summarize, our comparison shows that the stratified random sampling method can be used to estimate line luminosities that agree with previously published values at a level typically better than about 30%. This result is reassuring in view of the large differences in the observing techniques, spatial coverage, calibration schemes, and sizes of the telescopes involved in the comparison. Our comparison also shows that to obtain accurate luminosity estimates, both techniques require special care. The sampling technique requires using high-quality extinction maps to estimate the surface area subtended by each column density bin, and sampling the emission down to column densities of around 1-2\(\times 10^{21}\) cm\({}^{-2}\). The mapping technique requires good spatial coverage of the cloud and the ability to account for the weak emission from the outer parts of the cloud, whose contribution to the luminosity is not negligible due to their large surface area and cannot be masked out. Having validated the method, we calculated luminosities for all the lines studied in the previous sections, and the results are summarized in Table 6. It should be noted that the N\({}_{2}\)H\({}^{+}\)(1-0) values represent only lower limits because the emission of this line was not detected in the outer layers of the clouds, and its potential contribution cannot be estimated with our data. Further discussion of the HCN(1-0) luminosities and their relation to the amount of dense gas in the clouds is deferred to Sect. 4.2.

## 4 Discussion

### The three main chemical regimes of a molecular cloud

The similar dependence on \(N\)(H\({}_{2}\)) and \(T_{\rm gas}\) of the line intensities in California, Perseus, and Orion A suggests that the three clouds share a similar chemical structure, and that this structure can be described using \(N\)(H\({}_{2}\)) and \(T_{\rm gas}\) as the main physical parameters. In this section we combine the results of our analysis of the intensity distributions in the three clouds with the results of the radiative transfer model of the Perseus cloud presented in Paper I to determine the main characteristics of the chemical structure of the clouds. A cartoon view of the proposed structure is presented in Fig. 7. We start our discussion with the outermost layers of the clouds. Their chemical composition can only be studied using the few species that are bright enough to be detected toward the lowest column density bins. As shown in Figs. 1 and 2, the emission of both \({}^{12}\)CO and \({}^{13}\)CO is detected at all column densities, and presents a sharp change around \(N\)(H\({}_{2}\)) = 1-2\(\times 10^{21}\) cm\({}^{-2}\), which is equivalent to a visual extinction of \(A_{\rm V}\) = 1-2 mag. Similar sharp changes of the CO emission have been seen toward the edges of other molecular clouds (Pineda et al. 2008; Ripple et al. 2013), and they most likely result from the photodissociation of CO by the interstellar radiation field (van Dishoeck & Black 1988; Wolfire et al. 2010).
As discussed in Paper I, the Perseus data also show hints that the intensity of some traditional dense-gas tracers presents an outer change similar to that of CO, and that some UV-sensitive species, such as C\({}_{2}\)H and CN, present slight outer abundance enhancements in agreement with the expectations from models of photodissociation regions (Cuadrado et al. 2015). All these effects indicate that in the three clouds, the column density value of 1-2\(\times 10^{21}\) cm\({}^{-2}\) marks the approximate boundary between the outer UV-dominated regime and the shielded cloud interior, where most molecular species seem to keep approximately constant abundances (as suggested by the radiative transfer model of Paper I). In Fig. 7, we represent this region as the outermost layer of the cloud, and label its interior as the regime of "undepleted abundances." The next significant change in the cloud chemical composition seems to occur after the column density has increased by about one order of magnitude. The plots of N\({}_{2}\)H\({}^{+}\) intensity show that this tracer experiences an order of magnitude increase between \(10^{22}\) and \(2\times 10^{22}\) cm\({}^{-2}\), after which the intensity follows \(N\)(H\({}_{2}\)) quasi-linearly (Fig. 5). As mentioned in Sect. 3.6, this sharp increase in the N\({}_{2}\)H\({}^{+}\) abundance is expected to correspond to the onset of CO freeze-out onto the dust grains, and is a consequence of the gradual increase in the gas volume density as the column density increases. The occurrence of this onset at similar column densities in California, Perseus, and Orion A points to a similar increase in the gas volume density as a function of column density in the three clouds, which suggests that the clouds share an important similarity in their internal structure. Since the freeze-out of CO is accompanied by a similar freeze-out of other carbon species such as CS, HCO\({}^{+}\), and HCN (Kuiper et al. 1996; Tafalla et al. 2006), we interpreted the column density value of \(10^{22}\) cm\({}^{-2}\) as the approximate boundary of the second regime of the cloud, which we call the molecular freeze-out regime (Fig. 7). The final chemical regime suggested by our observations has a less sharp boundary, but approximately corresponds to column densities in excess of \(10^{23}\) cm\({}^{-2}\).
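As a compact summary of the regime boundaries discussed in this section (including the feedback regime described in the next paragraph), the following schematic classifier collects the indicative thresholds quoted in the text; the cuts are approximate, not sharp physical limits:

```python
def chemical_regime(n_h2, t_kin=10.0):
    """Indicative chemical regime for a line of sight, using the
    approximate boundaries of Fig. 7 (N(H2) in cm^-2) plus the ~20 K
    feedback threshold of Sect. 3.5. A schematic, not a rigorous model."""
    if n_h2 < 2e21:
        return "UV-dominated outer layer (CO photodissociated)"
    if n_h2 >= 1e23 and t_kin > 20.0:
        return "star-formation feedback regime (hot chemistry)"
    if n_h2 >= 1e22:
        return "molecular freeze-out regime"
    return "undepleted-abundance interior"
```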
This regime is only present in the Orion A cloud, since California and Perseus do not reach such high values of \(N\)(H\({}_{2}\)), and is represented by the high-mass star-forming regions in the ISF. As discussed in Sect. 3.5, these regions present elevated gas temperatures (\(>\)30 K) and abundance enhancements in selected species such as HCN and CS, and are likely the result of high-temperature chemistry triggered by high-mass star formation. This regime therefore represents the effect of stellar feedback on the cloud gas, and its properties likely depend less systematically on \(N\)(H\({}_{2}\)) than those of the other two regimes due to the more stochastic nature of the star-formation activity. Given that our sampling of column densities larger than \(10^{23}\) cm\({}^{-2}\) is limited to Orion A, observations of other high-mass star-forming regions are still required to properly characterize this chemical regime.

\begin{table}
\begin{tabular}{l c|l c}
\hline\hline
Line & Luminosity & Line & Luminosity \\
 & (K km s\({}^{-1}\) pc\({}^{2}\)) & & (K km s\({}^{-1}\) pc\({}^{2}\)) \\
\hline
\multicolumn{4}{c}{California} \\
\hline
\({}^{12}\)CO(1–0) & 35,200 & CS(2–1) & 78 \\
\({}^{13}\)CO(1–0) & 4,220 & HNC(1–0) & 68 \\
C\({}^{18}\)O(1–0) & 226 & HCO\({}^{+}\)(1–0) & 173 \\
HCN(1–0) & 210 & N\({}_{2}\)H\({}^{+}\)(1–0) & \(\geq 8\) \\
\hline
\multicolumn{4}{c}{Perseus} \\
\hline
\({}^{12}\)CO(1–0) & 6,900 & \({}^{12}\)CO(2–1) & 5,850 \\
\({}^{13}\)CO(1–0) & 1,160 & \({}^{13}\)CO(2–1) & 938 \\
C\({}^{18}\)O(1–0) & 83 & C\({}^{18}\)O(2–1) & 61 \\
HCN(1–0) & 77 & HNC(1–0) & 30 \\
CS(2–1) & 61 & HCO\({}^{+}\)(1–0) & 82 \\
N\({}_{2}\)H\({}^{+}\)(1–0) & \(\geq 9\) & & \\
\hline
\multicolumn{4}{c}{Orion A} \\
\hline
\({}^{12}\)CO(1–0) & 28,200 & \({}^{12}\)CO(2–1) & 20,100 \\
\({}^{13}\)CO(1–0) & 4,070 & \({}^{13}\)CO(2–1) & 4,150 \\
C\({}^{18}\)O(1–0) & 234 & C\({}^{18}\)O(2–1) & 316 \\
HCN(1–0) & 524 & HNC(1–0) & 210 \\
CS(2–1) & 236 & HCO\({}^{+}\)(1–0) & 544 \\
N\({}_{2}\)H\({}^{+}\)(1–0) & \(\geq 36\) & & \\
\hline
\end{tabular}
\end{table}
Table 6: Line luminosity estimates.

### HCN as a dense-gas tracer

Due to its bright lines, HCN has become the tracer of choice to estimate the amount of dense gas in extragalactic studies of star formation. In their classical analysis of the HCN emission from a wide variety of galaxies, Gao & Solomon (2004) found a close-to-linear correlation between the far-IR luminosity and the HCN(1-0) luminosity, and interpreted it as indicating that the star-formation rate of a galaxy depends on the amount of dense gas traced by HCN. To determine the mass of dense gas associated with the HCN emission, Gao & Solomon (2004) used a combination of large velocity gradient (LVG) radiative transfer and virial analysis, and concluded that

\[M_{\rm dense}=\alpha({\rm HCN})\ L_{\rm HCN}, \tag{3}\]

with a conversion factor \(\alpha\)(HCN) of around 10 M\({}_{\odot}\) (K km s\({}^{-1}\) pc\({}^{2}\))\({}^{-1}\). Although Gao & Solomon (2004) recognized the approximate nature of their \(\alpha\)(HCN) estimate, and stated that an accurate determination required "further extensive studies," their proposed value has become a de facto standard for extragalactic studies of star formation (e.g., Usero et al. 2015; Gallagher et al. 2018; Jimenez-Donaire et al. 2019).
3.4, recent studies of the emission from galactic clouds have shown that HCN(1–0) is not a truly selective tracer of the dense gas since its cloud-scale emission is dominated by the contribution from extended and relatively low density gas (Kauffmann et al., 2017; Pety et al., 2017; Watanabe et al., 2017; Shimajiri et al., 2017; Evans et al., 2020; Tafalla et al., 2021; Dame & Lada, 2023). While this result calls into question a literal interpretation of the \(\alpha\)(HCN) derivation by Gao & Solomon (2004), the existence of a tight linear correlation between the HCN luminosity and the far-IR luminosity, a reliable tracer of the star formation rate, still indicates that HCN traces either the amount of dense gas or a gas property that is closely connected to the star-forming material. Understanding the origin of the Gao & Solomon (2004) relation therefore remains an open question whose answer requires investigating the origin of the HCN(1–0) emission from local clouds. To investigate the role of the HCN emission as a dense-gas tracer, we used our sampling data to evaluate the HCN conversion factor in the California, Perseus, and Orion A clouds. Before discussing the results, it should be noted that there are multiple definitions of the HCN conversion factor in the literature (e.g., see Table A.2 in Shimajiri et al., 2017), so it is important to first clarify how the conversion factor is defined. Broadly speaking, two types of definitions have been proposed depending on whether the factor is considered as a global cloud parameter or as a local quantity that varies with the line of sight. Each type of definition focuses on a different aspect of the relation between the HCN emission and the cloud gas, and provides a useful clue to the origin of the HCN emission. We therefore discuss them in sequence.

#### 4.2.1 The global \(\alpha_{08}\)(HCN) factor

The global \(\alpha\)(HCN) factor relates the cloud-integrated HCN(1–0) luminosity to the total mass of the dense gas, and follows the spirit of the original Gao & Solomon (2004) definition as given in Eq. 3. This factor is most relevant for extragalactic observations since they do not resolve the emission from individual clouds and therefore need to rely on cloud-integrated quantities. In their original derivation, Gao & Solomon (2004) assumed that the HCN emission was truly selective of the dense gas, and estimated that \(\alpha\)(HCN) was approximately equal to \(2.1\,\langle n({\rm H}_{2})\rangle^{1/2}/T_{\rm b}\) \(M_{\odot}\) (K km s\({}^{-1}\) pc\({}^{2}\))\({}^{-1}\), where \(\langle n({\rm H}_{2})\rangle\) is the average gas density and \(T_{\rm b}\) is the line brightness temperature. Assuming that these parameters take values of \(3\times 10^{4}\) cm\({}^{-3}\) and 35 K, respectively, Gao & Solomon (2004) derived the often-used result that \(\alpha\)(HCN) \(\approx 10\) \(M_{\odot}\) (K km s\({}^{-1}\) pc\({}^{2}\))\({}^{-1}\). Since we now know that the HCN emission is not truly selective of the dense gas (Kauffmann et al., 2017; Pety et al., 2017; Watanabe et al., 2017; Shimajiri et al., 2017; Evans et al., 2020; Tafalla et al., 2021; Dame & Lada, 2023), it has become customary to determine the value of the global \(\alpha\)(HCN) factor by defining the amount of dense gas using an independent criterion, such as the amount of mass over an extinction threshold of \(A_{\rm K}=0.8\) mag, which seems to correlate with the star-formation rate of a cloud (Lada et al., 2010; see also Evans et al., 2014).
From now on, we refer to this definition of the conversion factor as \(\alpha_{08}\)(HCN).

Figure 7: Cartoon view of the main chemical regimes identified in the observed clouds. The labels refer to representative species observed in each regime, and the arrows indicate their abundance trends. The numerical values on the horizontal scale represent approximate estimates of the column density at which the transition between the regimes occurs. This plot is inspired by Fig. 12 of Bergin & Tafalla (2007).

In Sect. 3.7 we show how the sampling data can be used to derive line luminosities, and present estimates for HCN and other species in each of our sample clouds (Table 6). To better understand how the HCN(1–0) luminosity arises from the different layers of each cloud, we now present in Fig. 8 histograms of the luminosity as a function of \(N(\rm H_{2})\) for California, Perseus, and Orion A. Each bin in the histogram corresponds to a column density bin of our sampling, and the dotted vertical lines mark the \(A_{\rm K}=0.8\) mag threshold used to define the dense gas (\(\approx 6.7\times 10^{21}\) cm\({}^{-2}\); e.g., Lombardi et al. 2014). In the California cloud, no HCN(1–0) emission was detected in the average spectrum of the lowest column density bin, so we set its line contribution to zero, while in Perseus and Orion A the HCN(1–0) emission was detected even in the lowest column density bin. As can be seen in the figure, the highest column density bins contribute the least to the HCN luminosity in each cloud. This occurs because, despite their brighter lines, these high column density bins cover a very small area, so their contribution to the total luminosity cannot compete with that of the weaker but much more extended emission from the low column density gas. Using the \(A_{\rm K}=0.8\) mag threshold as a boundary for the dense gas, our data indicate that the high density material contributes only 8% of the total HCN(1–0) luminosity in California, 37% in Perseus, and 55% in Orion A (Table 7). Our estimate for Perseus is very close to the 40% estimated by Dame & Lada (2023) from their mapping observations, and a similar range of values has been determined for clouds in the inner and outer Galaxy by Evans et al. (2020) and Patra et al. (2022), respectively. The low (and variable) contribution from the high density gas to the total HCN luminosity of California, Perseus, Orion A, and other clouds shows that if the HCN luminosity is proportional to the amount of star-forming gas in a cloud, it is not because HCN traces that gas directly, but may be because it acts as a proxy of the star-forming material (Jimenez-Donaire et al. 2023). With this caveat in mind, we determined the amount of dense gas (\(M_{08}\)) in each cloud by integrating the \(\rm H_{2}\) column density above the 0.8 mag threshold in the maps of Lombardi et al. (2014), Zari et al. (2016), and Lada et al. (2017), and assuming a solar value for the metallicity (Asplund et al. 2021). Dividing the derived dense-gas masses by the HCN(1–0) luminosities, we estimated the \(\alpha_{08}\)(HCN) factors reported in Table 7. As can be seen, our \(\alpha_{08}\)(HCN) estimates span more than a factor of 3, and range (in units of \(M_{\odot}\) (K km s\({}^{-1}\) pc\({}^{2}\))\({}^{-1}\)) from 23 in California to 73 in Perseus, with Orion A having an intermediate value of 46.
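To make the bookkeeping behind these numbers explicit, the following is a minimal sketch of the \(f_{08}\) and \(\alpha_{08}\)(HCN) arithmetic applied to binned sampling data. The per-bin luminosities are placeholder values (not the survey measurements); only the \(A_{\rm K}=0.8\) mag threshold and the Perseus dense-gas mass quoted in Table 7 are taken from the text.

```python
import numpy as np

# Illustrative alpha_08(HCN) bookkeeping. The per-bin luminosities below are
# placeholders, not the survey data; M_dense and the threshold follow the text.
N_H2_bins = np.array([2e21, 5e21, 1e22, 3e22, 1e23])   # bin centers [cm^-2]
L_HCN_bins = np.array([30.0, 19.0, 15.0, 9.0, 4.0])    # HCN(1-0) [K km s^-1 pc^2]

N_dense = 6.7e21    # N(H2) equivalent of A_K = 0.8 mag [cm^-2]
M_dense = 5600.0    # dense-gas mass above the threshold [Msun] (Perseus, Table 7)

dense = N_H2_bins >= N_dense
L_total = L_HCN_bins.sum()
f_08 = L_HCN_bins[dense].sum() / L_total   # dense-gas fraction of the luminosity
alpha_08 = M_dense / L_total               # [Msun (K km s^-1 pc^2)^-1]

print(f"f_08 = {f_08:.2f}")
print(f"alpha_08(HCN) = {alpha_08:.0f} Msun (K km s^-1 pc^2)^-1")
```

With these placeholder bins the totals mimic the Perseus entries of Table 7 (\(L_{\rm T}=77\), \(\alpha_{08}\approx 73\)), and they make clear that \(\alpha_{08}\) divides the dense-gas mass by the luminosity of the whole cloud, including the extended low column density layers.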
We note that our estimate for Perseus is very close to the 76 \(M_{\odot}\) (K km s\({}^{-1}\) pc\({}^{2}\))\({}^{-1}\) derived by Dame & Lada (2023) when these authors take into account the contribution of the HCN emission that lies outside their mapping boundary. Our value for Orion A, on the other hand, is more than a factor of 2 higher than the value estimated by Kauffmann et al. (2017), most likely due to a different estimate of the HCN luminosity, while no determination of \(\alpha_{08}\)(HCN) for California had so far been presented. Taken together, our estimates suggest that \(\alpha_{08}\)(HCN) varies between clouds, and that no single HCN conversion factor can be used as a reference value. A similar diversity of conversion factors has been found by Evans et al. (2020) and Patra et al. (2022). In our case, all values are significantly larger than the canonical 10 \(M_{\odot}\) (K km s\({}^{-1}\) pc\({}^{2}\))\({}^{-1}\) derived by Gao & Solomon (2004) and commonly used in extragalactic work (Usero et al. 2015; Gallagher et al. 2018; Jimenez-Donaire et al. 2019). While our sample of three clouds is too small to investigate in general the origin of the \(\alpha_{08}\)(HCN) variations, it already offers some clues on what cloud properties are likely to affect the value of \(\alpha_{08}\)(HCN). A first property to consider is the gas temperature, which we have seen in previous sections significantly influences the intensity of most molecular lines. A dependence of \(\alpha_{08}\)(HCN) on temperature was already predicted by Gao & Solomon (2004), who derived a \(1/T_{\rm b}\) scaling from their original estimate. While these authors used the line brightness temperature as a parameter instead of the more physical gas kinetic temperature, it is expected that the two will be related even if the lines are not fully thermalized. In this regard, it is interesting to note that Gao & Solomon (2004) assumed an HCN(1–0) brightness temperature of 35 K, which may have been correct for some of the ultra-luminous IR galaxies of their sample, but is clearly too large for the galactic clouds of our study. This can be seen from Fig. 10 in Tafalla et al. (2021), which shows that the HCN(1–0) line toward the densest parts of Perseus reaches a brightness temperature that is a full order of magnitude lower than assumed by Gao & Solomon (2004). As a result, even a simple application of the \(T_{\rm b}\) scaling law predicts a conversion factor close to 100 \(M_{\odot}\) (K km s\({}^{-1}\) pc\({}^{2}\))\({}^{-1}\), which is closer to the value we derive for Perseus. Since we now know that the Gao & Solomon (2004) estimate is too simple an approximation, it is critical to determine how \(\alpha\)(HCN) depends on temperature with real data. While this cannot be done using our limited sample of clouds, it can be investigated using the local version of \(\alpha\)(HCN), and for this reason, we defer further discussion of the temperature effects to the next subsection. Another parameter that affects the value of \(\alpha_{08}\)(HCN) and can cause variations between clouds is the contribution from the outermost cloud layers. As illustrated in Fig. 8, these layers contribute significantly to the total HCN luminosity, but since they do not contribute to the amount of dense gas, their net effect is to decrease the value of \(\alpha_{08}\)(HCN).

Figure 8: Contribution to the HCN(1–0) luminosity of the different \(\rm H_{2}\) column density bins to which each cloud has been assigned. The vertical dashed line indicates the column density corresponding to \(A_{\rm K}=0.8\) mag, proposed by Lada et al. (2010) as the boundary of the cloud dense gas. For the California cloud, no HCN(1–0) emission was detected toward the lowest column density bin, so its contribution to the luminosity has been set to zero (see the main text).

The most extreme example of
this effect is seen in the California cloud, where about 92% of the HCN luminosity emerges from regions below the \(A_{\rm K}=0.8\) mag dense gas threshold (the fraction is 63% in Perseus and 45% in Orion A; see Table 7). Not surprisingly, California presents the lowest \(\alpha_{08}\)(HCN) value of our sample (23 \(M_{\odot}\) (K km s\({}^{-1}\) pc\({}^{2}\))\({}^{-1}\)), which is about half that of Orion A and one third of that of Perseus. While California may represent an extreme example of a cloud in terms of its diffuse structure (Lada et al. 2017), Perseus and Orion A also present significant differences in terms of the contribution from their outer layers to the HCN luminosity, an effect also seen by Evans et al. (2020) toward clouds in the inner Galaxy. While it may be possible that any cloud-to-cloud differences average out in extragalactic observations that contain multiple clouds inside a telescope beam, it is likely that a multiplicity of \(\alpha_{08}\)(HCN) values is intrinsic to any cloud population. We finish our analysis of the global \(\alpha_{08}\)(HCN) factor by noting that the potentially large contribution from the outer layers to the HCN(1–0) luminosity imposes a serious difficulty in estimating HCN(1–0) luminosities, and therefore \(\alpha_{08}\)(HCN) factors, for both the mapping and the sampling techniques. Our observations of Perseus and Orion A show that there is residual HCN(1–0) emission even in the lowest column density bins of these clouds, whose extinction is in the range \(A_{\rm V}\approx 1\)–2 mag. Any observation that does not sample this low-extinction regime therefore runs the risk of underestimating the total luminosity and therefore overestimating the \(\alpha_{08}\)(HCN) factor. For the California cloud, the HCN(1–0) line was not detected in the lowest column density bin, and from the rms level of the average spectrum we estimate that the possible contribution from this bin is on the order of 10% of the total luminosity, although the figure is clearly uncertain. As shown in Sect. 3.7, our luminosity estimate of the more extended CO(1–0) emission matches the independent estimate of Lewis et al. (2022), so we have some confidence that the sampling technique can provide a meaningful estimate of the HCN luminosity even if dominated by the cloud outermost layers.

#### 4.2.2 The local \(\alpha_{\rm X}\)(HCN) factor

We now investigate the information about the HCN(1–0) emission as a dense-gas tracer that can be derived from the local definition of the conversion factor. This definition follows common practice in the analysis of the CO emission as a cloud tracer, where the term conversion factor refers indistinctly to both the ratio between H\({}_{2}\) column density and integrated intensity (represented by \(X\)), and the ratio between total cloud mass and line luminosity (represented by \(\alpha\); see Bolatto et al. 2013 for a review). Following this convention, we define \[\alpha_{\rm X}({\rm HCN})=\frac{\Sigma_{\rm mol}}{I[{\rm HCN}(1{-}0)]}, \tag{4}\] where we have assumed a solar abundance of the elements (Asplund et al.
2021) to convert the H\({}_{2}\) column density into a mass density, and used the subindex X following the convention proposed by Dame & Lada (2023) (see, e.g., Eq. 7 in Evans et al. (2022) for an equivalent definition using CO). As expected from Eq. 4, the global and local conversion factors are closely related: the global factor corresponds to the intensity-weighted cloud average of the local factor after substituting the total H\({}_{2}\) column density by the column density of dense gas. The local \(\alpha_{\rm X}\)(HCN) factor has been used by Shimajiri et al. (2017) to investigate the relation between the HCN emission and the gas mass in the Aquila, Ophiuchus, and Orion B clouds (see their Fig. 6). Our sampling observations provide a natural data set to carry out a similar investigation in California, Perseus, and Orion A since the ratio between the H\({}_{2}\) column density and the HCN(1–0) intensity can be directly determined from the sampling data. The left panels of Fig. 9 show the distribution of \(\alpha_{\rm X}\)(HCN) as a function of \(N\)(H\({}_{2}\)) for California, Perseus, and Orion A when no temperature correction has been applied to the HCN emission. As can be seen, the \(\alpha_{\rm X}\)(HCN) parameter remains approximately constant in California and Perseus, as expected from the close-to-linear dependence of the HCN(1–0) intensity on \(N\)(H\({}_{2}\)) found in Sect. 3.4. In addition, it presents relatively low levels of dispersion of 0.27 and 0.21 dex, respectively. The \(\alpha_{\rm X}\)(HCN) parameter in Orion A, on the other hand, remains approximately constant for \(N\)(H\({}_{2}\)) lower than \(10^{23}\) cm\({}^{-2}\) but drops significantly at higher column densities and presents a higher level of scatter of about 0.53 dex. Overall, our \(\alpha_{\rm X}\)(HCN) distributions look similar to those found by Shimajiri et al. (2017) in Aquila, Ophiuchus, and Orion B, which show a close-to-constant dependence on extinction. The Perseus distribution, in addition, presents a similar average to that determined by Dame & Lada (2023) for this cloud (\(\approx 215\) \(M_{\odot}\) (K km s\({}^{-1}\) pc\({}^{2}\))\({}^{-1}\)), as would have been expected from their similar estimate of the \(X\) factor. To investigate the origin of the higher dispersion of \(\alpha_{\rm X}\)(HCN) in Orion A, we looked at the dependence of the HCN emission on temperature. Section 3.4 and previous studies of the HCN(1–0) emission in Orion A have found a systematic dependence of the emission on temperature (Goldsmith et al. 1981; Schilke et al. 1992; Graninger et al. 2014), and Hacar et al. (2020) estimated that the ratio between HCN(1–0) intensity and visual extinction depends quadratically on the gas kinetic temperature up to 40 K (these authors excluded from their analysis the hottest vicinity of the ONC). Figure 10 presents our estimate of the dependence of the \(\alpha_{\rm X}\)(HCN) factor on gas temperature for the combined data from California, Perseus, and Orion A.
As can be seen, at low temperatures (\(\sim 10\) K) the data from the three clouds show a significant overlap, while at higher temperatures (\(>15\) K), which are only represented by the Orion A data, \(\alpha_{\rm X}\)(HCN) systematically decreases with temperature by almost two orders of magnitude.

Table 7: Dense gas masses, HCN(1–0) luminosities, and global HCN conversion factors.

| Cloud | \(M_{08}\) [\(M_{\odot}\)] | \(L_{\rm T}\)[HCN(1–0)] [K km s\({}^{-1}\) pc\({}^{2}\)] | \(L_{08}\)[HCN(1–0)] [K km s\({}^{-1}\) pc\({}^{2}\)] | \(f_{08}\) | \(\alpha_{08}\)(HCN) [\(M_{\odot}\) (K km s\({}^{-1}\) pc\({}^{2}\))\({}^{-1}\)] |
| --- | --- | --- | --- | --- | --- |
| California | 4,800 | 210 | 16 | 0.08 | 23 |
| Perseus | 5,600 | 77 | 28 | 0.37 | 73 |
| Orion A | 24,000 | 524 | 290 | 0.55 | 46 |

From a linear fit to the log-log plot we determine that \(\alpha_{\rm X}\)(HCN) depends on the gas kinetic temperature as \[\alpha_{\rm X}({\rm HCN})\ [M_{\odot}\ ({\rm K\ km\ s^{-1}\ pc^{2}})^{-1}]=(235\pm 41)\left(\frac{T_{\rm gas}}{10\ {\rm K}}\right)^{-1.6\pm 0.1}. \tag{5}\] This dependence of \(\alpha_{\rm X}\) on the gas kinetic temperature is steeper than the \(T_{\rm b}^{-1}\) predicted by Gao & Solomon (2004) (assuming a close relation between the brightness and kinetic temperatures). This likely results from a combination of our more realistic dependence of the line intensity on the gas temperature (Appendix D.2) and the high sensitivity of the HCN abundance to gas temperature found in Sect. 3.5. The systematic correlation of \(\alpha_{\rm X}\)(HCN) with gas temperature suggests that the peculiar behavior of the Orion A data in Fig. 9 results from the temperature variations in the cloud. To test this idea, we multiplied the \(\alpha_{\rm X}\)(HCN) factor by (\(T_{\rm k}\)/10 K)\({}^{1.6}\), which is the inverse of the power law derived from the fit, and present the result in the right panels of Fig. 9. As can be seen, the temperature-corrected conversion factor in Orion A shows an approximately constant dependence on \(N\)(H\({}_{2}\)) and has a similar dispersion to that measured in California and Perseus (0.2–0.3 dex). A slight drop of \(\alpha_{\rm X}\)(HCN) at low \(N\)(H\({}_{2}\)) in the temperature-corrected values of Perseus and Orion A likely results from small errors in the temperature at low column densities when using the dust temperature as a reference (Sect. 3.2). The similar distribution of the corrected \(\alpha_{\rm X}\)(HCN) factors in the three clouds suggests that gas temperature differences were responsible for the observed differences in the uncorrected factors. This interpretation differs from that of Shimajiri et al. (2017), who also found differences in the conversion factor between their clouds, but associated them with variations in the local far-UV (FUV) radiation field, which they estimated to range from \(G_{0}=1\) to more than 4000. Interpreting the \(\alpha_{\rm X}\)(HCN) differences as a result of the FUV radiation, however, presents several problems. First of all, it is unlikely that the HCN-emitting gas is directly exposed to the high levels of FUV radiation measured toward the exterior of the clouds since UV radiation quickly photodissociates the HCN molecules (Aguado et al. 2017).
In addition, a dependence of \(\alpha_{\rm X}\)(HCN) on the external FUV radiation field seems to contradict the observed constant behavior of this factor as a function of column density, since the FUV radiation is expected to be strongly attenuated by the cloud internal extinction. It is therefore more likely that the cloud-to-cloud variations found by Shimajiri et al. (2017) also arise from differences in the cloud gas temperature. This is further supported by the fact that Shimajiri et al. (2017) used the dust temperature to infer the \(G_{0}\) factor, so there is a possible ambiguity in interpreting the effect of the two parameters.

Figure 9: Local \(\alpha_{\rm X}\)(HCN) factor as a function of H\({}_{2}\) column density for the California, Perseus, and Orion A clouds. _Left:_ \(\alpha_{\rm X}\)(HCN) factor calculated using Eq. 4. _Right:_ \(\alpha_{\rm X}\)(HCN) factor multiplied by the temperature factor (\(T_{\rm k}\)/10 K)\({}^{1.6}\) to compensate for the dependence determined in Eq. 5. Note the reduced dispersion of the Orion A data. In all panels, the units of \(\alpha_{\rm X}\)(HCN) are in M\({}_{\odot}\) (K km s\({}^{-1}\) pc\({}^{2}\))\({}^{-1}\), and the dashed lines represent the mean value of the temperature-corrected factor for each cloud.

Figure 10: \(\alpha_{\rm X}\)(HCN) factor as a function of gas temperature in California, Perseus, and Orion A. The solid line represents the fit described in the text. The data are color-coded as in previous figures.

If the gas temperature has the strong effect on the local \(\alpha_{\rm X}\)(HCN) factor suggested by Eq. 5, a similar dependence on temperature is expected to affect the global factor, which we have seen represents an intensity-weighted average of \(\alpha_{\rm X}\)(HCN) over a whole cloud. This temperature dependence may be difficult to observe in nearby galactic clouds because the contribution from relatively warm regions will likely be overwhelmed by the contribution from the more extended colder gas that we have seen dominates the HCN emission. In extragalactic observations, on the other hand, it may be possible to encounter more extreme conditions where the warm gas dominates the global HCN emission. Indeed, observations of luminous and ultra-luminous infrared galaxies suggest that these systems have significantly lower conversion factors than normal galaxies (Garcia-Burillo et al. 2012), as expected from their elevated gas temperatures. This result should serve as a warning that no single value of the conversion factor, either local or global, fits all cases, and that care must be exercised when applying the same conversion factor to inhomogeneous samples of clouds or galaxies.

### The HCN/CO ratio and its correlation with the H\({}_{2}\) column density

Another parameter commonly used to interpret extragalactic observations is the HCN/CO intensity ratio. Assuming that the HCN intensity is proportional to the dense gas column density, and that the CO intensity is proportional to the total gas column density, the HCN/CO intensity ratio is expected to measure the fraction of dense gas (Gao & Solomon 2004; Usero et al. 2015; Leroy et al. 2017). Recent work by Gallagher et al. (2018) and Jimenez-Donaire et al. (2019) has found a significant correlation between the HCN/CO ratio and the molecular column density in (normal) galaxies averaged over 1–2 kpc spatial scales. Gallagher et al.
(2018) have interpreted this correlation as indicating that both the HCN/CO ratio and the gas column density are sensitive tracers of the density distribution in molecular clouds. Our line survey provides estimates of the HCN and CO intensity together with the \(N\)(H\({}_{2}\)) column density, so we can use the data to investigate the correlation between the HCN/CO ratio and \(N\)(H\({}_{2}\)) in galactic clouds. Figure 11 presents the HCN(1–0)/CO(1–0) intensity ratio (HCN/CO hereafter) for the three clouds of our survey as a function of both the H\({}_{2}\) column density (lower x-axis) and the molecular surface density commonly used in extragalactic work (upper x-axis).4

Footnote 4: \(\Sigma_{\rm mol}\) [M\({}_{\odot}\) pc\({}^{-2}\)] \(=2.25\times 10^{-20}\) \(N\)(H\({}_{2}\)) [cm\({}^{-2}\)] assuming a standard solar abundance (Asplund et al. 2021).

No temperature correction has been applied to either the HCN or CO data, so the results can be directly compared with extragalactic observations. Applying a temperature correction to the two lines, however, only has a minor effect on the intensity ratio since the two corrections almost cancel out, and the resulting scatter plot is practically indistinguishable from that of Fig. 11. As can be seen in the figure, the HCN/CO ratio correlates strongly with the H\({}_{2}\) column density over the more than two orders of magnitude covered by this parameter. In addition, the correlation seems to be the same in the three clouds, an impression confirmed by FF tests of the three possible cloud pairs, which return p-values between 0.28 and 0.76. Combining the data from the three clouds, we estimate a Pearson's coefficient of 0.84 (in log-log scale), and using a least squares fit we derive a relation of the form \[\log_{10}\left(\frac{I_{\rm HCN}}{I_{\rm CO}}\right)=(-3.3\pm 0.8)+(0.71\pm 0.03)\ \log_{10}\left(\frac{\Sigma_{\rm mol}}{\rm M_{\odot}\ pc^{-2}}\right), \tag{6}\] where the intensities refer to the \(J\)=1–0 transition of both HCN and CO. This fit is represented in the figure by a black solid line. The correlation between HCN/CO and \(N\)(H\({}_{2}\)) found in our three clouds is remarkably similar to that seen at kiloparsec scales in external galaxies by Gallagher et al. (2018) and Jimenez-Donaire et al. (2019). These extragalactic observations cover similar ranges of \(N\)(H\({}_{2}\)) as our cloud data (\(10^{2}\)–\(10^{3}\) M\({}_{\odot}\) pc\({}^{-2}\) for Gallagher et al. 2018 and 10–300 M\({}_{\odot}\) pc\({}^{-2}\) for Jimenez-Donaire et al. 2019, although these values include the contribution of the filling factor of the clouds), and are illustrated in Fig. 11 using red and blue dotted lines. As the plot shows, our galactic fit, with a slope of \(0.71\pm 0.03\), is intermediate between the fits obtained by Gallagher et al. (2018) (slope \(0.81\pm 0.09\)) and Jimenez-Donaire et al. (2019) (slope \(0.5\pm 0.1\)), who used different assumptions to estimate the molecular surface density. Further work is needed to better connect the galactic and extragalactic results, both in terms of the disparate spatial scales that they sample (subparsec and 1–2 kpc, respectively) and the different methods used to derive the H\({}_{2}\) column density, which in the extragalactic case relies on indirect uses of the CO emission (Gallagher et al. 2018; Jimenez-Donaire et al. 2019).
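As a concrete illustration of the Eq. 6 procedure, the short sketch below performs the same log-log least-squares regression on synthetic stand-in data; the underlying power law and the 0.2 dex scatter are assumptions chosen for illustration, not the survey points themselves.

```python
import numpy as np

# Minimal re-creation of the Eq. 6 fit on synthetic stand-in data (the power
# law and the 0.2 dex scatter are assumptions; the survey points themselves
# are not reproduced here).
rng = np.random.default_rng(0)
N_H2 = 10**rng.uniform(21, 23, 200)            # sampled column densities [cm^-2]
sigma_mol = 2.25e-20 * N_H2                    # [Msun pc^-2] (footnote 4)
ratio = 10**(-3.3) * sigma_mol**0.71           # assumed underlying power law
ratio *= 10**rng.normal(0.0, 0.2, N_H2.size)   # add log-normal scatter

# Least-squares fit in log-log space: log10(I_HCN/I_CO) = a + b log10(Sigma_mol)
b, a = np.polyfit(np.log10(sigma_mol), np.log10(ratio), 1)
print(f"log10(I_HCN/I_CO) = {a:.2f} + {b:.2f} log10(Sigma_mol / Msun pc^-2)")
```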
Figure 11: HCN(1–0)/CO(1–0) intensity ratio as a function of H\({}_{2}\) column density for California, Perseus, and Orion A, color-coded as in Fig. 1. The solid and dashed black lines represent the two fits discussed in the text. The dashed red and blue lines represent, respectively, the fits derived by Gallagher et al. (2018) and Jiménez-Donaire et al. (2019) from extragalactic data.

Assuming that the different estimates are truly comparable, the most natural interpretation of the similar behavior of the HCN/CO ratio is that the extragalactic correlation reflects the internal properties of the individual unresolved clouds. To explore how these properties could give rise to the HCN/CO versus \(N\)(H\({}_{2}\)) correlation seen in our data set, we used the results from the radiative transfer model presented in Paper I to reproduce the Perseus data. The similar behavior of the HCN/CO ratio in the three clouds of our sample suggests that the excitation mechanism responsible for the Perseus correlation is likely also responsible for the California and Orion A correlations. According to Paper I, the intensity of multiple transitions, including CO(1–0) and HCN(1–0), can be reproduced with a model that assumes that the gas physical and chemical properties depend on \(N\)(H\({}_{2}\)) (see Table 3 in Paper I). Of particular interest for the HCN/CO correlation is the relation between the volume density and the column density, which was found to have the form \(n({\rm H}_{2})=2\times 10^{4}\ {\rm cm^{-3}}\ (N({\rm H}_{2})/10^{22}\ {\rm cm^{-2}})^{0.75}\). Using this relation, the radiative transfer model of Paper I showed that both CO(1–0) and HCN(1–0) must be optically thick over most of the cloud, and that while CO(1–0) is thermalized at all column densities, HCN(1–0) remains sub-thermal with an excitation temperature strongly dependent on \(N\)(H\({}_{2}\)) (see Fig. 13 in Paper I). This excitation behavior makes the intensity of HCN(1–0) rapidly increase with \(N\)(H\({}_{2}\)) (through its density dependence), while the intensity of CO(1–0) stays approximately constant. As a result, the HCN/CO ratio systematically increases with \(N\)(H\({}_{2}\)), in agreement with the observed behavior. To interpret the HCN/CO ratio as an indicator of the gas volume density, we next combined the relation between volume and column densities derived in Paper I with our fit of the HCN/CO data. As seen in Eq. 6, the HCN/CO ratio depends on \(N\)(H\({}_{2}\)) with a slope of \(0.71\pm 0.03\), which differs by only 1.3 \(\sigma\) from the 0.75 value determined for the density relation with \(N\)(H\({}_{2}\)). Taking this similarity of values as an indication of an approximate equality, we re-fitted the HCN/CO–\(N\)(H\({}_{2}\)) correlation using a fixed slope of 0.75. The result is represented in Fig. 11 with a dashed line, and is practically indistinguishable from the original best fit inside the range of values covered by the observations. Using this new fit (which has an intercept of \(-3.4\)), we derive a relation between gas density and the HCN/CO ratio of the form \[n({\rm H}_{2})=8.7\times 10^{5}\ {\rm cm^{-3}}\ \frac{I_{\rm HCN}}{I_{\rm CO}}. \tag{7}\] This volume density should be interpreted as a mean value along the line of sight where the HCN/CO ratio has been measured, and since its derivation uses the radiative transfer model of Paper I, the mean has been weighted by the emission of CS and HCN.
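The algebra behind Eq. 7 can be checked numerically: combining the Paper I density law with the fixed-slope fit should recover the \(8.7\times 10^{5}\) cm\({}^{-3}\) coefficient. The following minimal sketch does exactly that; the function names are ours, introduced only for illustration.

```python
import numpy as np

# Numerical check of Eq. 7: combining the Paper I density law with the
# fixed-slope (0.75) fit of the HCN/CO ratio should recover the 8.7e5 cm^-3
# coefficient. Function names are ours, introduced only for illustration.
def ratio_from_column(N_H2):
    """HCN/CO ratio from the fixed-slope fit (intercept -3.4, slope 0.75)."""
    sigma_mol = 2.25e-20 * N_H2          # [Msun pc^-2]
    return 10**(-3.4) * sigma_mol**0.75

def density_from_ratio(r):
    """Mean line-of-sight H2 volume density [cm^-3] from Eq. 7."""
    return 8.7e5 * r

N_H2 = 1e22                              # [cm^-2]
n_paper1 = 2e4 * (N_H2 / 1e22)**0.75     # Paper I density law [cm^-3]
n_eq7 = density_from_ratio(ratio_from_column(N_H2))
print(f"n(H2) at N(H2)=1e22: Paper I law = {n_paper1:.2e}, Eq. 7 = {n_eq7:.2e} cm^-3")
```

The two estimates agree at the percent level, which simply reflects that Eq. 7 was constructed from these two relations.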
No meaningful error bar could be estimated for this density due to the difficulty in quantifying the model assumptions, but given the quality of the model fits, it is likely that the uncertainty lies within a factor of 2. It is probably premature to extrapolate our derived relation between volume density and HCN/CO ratio to extragalactic data, although the strong similarity between the galactic and extragalactic correlations of HCN/CO with \(N\)(H\({}_{2}\)) suggests that this is likely to be the case. If so, the HCN/CO ratio should be thought of not so much as an indicator of the dense gas fraction but as an estimator of the gas volume density averaged over the line of sight and the observing beam. Such an estimator presents the advantage over single-line tracers that it is less sensitive to gas temperature variations, given the approximate cancellation between the dependences of HCN and CO. Further characterization of the HCN and CO emission from galactic clouds is needed to study the general properties of the line ratio and to better calibrate its dependence on \(N\)(H\({}_{2}\)). For this investigation, the stratified random sampling technique presented here appears to be a suitable tool.

## 5 Conclusions

We sampled the 3 mm wavelength emission of the California and Orion A clouds using the IRAM 30 m radio telescope. We selected a set of target positions using the stratified random sampling technique previously used in Tafalla et al. (2021) to study the emission from the Perseus cloud. This technique divides the cloud into multiple bins of H\({}_{2}\) column density and randomly selects a number of cloud positions in each bin to carry out the molecular-line observations. We combined the new results from California and Orion A with the Perseus cloud data to investigate the main gas parameters that control the line emission of the CO isotopologues and the main dense-gas tracers, and to compare the emission of these species in three clouds whose star-formation rates span more than one order of magnitude. The main results from our study are the following:

1. In the three target clouds, the intensity of the studied molecular lines correlates strongly with the value of the H\({}_{2}\) column density even if the positions are separated by distances of tens of parsecs. This strong correlation with \(N\)(H\({}_{2}\)) shows that this parameter is the main predictor of the line intensity and supports its use in the stratified random sampling technique.

2. The observations of Orion A, a cloud that presents gas temperature variations across its components, show that the intensity of most molecular lines also depends on the gas temperature. We used a cloud radiative transfer model to determine the expected change in the intensity of all target lines as a function of gas temperature, and used the model results to simulate the emission expected if our target clouds were isothermal. The temperature-corrected intensities present a lower level of dispersion and a better agreement between the three target clouds than the uncorrected intensities.

3. We find that the temperature-corrected intensity of the CO lines has a flatter-than-linear dependence on \(N\)(H\({}_{2}\)), while the intensity of traditional dense-gas tracers such as HCN(1–0), CS(2–1), HCO\({}^{+}\)(1–0), and HNC(1–0) scales almost linearly with \(N\)(H\({}_{2}\)) over the two orders of magnitude covered by the observations (\(\approx 10^{21}\)–\(10^{23}\) cm\({}^{-2}\)).
4. In contrast with the traditional dense-gas tracers, the intensity of N\({}_{2}\)H\({}^{+}\)(1–0) does not correlate linearly with \(N\)(H\({}_{2}\)) over the full column density range. It correlates almost linearly at high column densities, but it drops by more than one order of magnitude between \(2\times 10^{22}\) cm\({}^{-2}\) and \(10^{22}\) cm\({}^{-2}\), and remains undetected at lower column densities. This behavior, which is similar in the three clouds, makes N\({}_{2}\)H\({}^{+}\) the only selective tracer of the cloud cold dense component.

5. In addition to affecting the molecular excitation, the gas kinetic temperature changes the abundance of some species. Using the intensity distribution of rare isotopologues, we find that the abundance of HCN and CS is systematically enhanced with increasing gas temperature, while the abundance of HCO\({}^{+}\) and HNC remains approximately constant between 10 and 100 K. In contrast with the classical dense-gas tracers, N\({}_{2}\)H\({}^{+}\) decreases in abundance with temperature, most likely due to the release of CO from the grains as the temperature increases.

6. The stratified random sampling data can also be used to estimate cloud-integrated luminosities of the different molecular lines. We compared our estimated luminosities with literature values (mostly from CO isotopologues) and find an agreement typically at the 25% level, which is remarkable because the comparison involves very different telescopes and calibration schemes. We used our sampling data to estimate luminosities of the main survey lines in California, Perseus, and Orion A.

7. The systematic emission patterns found in our survey suggest that the target molecular clouds share a common chemical structure. This structure is characterized by abundance variations as a function of column density and can be approximately understood as consisting of three main chemical regimes. Between the photodissociation boundary (\(\sim 10^{21}\) cm\({}^{-2}\)) and \(\sim 10^{22}\) cm\({}^{-2}\), most species maintain a close-to-constant abundance that is likely determined by gas-phase reactions. In this regime, N\({}_{2}\)H\({}^{+}\) remains undetected due to the high CO abundance in the gas phase. From \(\sim 10^{22}\) cm\({}^{-2}\) to \(\sim 10^{23}\) cm\({}^{-2}\), the abundance of most species decreases due to freeze-out onto grains, while N\({}_{2}\)H\({}^{+}\) is enhanced as a result of a decrease in the gas-phase CO abundance. At elevated temperatures and column densities higher than \(\sim 10^{23}\) cm\({}^{-2}\), which in our sample are only reached toward Orion A, high-mass star-formation feedback disturbs the gas chemical composition, enhancing species such as HCN and CS and destroying N\({}_{2}\)H\({}^{+}\).

8. We used our survey data to study the relation between the HCN(1–0) emission and the cloud gas mass. We explored the clues provided by two possible definitions of the HCN conversion factor previously used in the literature. The "global" definition compares the cloud-integrated line luminosity with the amount of "dense" gas and shows variations of more than a factor of 3 between California, Perseus, and Orion A. These variations mostly arise from the different contribution to the HCN emission of the external layers of the cloud, which tend to dominate the luminosity due to their large surface area. A "local" definition of the \(\alpha\)(HCN) factor compares the HCN(1–0) intensity with the total H\({}_{2}\) column density and can be measured at each cloud position.
This factor displays a strong dependence on the gas kinetic temperature, which seems to result from a combination of excitation and abundance effects. A dependence of the \(\alpha\)(HCN) factor on the gas temperature may help explain the diversity of values seen by galactic and extragalactic observers.

9. We also used our survey data to study the correlation between the HCN(1–0)/CO(1–0) intensity ratio and the gas column density, which has recently been studied using extragalactic observations. Our data show a similar relation in terms of both the range of parameters and slope. Using the results of a cloud radiative transfer model, we show that the HCN(1–0)/CO(1–0) ratio can be used to estimate the mean gas volume density, and that the correlation with \(N\)(H\({}_{2}\)) observed in our clouds results from the gradual increase in the HCN(1–0) sub-thermal excitation with H\({}_{2}\) column density.

The above results illustrate the great potential of the stratified sampling technique to characterize the molecular emission from star-forming clouds. Given its relatively low cost in terms of telescope observing time, it should be possible to expand its application to a larger number of clouds and obtain a more complete view of their intrinsic diversity that can further our understanding of star formation and can serve as a template to analyze extragalactic observations.

###### Acknowledgements.

We thank our referee, Neal Evans, for a thorough and critical review of the manuscript that helped us improve the analysis and the presentation, and for information on the results of Yun et al. (2021). We thank Toshikazu Onishi for valuable information on the calibration scale of the Osaka 1.85m telescope. MT and AU acknowledge partial support from project PID2019-108765GB-I00 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe". This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 851433). This work is based on IRAM 30 m telescope observations carried out under project numbers 034-17, 104-17, 033-18, 008-19, and 116-20. IRAM is supported by INSU/CNRS (France), MPG (Germany), and IGN (Spain). This research has made use of NASA's Astrophysics Data System Bibliographic Services and the SIMBAD database, operated at CDS, Strasbourg, France.
2309.09323
Answering Causal Queries at Layer 3 with DiscoSCMs-Embracing Heterogeneity
In the realm of causal inference, Potential Outcomes (PO) and Structural Causal Models (SCM) are recognized as the principal frameworks.However, when it comes to Layer 3 valuations -- counterfactual queries deeply entwined with individual-level semantics -- both frameworks encounter limitations due to the degenerative issues brought forth by the consistency rule. This paper advocates for the Distribution-consistency Structural Causal Models (DiscoSCM) framework as a pioneering approach to counterfactual inference, skillfully integrating the strengths of both PO and SCM. The DiscoSCM framework distinctively incorporates a unit selection variable $U$ and embraces the concept of uncontrollable exogenous noise realization. Through personalized incentive scenarios, we demonstrate the inadequacies of PO and SCM frameworks in representing the probability of a user being a complier (a Layer 3 event) without degeneration, an issue adeptly resolved by adopting the assumption of independent counterfactual noises within DiscoSCM. This innovative assumption broadens the foundational counterfactual theory, facilitating the extension of numerous theoretical results regarding the probability of causation to an individual granularity level and leading to a comprehensive set of theories on heterogeneous counterfactual bounds. Ultimately, our paper posits that if one acknowledges and wishes to leverage the ubiquitous heterogeneity, understanding causality as invariance across heterogeneous units, then DiscoSCM stands as a significant advancement in the methodology of counterfactual inference.
Heyang Gong
2023-09-17T17:01:05Z
http://arxiv.org/abs/2309.09323v3
# Answering Layer 3 queries with DiscoSCMs

###### Abstract

Addressing causal queries across the Pearl Causal Hierarchy (PCH) (i.e., associational, interventional, and counterfactual), which is formalized as Layer Valuations, is a central task in contemporary causal inference research. Counterfactual questions, in particular, pose a significant challenge as they often necessitate complete knowledge of the structural equations. This paper identifies the degeneracy problem caused by the consistency rule. To tackle this, the _Distribution-consistency Structural Causal Models_ (DiscoSCMs) are introduced, which extend both the structural causal model (SCM) and the potential outcome frameworks. The correlation pattern of potential outcomes in personalized incentive scenarios, described by \(P(y_{x},y^{\prime}_{x^{\prime}})\), is used as a case study for elucidation. Although counterfactuals are no longer degenerate, they remain indeterminate. As a result, the condition of independent potential noise is incorporated into DiscoSCM. It is found that by adeptly using homogeneity, counterfactuals can be identified. Furthermore, more refined results are achieved in the unit selection problem scenario. In simpler terms, when modeling counterfactuals, one should contemplate: "Consider a person with average ability who takes a test and, due to good luck, achieves an exceptionally high score. If this person were to retake the test under identical external conditions, what score will they obtain? An exceptionally high score or an average score?" If your choice is to predict an average score, then you are essentially choosing DiscoSCM over the traditional frameworks based on the consistency rule.

Causal Inference, DiscoSCM, Counterfactual, Personalizing

## I Introduction

In the realm of causal inference, causal queries can be organized by the Pearl Causal Hierarchy (PCH) (i.e., associational, interventional, and counterfactual) [1, 2, 3], and can be mathematically formalized as Layer Valuations [4]. There are two primary frameworks for causal modeling: Potential Outcomes (PO) [5, 6] and Structural Causal Models (SCMs). Both frameworks are anchored in the consistency rule and have mathematical equivalence [2, 7]. However, when addressing counterfactual queries, which are essentially vital for analyzing the _cause of effect_ [8, 9, 10], these frameworks often exhibit practical limitations due to the necessity for complete knowledge of the structural equations [2]. Counterfactual analysis deals with the behavior of specific individuals, thus inherently pertaining to individual-level semantics. Hence, an examination of counterfactuals formalized as Layer 3 valuations reveals their essence as a certain reduction of the joint distribution of potential outcomes. This examination identifies **the degeneracy problem** caused by the consistency rule. Subsequently, a novel causal modeling framework is introduced: the Distribution-consistency Structural Causal Model (DiscoSCM). This framework replaces the traditional consistency rule with a distribution-consistency assumption. The difference between traditional methods and DiscoSCM can be illustrated with a simple question: "If an individual with average ability takes a test and achieves an exceptionally high score due to good luck, what score would they obtain if they retake the test under identical conditions? An exceptionally high score or an average score?"
The choice here fundamentally determines whether to employ the traditional causal modeling framework or DiscoSCM. The paper is organized as follows: Section 2 briefly introduces the basics of causal modeling within a unit selection setting and identifies the degeneracy problem; Section 3 proposes the DiscoSCM framework to address the aforementioned problem; however, in Section 4, it is pointed out that there are issues with indeterminable counterfactuals; Section 5 suggests that by choosing DiscoSCM with independent potential noise, the aforementioned issues can be resolved; Section 6 showcases improved theoretical results under the novel framework, i.e., refined bounds for counterfactuals. Lastly, Section 7 offers a succinct summary and discussion.

## II The Degeneracy Problem

The consistency rule forms the cornerstone for mainstream causal modeling frameworks1 [2, 5]. Its role is to connect potential outcomes to observed data. In the PO framework, it is treated as an assumption, whereas in the SCM framework, it is considered a theorem. To illustrate, consider a running example involving an individual with a binary treatment \(T\), pre-treatment features \(X\), and outcome \(Y\). The consistency assumption posits that if a hypothetical condition materializes for a user \(i\), their potential outcome under that condition will precisely match their observed outcome. Formally, consistency can be expressed as:

Footnote 1: Details on traditional causal modeling frameworks are provided in the Appendix.

\[T_{i}=t\Rightarrow Y_{i}(t)=Y_{i} \tag{1}\]

Utilizing the consistency rule as a mathematical tool allows for the formulation of equations for the identification of causal quantities. For instance, \(P(Y(t)=y|X=x,T=t)=P(Y=y|X=x,T=t)\). In other words, it enables the translation of probability expressions involving counterfactuals into expressions that involve the ordinary conditional probabilities of observed variables. This paper argues that this condition is overly stringent. It is posited that the observed outcome merely needs to be a sample of the potential outcome under that condition, i.e., the **Distribution-consistency Assumption**2,

Footnote 2: It reflects that an empirically minded scientist might prefer to maximize the likelihood of the observed values of variables in the data, rather than imposing equality constraints on them. Therefore, the observed value of potential outcomes in the data should be consistent with their distribution rather than being mathematically identical.

\[T_{i}=t\Rightarrow Y_{i}(t)=_{d}Y_{i} \tag{2}\]

It is evident from the aforementioned instance that the formula still holds. In fact, in the subsequent sections of this paper, it will be demonstrated that under Layer 1/2 valuations, this assumption is equivalent to consistency. In the past, the majority of causal research primarily focused on estimating the _effect of cause_, specifically Layer 2 valuation. This area has achieved remarkable success, and the consistency assumption has been deemed adequate. However, in recent years, with the increasing emphasis on modeling the _cause of effect_ in both academia and industry, there has been a growing demand to address counterfactual questions, i.e., Layer 3 valuation. Despite this, progress in this specific aspect has been relatively limited. For example, the challenge of selecting units that exhibit a desired response pattern is a widespread concern across various sectors, including industry, marketing, and health science [10, 11].
This challenge fundamentally requires the computation of specific counterfactuals. This becomes particularly significant when offering personalized incentives, such as Amazon coupons or Uber discounts, to target users--a strategy extensively adopted by online platforms to boost user engagement and platform revenue [12, 13, 14]. In scenarios characterized by binary treatment and outcome, users can be segmented into four behavioral categories: compliers, always-takers, never-takers, and defiers3. The categorization of a user hinges on the realization of \((Y(0),Y(1))\). The joint distribution of \((Y(0),Y(1))\) embodies the mathematical essence of Layer 3 valuation. As such, estimating the likelihood of counterfactual events becomes crucial.

Footnote 3: Compliers respond positively if encouraged and negatively if not. Always-takers consistently respond positively, irrespective of encouragement. Never-takers invariably respond negatively, regardless of encouragement. Defiers exhibit a response contrary to the encouragement received.

A widely adopted approach to tackle the unit selection problem mentioned earlier is uplift modeling. This method primarily focuses on estimating the expected difference between potential outcomes with and without incentives for subpopulations characterized by their pre-treatment attributes. In essence, it calculates the conditional average treatment effect (CATE, \(\tau(x)=E[Y(1)-Y(0)|X=x]\)) and subsequently selects users based on their \(\tau\) values. This approach can be perceived as an attempt to estimate specific parameters of the joint distribution of potential outcomes rather than modeling the distribution directly. Although real-world applications might introduce additional challenges, such as budgetary constraints, uplift modeling, which is fundamentally based on A/B-test methodologies, relies on ad hoc heuristics that neglect the counterfactual essence of the desired behavior. Recognizing this limitation, recent research efforts have estimated bounds for counterfactual parameters and utilized the midpoint of these bounds as a selection criterion [10, 11], offering an indirect heuristic method that taps into counterfactual information. This approach raises a pertinent question: Why hasn't the joint distribution of potential outcomes \((Y(0),Y(1))\) been modeled directly? As far as we are aware, existing literature has yet to explore this direct approach. Our response to this conundrum is that, within the mainstream causal modeling frameworks grounded on the consistency assumption, it is fundamentally impractical to directly model the joint distribution in a meaningful manner. To elucidate this point, it is pivotal to understand that counterfactual assertions, such as "Suppose a person took a drug and subsequently died. Would this person have survived if he hadn't taken the drug?", inherently operate at an individual level. Thus, for a specific user \(i\) with realizations \(t_{i}\) and \(y_{i}\), the joint distribution of \(Y_{i}(0)\) and \(Y_{i}(1)\) mandates that either \(Y_{i}(0)\) or \(Y_{i}(1)\) aligns with the constant \(y_{i}\). Which of these aligns with \(y_{i}\) hinges on whether \(t_{i}\) is 0 or 1, as per the consistency assumption. This leads to a scenario where \((Y_{i}(0),Y_{i}(1))\) represents a degenerate distribution. We term this issue with the joint distribution of potential outcomes as **the degeneracy problem** for individual-level counterfactuals4.
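The gap between what experimental data determine and what the joint distribution encodes can be made concrete with a small simulation; all numbers below are assumptions chosen purely for illustration. Two couplings of \((Y(0),Y(1))\) with identical marginals yield the same ATE identified by an A/B test, yet different complier probabilities.

```python
import numpy as np

# Two data-generating processes with identical marginals of Y(0) and Y(1) but
# different joint couplings. Randomized data identify the ATE (a Layer 2
# quantity) but cannot distinguish the couplings, i.e., the Layer 3 content.
rng = np.random.default_rng(0)
n = 1_000_000
p0, p1 = 0.3, 0.5                        # P(Y(0)=1), P(Y(1)=1)

# Coupling A: comonotone potential outcomes (no defiers).
u = rng.random(n)
yA0, yA1 = (u < p0), (u < p1)

# Coupling B: independent potential outcomes.
yB0 = rng.random(n) < p0
yB1 = rng.random(n) < p1

for name, y0, y1 in [("A", yA0, yA1), ("B", yB0, yB1)]:
    ate = y1.mean() - y0.mean()          # identified by an A/B test
    p_complier = (~y0 & y1).mean()       # P(Y(0)=0, Y(1)=1): not identified
    print(f"coupling {name}: ATE = {ate:.3f}, P(complier) = {p_complier:.3f}")
```

Both couplings report an ATE of 0.2, while the complier probability is 0.20 under coupling A and 0.35 under coupling B, which illustrates why uplift modeling alone cannot answer the unit selection question.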
Stemming from the degeneracy problem, it becomes evident that any given user deterministically belongs to one of the categories: compliers, always-takers, never-takers, or defiers. This resonates with the deterministic philosophical viewpoint on natural phenomena, a perspective that Pearl adhered to while developing the SCM framework [2]. The PO framework, being mathematically synonymous with the SCM, posits that the randomness of \(Y(t)\) arises from the choices of different individuals \(i\).

Footnote 4: It is crucial to note that the above discourse relies on an implicit assumption. As articulated in [15] on page 3, the vector \(U=u\) can be perceived as an experimental "unit" representing an individual subject \(i\). Every instantiation \(U=u\) of the exogenous variables uniquely determines domain variable values \(t_{i},y_{i}\) for each user \(i\). This implicit assumption essentially posits that the premise for discussing counterfactuals is the existence of an ascertainable fact, observed or otherwise.

Some might argue that the degeneracy problem does not exist and that counterfactuals can be derived through a three-step algorithm: abduction, action, and prediction within the SCM framework. It is crucial to note that this counterfactual algorithm is designed for population-level counterfactuals. The right interpretation for the aforementioned drug example is: among all the individuals who took a drug and subsequently died, what proportion would have survived if they had not taken the drug? Additionally, it is important to highlight that this counterfactual algorithm has limited practical use. The main limitation is its dependency on complete knowledge of the structural equations for all domain variables, a situation rarely seen in real-world scenarios. Therefore, as the consistency rule leads to the degeneration of the joint distribution, posing a challenge to our modeling, a natural question arises: can we simply abandon it? Indeed, to build models for practically addressing counterfactual questions, this paper will introduce a novel causal modeling framework based on the _distribution-consistency assumption_ in the subsequent section. To make it simple and clear, consistency represents a modeling prior that can be examined through a motivating question: Consider an individual of average ability who takes a test and, due to good fortune, achieves an exceptionally high score. If this individual were to retake the test under identical external conditions, what score would they obtain? Would it be more likely to remain exceptionally high, or would it regress to an average score? In our perspective, this question does not have a definitive answer, and the subtlety lies in the interpretation of "under identical external conditions." Current causal modeling frameworks treat good fortune as an external condition and would predict an exceptionally high score by the consistency rule. If one's choice is to believe that good fortune does not necessarily reoccur, and that upon reverting to the past to retake the test this individual would most likely achieve an average score due to their average ability, then abandoning the consistency rule aligns with that choice.

## III Distribution-consistency Structural Causal Models

Recent years have witnessed a growing emphasis on modeling counterfactual questions.
Within the context of industrial personalized incentives, for a user who has shown high retention after receiving a high subsidy, if time were turned back and the same high subsidy offered, would a high retention outcome be guaranteed? Conventional causal models would predict a resounding "yes," with a 100% likelihood of high retention. While not asserting that the consistency-based model assumptions are erroneous, their underlying premise suggests that, if one could recreate the past, all conditions, including elements of luck, could be controlled. This diminishes the model's practical utility. A more pragmatic stance might be that while external conditions can be controlled, elements of luck remain unpredictable. Such a perspective equips causal models with the enhanced capability to provide counterfactual predictions that account for the variability of outcomes, such as low scores or low retention. Therefore, we introduce a corresponding innovative causal modeling framework. On one hand, it can be conceptualized as an enhanced PO framework where consistency is replaced by distribution-consistency. On the other hand, to formalize Layer Valuations, we extend the SCM as delineated below:

**Definition 1** (Distribution-consistency Structural Causal Model (DiscoSCM)).: A DiscoSCM \(\mathcal{M}\) comprises a 4-tuple \(\langle\mathbf{U},\mathbf{V},\mathcal{F},P(\mathbf{u})\rangle\). Here, \(\mathbf{U}\) represents a set of exogenous variables (or "latents") determined by external factors; \(\mathbf{V}\) is a set \(\{V_{1},V_{2},\ldots,V_{n}\}\) of endogenous variables contingent upon other variables within the model, specifically, in \(\mathbf{U}\cup\mathbf{V}\); \(\mathcal{F}\) is a set of functions \(\{f_{V_{1}},f_{V_{2}},\ldots,f_{V_{n}}\}\) such that each \(f_{V_{i}}\) maps from (the respective domains of) \(\mathbf{U}_{V_{i}}\cup\mathbf{Pa}_{V_{i}}\) to \(V_{i}\), where \(\mathbf{U}_{V_{i}}\subseteq\mathbf{U}\) and \(\mathbf{Pa}_{V_{i}}\subseteq\mathbf{V}\setminus V_{i}\). The entire set \(\mathcal{F}\) constitutes a mapping from \(\mathbf{U}\) to \(\mathbf{V}\). For \(i=1,\ldots,n\), each \(f_{V_{i}}\in\mathcal{F}\) is defined as \(v_{i}\gets f_{V_{i}}(\mathbf{pa}_{V_{i}},\mathbf{u}_{V_{i}})\). Additionally, \(P(\mathbf{u})\) is a probability function over the domain of \(\mathbf{U}\). A mathematical operator, denoted as \(do(\mathbf{x})\)5, modifies the set of structural equations \(\mathcal{F}\) to \(\mathcal{F}_{\mathbf{x}}:=\{f_{V_{i}}:V_{i}\in\mathbf{V}\setminus\mathbf{X}\}\cup\{f_{X}\gets x:X\in\mathbf{X}\}\), while preserving the same endogenous uncertainty as \(\mathbf{U}\), thereby inducing a submodel \(\langle\mathbf{U}(\mathbf{x}),\mathbf{V},\mathcal{F}_{\mathbf{x}},P(\mathbf{u})\rangle\).

Footnote 5: To maintain notation alignment with SCM, \(X\) is used here instead of \(T\) to represent the interventional variable.

It is evident that the major formulation distinction between DiscoSCM and SCM lies in the construction of the submodel induced by the _do_-operator, which changes \(\mathbf{U}\) to \(\mathbf{U}(\mathbf{x})\)6. This modification enables the prediction of an average score when re-taking a test for a specific individual who, with average ability, takes a test and achieves an exceptionally high score due to good luck. For DiscoSCM, units or individuals serve as the direct subjects of interventions/actions, positioning individual-level quantities as primitives.
henceforth, interventions and counterfactuals are defined in the following: Footnote 6: The term \(\mathbf{U}(x)\) is denoted as the “potential noise”. **Definition 2** (Layer 1, 2, 3 Valuation).: A DiscoSCM \(\mathcal{M}=\langle\mathbf{U},\mathbf{V},\mathcal{F},P(\mathbf{U})\rangle\) induces a family of joint distributions over potential outcomes \(\mathbf{Y}_{\mathbf{x}},\ldots,\mathbf{Z}_{\mathbf{w}}\), for any \(\mathbf{Y},\mathbf{Z},\ldots,\mathbf{X},\mathbf{W}\subseteq\mathbf{V}\): \[P^{\mathcal{M}}(\mathbf{y})=\sum_{\begin{subarray}{c}\{\mathbf{u}\mid\mathbf{ Y}(\mathbf{u})=\mathbf{y}\}\\ \end{subarray}}P(\mathbf{u}), \tag{3}\] \[P^{\mathcal{M}}(\mathbf{y}_{\mathbf{x}})=\sum_{\begin{subarray}{c} \{\mathbf{u}_{\mathbf{x}}\mid\mathbf{Y}_{\mathbf{x}}(\mathbf{u}_{\mathbf{x}} )=\mathbf{y}\}\\ \ldots,\mathbf{Z}_{\mathbf{w}}(\mathbf{u}_{\mathbf{w}})=\mathbf{z}\end{subarray}}P (\mathbf{u}_{\mathbf{x}}), \tag{4}\] By definition, it's clear that both Layer 1 and Layer 2 valuations in the SCM and DiscoSCM frameworks are identical. However, distinctions arise in Layer 3 valuations. These variances become apparent when examining the counterfactual parameter, e.g., PNS 7, at the individual level. Within the SCM framework, this parameter degenerates to either 0 or 1. In contrast, in the DiscoSCM framework, it can assume any value between 0 and 1 for a specific unit \(i\). Indeed, the probability of causation parameters that degenerate in SCM maintain their non-dengeneracy in DiscoSCM. This characteristic facilitates the use and modeling of these parameters, as exemplified by the introduction of a novel parameter in DiscoSCM: Footnote 7: It is a probability of causation parameter, refer to Appendix Footnote 8: To maintain notation alignment with SCM, \(X\) is used here instead of \(T\) to represent the interventional variable. Footnote 9: The term \(\mathbf{U}(x)\) is denoted as the “potential noise”. Footnote 10: It is a probability of causation parameter, refer to Appendix **Definition 3** (Probability of Consistency (PC)).: For treatment \(X\) and outcome \(Y\) with corresponding observed value \(x\) and y: \[P(x\Rightarrow y)=P(Y(x)=y|Y=y,X=x) \tag{6}\] For any unit \(i\), the PC degenerates to constant 1 in the SCM framework and is thus a parameter that only holds significance within the DiscoSCM framework. Taking a specific example to illustrate: **Example 1**.: Consider a causal model containing four variables, \(Z_{i}\), \(X_{i}\), \(T_{i}\), and \(Y_{i}\) in the personalized incentives scenario, featuring the causal structure depicted in Figure 1. The causal mechanism operates as follows: 1. \(Z_{i}\): Represents user \(i\) belonging to one of three experiment groups: random (\(Z_{i}=0\)), pure strategy (\(Z_{i}=1\)), or mixed strategy (\(Z_{i}=2\)). 2. \(X_{i}\): Denotes the pre-treatment features of user \(i\), influence both the treatment and outcome. 3. \(T_{i}\): Indicates the treatment variable, signifying the presence or absence of an incentive. For users in the random group, \(T_{i}\) is determined with equal probability between 0 and 1. Users in the pure strategy group follow a deterministic value for \(T_{i}\) based on their features, while users in the mixed strategy group have a value for \(T_{i}\) influenced by their features with some randomness. 4. \(Y_{i}\): Represents a the outcome variable for user \(i\), exemplified by a conversion (e.g., purchase or engagement). 
In mainstream causal modeling frameworks, the probability of consistency \(P(z_{i}\Rightarrow t_{i}),\forall i\) invariably equals 1, due to the consistency rule. However, within the DiscoSCM framework, the probabilities can be computed reasonably as 8: Footnote 8: According to the generation mechanism of \(T_{i}\), for users with \(z_{i}=0\) in the random group, PC equals to 0.5. In the pure strategy group where \(z_{i}=1\), the value of \(T_{i}\) is determined by user characteristics, so the PC equals to 1. Finally, for the mixed strategy group, PC falls within the range \((0,1)\), depicting a non-deterministic yet non-random relationship. \[P(z_{i}\Rightarrow t_{i})=\begin{cases}0.5&\text{if }z_{i}=0\\ 1&\text{if }z_{i}=1\\ \theta(t_{i},x_{i})\in(0,1)&\text{if }z_{i}=2\end{cases} \tag{7}\] Since individual-level valuations are primitives and population-level valuations are derivatives, we hence propose the following procedure for the population-level valuations: **Theorem 1** (Population-Level Valuations).: _Consider a population where \(A\) represents a counterfactual event (such as being a complier), and \(c\) represents observed-variable conditions (e.g., observed \(T=t\), \(Y=y\)). Then, the population-level valuations of the form \(P(A|c)\) can be computed via the following three-step algorithm:_ _Step 1 (Abduction): From the context \(c\), derive a individual selector \(S\) to define a population with a distribution \(P^{\prime}\). A sample selector can typically be defined by the posterior over the index-set of samples given the context \(c\) and a uniform prior._ _Step 2 (Valuation): Compute \(P(A_{i})\) as Layer valuations in Def. 2 for each individual \(i\)._ _Step 3 (Reduction): Obtain the population-level \(P(A)\) by summing over all individuals, which can be expressed as follows:_ \[P(A|c)=\sum_{i}P(A_{i})P^{\prime}(i) \tag{8}\] At this stage, if \(P(A_{i})\) can be easily computed, especially by learning a model from data to compute \(P(A_{i})\) practically, then the challenge of answering counterfactual questions would be largely addressed. However, this ideal scenario is far from reality, and determining \(P(A_{i})\) is indeed very challenging as illustrated in detail in the next section. ## IV Indeterminable Counterfactuals Consider a randomized experiment with 8 users, with observed outcomes as shown in Table I. Given the assumption of homogeneity, it becomes evident that irrespective of the volume of data, whether it encompasses 8 or 8 billion analogous data points, no statistical methodology can ascertain the counterfactuals, which essentially are parameters of the joint distribution \((Y(0),Y(1))\) for a specific unit. This phenomenon, wherein individual-level counterfactual information remains elusive solely based on data (even if it represents the most comprehensive and ideal dataset), is termed as _Indeterminable Counterfactuals_. SCM relies on complete knowledge of the structural equation to address this issue, rendering it devoid of practicality. DiscoSCM, devoid of the degeneracy problem, possesses a non-degenerate counterfactual distribution, making the use of statistical methods feasible. To further clarify, returning to the unit selection problem setting, let's start from the simplest DiscoSCM: **Example 2**.: Consider a DiscoSCM model for the outcome \(Y\) with binary feature \(X\) and treatment \(T\), \[Y =\epsilon\] \[Y(t) =\epsilon_{X}(t)\] where \(\epsilon\sim N(0,1)\) and \(\epsilon_{X}(t)=_{d}\epsilon\). 
This DiscoSCM has a simple solution9, as depicted in Fig. 2, wherein every bivariate normal distribution for \((Y(0),Y(1))\) has all marginal distributions as standard normal distributions. It is evident that the counterfactuals related to \((Y(0),Y(1))\) are Fig. 1: Illustration of the causal model incorporating group tag \(Z_{i}\), incentive \(T_{i}\), pre-treatment features \(X_{i}\), and outcome \(Y_{i}\) for user \(i\). determined by the correlation between potential noise \(\epsilon_{X}(0)\) and \(\epsilon_{X}(1)\). In fact, assuming \(\epsilon_{X}(t)\equiv\epsilon\) would lead to an SCM where the correlation between \(Y(0)\) and \(Y(1)\), as shown on the left side of Fig. 2, degenerates to a constant value of 1. Generally, \(Y(0)\) and \(Y(1)\) exhibit a certain correlation, manifesting heterogeneous correlation patterns contingent upon the values of \(X\), as illustrated in the middle subgraph of Fig. 2. Unfortunately, counterfactuals, with such individual-level correlation pattern depicted by the middle of Fig. 2, remain indeterminable since no correlation information resides in the data due to the fundamental problem of causal inference10 as demonstrated in Table I. So, what could we do to overcome such limitations? Footnote 10: That is, at most one potential outcome can be observed, with all the others missing [6]. When dealing with individual-level counterfactual tasks, the correlation pattern can be categorized into three scenarios: fully correlated, partially correlated, and independent. The first scenario essentially equates to an SCM model faces the degeneration problem, while the second faces the indeterminable nature brought by unobservable potential noise pattern in DiscoSCM. Fortunately, the third scenario, as depicted by the right subgraph of Fig. 2, computationally overcomes the inherent disadvantages of the first two, making DiscoSCM with independent potential noises the main subject of our following section. ## V Layer 3 Valuations Formally, the DiscoSCM with independent potential noises is defined as follows: **Definition 4**.: A DiscoSCM \(\mathcal{M}=\langle\mathbf{U},\mathbf{V},\mathcal{F},P(\mathbf{U})\rangle\) with independent potential noises induces a family of joint distributions over potential outcomes \(\mathbf{Y_{x}},\ldots,\mathbf{Z_{w}}\), for any \(\mathbf{Y},\mathbf{Z},\ldots,\mathbf{X},\mathbf{W}\subseteq\mathbf{V}\), satisfying: \[P(\mathbf{u_{x}},...,\mathbf{u_{w}})=P(\mathbf{u_{x}})\cdots P(\mathbf{u_{w}}) \tag{9}\] The term "independent potential noises" refers to the independence among the exogenous noises across different counterfactual worlds. Combined with Eq. (5), the following theorem for individual-level Layer 3 valuations can be derived: **Theorem 2**.: _For potential outcomes \(\mathbf{Y_{x}},\ldots,\mathbf{Z_{w}}\) in a DiscoSCM \(\mathcal{M}=\langle\mathbf{U},\mathbf{V},\mathcal{F},P(\mathbf{U})\rangle\) with independent potential noises:_ \[P^{\mathcal{M}}(\mathbf{y_{x}},\ldots,\mathbf{z_{w}})=P^{\mathcal{M}}( \mathbf{y_{x}})\cdots P^{\mathcal{M}}(\mathbf{z_{w}}) \tag{10}\] This theorem elegantly reduces Layer 3 valuations to Layer 2 valuations, enabling the identification of individual-level counterfactuals even without the knowledge of structural equations, a prerequisite in the traditional SCM framework. Such a result seems almost too good to be true, prompting the subsequent sections of this paper to delve deeply into a comprehensive analysis, ensuring the soundness and efficacy of this model. 
An illustrative example follows: **Example 3**.: Consider a DiscoSCM for the outcome \(Y\) with features \(X_{0},X_{1},X_{2}\) and binary treatment \(T\): \[Y =0.5I[X_{0}=1]\cdot(T+1)+0.1X_{2}\cdot\epsilon\] \[Y(t) =0.5I[X_{0}=1]\cdot(t+1)+0.1X_{2}\cdot\epsilon(t)\] where \(\epsilon,\epsilon(t)\sim N(0,1),t=0,1\) denote the noise and potential noises respectively. As an extension of Example 2, Fig. 3 showcases an RCT dataset produced by this DiscoSCM ( code). The features \(X_{0},X_{1},X_{2}\) respectively govern heterogeneous causal effects, potential noise correlations, and consistency probabilities. The top row of the figure displays three unique DiscoSCM models with identical Layer 1 and Layer 2 valuations. Rows two to four illustrate the diverse Layer 3 valuations stemming from different potential noise correlation patterns. The preceding example effectively showcased the augmented capability of DiscoSCM in capturing causal patterns. Transitioning to a theoretical lens, let's explore how Theorem 4 facilitates the identification of counterfactual parameters. Within the setting of a DiscoSCM with independent potential noises and under the assumption of homogeneous individuals for all \(t\neq t^{\prime}\), the following population-level relationship can be derived: \[P(Y(t^{\prime})=y^{\prime}|T=t,Y=y)\quad\text{(using Theorem 1)}\] \[=P(Y_{i}(t^{\prime})=y^{\prime}|T_{i}=t,Y_{i}=y)\quad\text{(by homogeneity)}\] \[=P(Y_{i}(t^{\prime})=y^{\prime})\quad\text{(by Theorem 2)}\] \[=P(Y_{i}=y^{\prime}|T_{i}=t^{\prime})\quad\text{(by Distribution-consistency)}\] \[=P(Y=y^{\prime}|T=t^{\prime})\quad\text{(due to homogeneity)} \tag{11}\] The formula presented above typifies a parameter related to the probability of causation. It can address questions such as, "Among all individuals who took a drug and subsequently died, what proportion would have survived had they not taken the drug?" This demonstrates that within the DiscoSCM framework, by adeptly leveraging homogeneity, it becomes feasible to identify counterfactual parameters directly from data. This challenges the prevailing belief that identifying counterfactuals necessitates a complete knowledge of structural equations. Due to space constraints, the nuances of practical modeling and addressing counterfactual questions Fig. 2: The simplistic DiscoSCM from Example 2 presenting three different correlation patterns. will not be further explored. The emphasis will instead shift to elucidating theoretical findings within the DiscoSCM framework. ## VI Bounds of Counterfactuals Recently, a series of studies have set forth many results on bounds for population-level counterfactuals and applied these findings to the unit selection problem [7]. Using DiscoSCM, bounds at the individual level can be further elucidated. 11. Before delving deeper, the following essential lemma is presented: Footnote 11: By Theorem 1, those individual-level theorems can directly yield a corresponding (sub-)population-level version, thus positioning it as a refined conclusion. 
**Lemma 1**.: _In a DiscoSCM, for any individual \(i\) with a binary treatment \(T_{i}\) and outcome \(Y_{i}\):_ \[P(y)=P(y_{t})P(t)+P(y_{t^{\prime}})P(t^{\prime}) \tag{12}\] By probability calculus and distribution-consistency: \[P(y)\triangleq P(Y_{i}=y)\] \[=P(Y_{i}=y|T_{i}=t)P(T_{i}=t)+P(Y_{i}=y|T_{i}=t^{\prime})P(T_{i}= t^{\prime})\] \[=P(Y_{i}(t)=y)P(T_{i}=t)+P(Y_{i}(t^{\prime})=y)P(T_{i}=t^{\prime})\] \[=P(y_{t})P(t)+P(y_{t^{\prime}})P(t^{\prime})\] The following result can then be derived : **Theorem 3**.: _In a DiscoSCM, for any individual \(i\) with an observed binary treatment \(T\), outcome \(Y\)12:_ Footnote 12: Preliminaries on probability of causation parameters PNS and PN are provided in the Appendix. \[\max\left\{\begin{array}{c}0\\ P(y_{t})-P(y_{t^{\prime}})\\ P(y)-P(y_{t^{\prime}})\\ P(y_{t})-P(y)\end{array}\right\}\leq\textit{PNS} \tag{13}\] \[\textit{PNS}\leq\min\left\{\begin{array}{c}P(y_{t})\\ P(y_{t^{\prime}})\\ P(t,y)+P(t^{\prime},y^{\prime})\\ P(y_{t})-P(y_{t^{\prime}})+\\ +P(t,y^{\prime})+P(t^{\prime},y)\end{array}\right\} \tag{14}\] \[\max\left\{\begin{array}{c}0\\ \frac{P(y)-P(y_{t^{\prime}})}{P(t,y)}\end{array}\right\}\leq\textit{PN} \tag{15}\] \[\textit{PN}\leq\min\left\{\begin{array}{c}1\\ \frac{P(y_{t^{\prime}}^{\prime})-P(t^{\prime},y^{\prime})}{P(t,y)}\end{array}\right\} \tag{16}\] Proof.: The first part of Eq. (13) is trivial. The second part is \[P(y_{t},y_{t^{\prime}}^{\prime})\geq P(y_{t})-P(y_{t^{\prime}})\] \[\Leftrightarrow P(y_{t^{\prime}})\geq P(y_{t})-P(y_{t},y_{t^{ \prime}}^{\prime})\] \[\Leftrightarrow P(y_{t^{\prime}})\geq P(y_{t},y_{t^{\prime}})\] This proves the second part. The third part of Eq. (13) is \[P(y_{t},y_{t^{\prime}}^{\prime})\geq P(y)-P(y_{t^{\prime}})\] \[\Leftrightarrow P(y_{t},y_{t^{\prime}}^{\prime})\geq P(y_{t})P(t)+P(y_{t^ {\prime}})P(t^{\prime})-P(y_{t^{\prime}})\] \[\Leftrightarrow P(y_{t},y_{t^{\prime}}^{\prime})\geq P(t)(P(y_{t})-P(y_{t^ {\prime}}))\] Fig. 3: DiscoSCM with heterogeneous causal effects, potential noise correlations, and consistency probabilities. The type of this DiscoSCM depends on the correlation pattern among potential noises \(\epsilon(t)\), Fig. 2: if the correlation coefficient is 1, it is an ordinary SCM; if there is some correlation, it is a general DiscoSCM with indeterminable counterfactuals; if the potential noises are independent, it becomes a DiscoSCM where Layer 3 individual-level counterfactuals can be reduced to Layer 2 valuations. Specifically, for individual counterfactual parameters \(corr(Y_{i}(0),Y_{i}(1))\), the second row of Fig. 3 shows that it is always 1 in the SCM, while the third row reveals that its value lies between 0 and 1 in the general DiscoSCM, showing heterogeneity according to \(X_{1}\). The fourth row of Fig. 3 demonstrates that this parameter is always 0 in a DiscoSCM with independent potential noise. To summarize, when the correlation between potential noises is 1, as is the case in SCM, knowledge of all structural equations is required to solve for counterfactuals. When potential noises exhibit some correlation, neither randomized controlled trial (RCT) or observational data can help recover related counterfactual parameters. Conversely, when potential noises are independent, Layer 3 valuation can be reduced to Layer 2, allowing them to typically be identified from the data. Using the conclusion from the second part of Eq. (13) and the condition \(P(t)\leq 1\), we can prove the third part. The fourth part of Eq. 
(13) is \[P(y_{t},y^{\prime}_{t^{\prime}})\geq P(y_{t})-P(y)\] \[\Leftrightarrow P(y_{t},y^{\prime}_{t^{\prime}})\geq P(y_{t})-P(y_{t})P(t)- P(y_{t^{\prime}})P(t^{\prime})\] \[\Leftrightarrow P(y_{t},y^{\prime}_{t^{\prime}})\geq P(t^{\prime})(P(y_{t} )-P(y_{t^{\prime}}))\] This can also be proven using the conclusion from the second part of Eq. (13) and the condition \(P(t^{\prime})\leq 1\). This concludes the proof of Eq. (13) of the theorem. The first two parts of Eq. (14) is rival. The third part is: \[P(y_{t},y^{\prime}_{t^{\prime}})\leq P(t,y)+P(t^{\prime},y^{ \prime})\] \[\Leftrightarrow P(y_{t^{\prime}})\leq P(y|t)P(t)+P(y^{\prime}|t^{ \prime})P(t^{\prime})\] \[\Leftrightarrow P(y_{t},y^{\prime}_{t^{\prime}})\leq P(t)P(y_{t} )+P(t^{\prime})P(y^{\prime}_{t^{\prime}})\] It's evident given the first two parts of Eq. (14). The fourth part of Eq. (14) is \[P(y_{t},y^{\prime}_{t^{\prime}})\leq P(y_{t})-P(y_{t^{\prime}})+ P(t,y^{\prime})+P(t^{\prime},y)\] \[\Leftrightarrow P(y_{t^{\prime}})\leq P(y_{t})-P(y_{t^{\prime}})+ P(y^{\prime}|t)P(t)+P(y|t^{\prime})P(t^{\prime})\] \[\Leftrightarrow P(y_{t^{\prime}})\leq P(y_{t})-P(y_{t^{\prime}})+ P(y^{\prime}_{t})P(t)+P(y_{t^{\prime}})P(t^{\prime})\] \[\Leftrightarrow P(y_{t},y^{\prime}_{t^{\prime}})\leq P(t^{\prime})P(y_{t} )+P(t)P(y^{\prime}_{t^{\prime}})\] This can also be proven using the first two parts of Eq. (14). This concludes the proof of Eq. (14) of the theorem. The first part of Eq. 15 is trivial. The second part is \[P(y^{\prime}_{t^{\prime}}|t,y)\geq\frac{P(y)-P(y_{t^{\prime}})}{ P(t,y)}\] \[\Leftrightarrow P(y^{\prime}_{t^{\prime}})P(t,y)\geq P(y)-P(y_{t^{ \prime}})\] \[\Leftrightarrow P(y^{\prime}_{t^{\prime}})P(y_{t})P(t)+P(y_{t^{ \prime}})\geq P(y_{t})P(t)+P(y_{t^{\prime}})P(t^{\prime})\] \[\Leftrightarrow P(y_{t^{\prime}})P(t)\geq P(y_{t})P(t)P(y_{t^{ \prime}})\] \[\Leftrightarrow 1\geq P(y_{t})\] This proves the second part of Eq. (15). The first part of Eq. 16 is trivial. The second part is \[P(y^{\prime}_{t^{\prime}}|t,y)\leq\frac{P(y^{\prime}_{t^{\prime} })-P(t^{\prime},y^{\prime})}{P(t,y)}\] \[\Leftrightarrow P(y^{\prime}_{t^{\prime}})P(t,y)\leq P(y^{\prime}_{t^{ \prime}})-P(t^{\prime},y^{\prime})\] \[\Leftrightarrow P(y^{\prime}_{t^{\prime}})P(y_{t})P(t)\leq P(y^{ \prime}_{t^{\prime}})-P(y^{\prime}|t^{\prime})P(t^{\prime})\] \[\Leftrightarrow P(y^{\prime}_{t^{\prime}})P(y_{t})P(t)\leq P(y^{ \prime}_{t^{\prime}})P(t)\] \[\Leftrightarrow 1\leq P(y_{t})\] This proves the second part of Eq. (16). When additional structural information is available, tighter bounds can also be derived. 
**Theorem 4**.: _In a DiscoSCM with independent potential noises, for any individual \(i\) with a observed binary treatment \(T\), outcome \(Y\), and a partial mediator \(Z\):_ \[\max\left\{\begin{array}{c}0,\\ P(y_{t})-P(y_{t^{\prime}}),\\ P(y)-P(y_{t^{\prime}}),\\ P(y_{t})-P(y)\end{array}\right\}\leq PNS \tag{17}\] \[\min\left\{\begin{array}{c}P(y_{t}),\\ P(y^{\prime}_{t^{\prime}}),\\ P(y,t)+P(y^{\prime},t^{\prime}),\\ P(y_{t})-P(y_{t^{\prime}})+P(y,t^{\prime})+P(y^{\prime},t),\\ \sum_{i}\sum_{s^{\prime}}\min\{P(y|z,t),P(y^{\prime}|z^{\prime},t^{\prime})\} \\ \times\min\{P(z_{t}),P(z^{\prime}_{t^{\prime}})\}\end{array}\right\}\geq PNS \tag{18}\] Proof.: Given the previously established Theorem 3, it suffices to prove the following equation: \[P(y_{t},y^{\prime}_{t^{\prime}})\leq\sum_{z}\sum_{z^{\prime}} \min\{P(y|z,t),P(y^{\prime}|z^{\prime},t^{\prime})\}\times\min\{P(z_{t}),P(z^{ \prime}_{t^{\prime}})\}\] \[\Leftrightarrow\sum_{z}\sum_{z^{\prime}}P(y_{t},y^{\prime}_{t^{ \prime}},z_{t},z^{\prime}_{t^{\prime}})\] \[\leq\sum_{z}\sum_{z^{\prime}}\min\{P(y|z,t),P(y^{\prime}|z^{ \prime},t^{\prime})\}\times\min\{P(z_{t}),P(z^{\prime}_{t^{\prime}})\}\] \[\Leftrightarrow\sum_{z}\sum_{z^{\prime}}P(y_{t},y^{\prime}_{t^{ \prime}}|z_{t},z^{\prime}_{t^{\prime}})P(z_{t},z^{\prime}_{t^{\prime}})\] \[\leq\sum_{z}\sum_{z^{\prime}}\min\{P(y|z,t),P(y^{\prime}|z^{ \prime},t^{\prime})\}\times\min\{P(z_{t}),P(z^{\prime}_{t^{\prime}})\}\] \[\Leftrightarrow\sum_{z}\sum_{z^{\prime}}P(y_{t,z},y^{\prime}_{t^{ \prime},z^{\prime}})P(z_{t},z^{\prime}_{t^{\prime}})\] \[\leq\sum_{z}\sum_{z^{\prime}}\min\{P(y_{z,t}),P(y^{\prime}_{t^{ \prime},t^{\prime}})\}\times\min\{P(z_{t}),P(z^{\prime}_{t^{\prime}})\}\] It thus left to prove: 1. \(P(y_{x,z},y^{\prime}_{t^{\prime},z^{\prime}})\leq\min\{P(y_{z,t}),P(y^{\prime}_{t^{ \prime},t^{\prime}})\}\), and 2. \(P(z_{t},z^{\prime}_{t^{\prime}})\leq\min\{P(z_{t}),P(z^{\prime}_{t^{\prime}})\}\). Both are evidently true by probability formula. ## VII Conclusion and Disscussion Answering counterfactual questions, termed as Layer 3 valuation, poses a significant challenge. Traditional frameworks exhibit inherent weaknesses due to the degeneracy problem admitting counterfactuals essentially reside at individual-level. To address the issue, DiscoSCM is introduced by incorporating a distribution-consistency assumption. It seamlessly merges the individual semantics of the former with the Layer valuations of the latter, addressing counterfactual questions based on the principle of "individual-level valuation as primitives and population-level valuations as derivatives." Practically learning a model from data for Layer 3 valuations is essential. DiscoSCM suggests that prior strategies concentrated on structural equations [16] and overlooked user/unit representation, might have been misdirected.
2309.04388
Dimension formulas for spaces of vector-valued Siegel modular forms of degree two and level two
Using a description of the cohomology of local systems on the moduli space of abelian surfaces with a full level two structure, together with a computation of Euler characteristics we find the isotypical decomposition, under the symmetric group on 6 letters, of spaces of vector-valued Siegel modular forms of degree two and level two.
Jonas Bergström, Fabien Cléry
2023-09-08T15:40:31Z
http://arxiv.org/abs/2309.04388v1
# Dimension formulas for spaces of vector-valued Siegel modular forms of degree two and level two ###### Abstract. Using a description of the cohomology of local systems on the moduli space of abelian surfaces with a full level two structure, together with a computation of Euler characteristics we find the isotypical decomposition, under the symmetric group on \(6\) letters, of spaces of vector-valued Siegel modular forms of degree two and level two. ## 1. Introduction In this paper, we refine the previously known dimension formulas for spaces \(M_{k,j}(\Gamma[2])\) of vector-valued Siegel modular forms of degree \(2\) and level \(2\), by determining their isotypical decomposition under the action of \(\operatorname{Sp}(4,F_{2})\cong\mathfrak{S}_{6}\). This extends previous work, see for instance [18, 1, 28, 29, 16, 17, 30, 12]. In particular, Tsushima gave in [29, Theorems 2, 3] a formula for the dimension of the space \(S_{k,j}(\Gamma[N])\) for any \(N\) under the conditions \(j\geqslant 1\) and \(k\geqslant 5\) or \(j=0\) and \(k\geqslant 4\). The ranges for \(j\) and \(k\) in Tsushima's dimension formula for \(N=2\) have been slightly extended in [12, Theorem 12.1]. There is an overview of dimension formulas such as these on the webpage [26]. In [4], there is a conjectural description of the motivic Euler characteristic, with its isotypical decomposition under the action of \(\mathfrak{S}_{6}\), of any symplectic local system on \(\mathcal{A}_{2}[2]\) the moduli space of abelian surfaces with a full level two structure. These conjectures were later proven in [23]. In particular, this gives us the integer-valued Euler characteristic of an isotypical component under \(\mathfrak{S}_{6}\) of any local system on \(\mathcal{A}_{2}[2]\) as a sum of a well-known value (in terms of dimensions of spaces of elliptic modular cusp forms) plus four times the dimension of the (isotypical component of the) vector space of Siegel modular cusp forms of degree \(2\) and level \(2\), see Theorem 5.3. In Section 4 we then find an effective formula to compute these integer-valued Euler characteristics. This is achieved by stratifying the moduli space \(\mathcal{A}_{2}[2]\) in terms of the automorphism groups of principally polarized abelian surfaces, which are Jacobians of smooth projective curves of genus \(2\), or products of elliptic curves. By computing the action of these automorphism groups and of \(\mathfrak{S}_{6}\), on the first cohomology group of the corresponding abelian surfaces, we can find a formula for the integer-valued Euler characteristic, see Equation (9). This is a method previously used for instance in [15]. In Section 2 we give an overview of the Siegel modular forms we are interested in, together with a short description of the Arthur packets for \(\operatorname{GSp}(4)\). Then, in Section 3 and Section 5 we include isotypical decompositions of the spaces of Siegel modular forms of degree \(2\) and level \(2\) to give a comprehensive reference for these results. Computer programs, written in Sage, which compute all results of this paper, are provided on a GitHub repository [3]. Tables with some of the results of this paper can also be found on the webpage [6]. ## 2. 
Siegel modular forms The level \(2\) congruence subgroups we are concerned with are the following ones \[\Gamma[2]=\left\{\gamma\in\Gamma:\gamma\equiv 1_{4}\bmod 2\right\},\quad \Gamma_{1}[2]=\left\{\gamma\in\Gamma:\gamma\equiv\left(\begin{smallmatrix}1_{ 2}&*\\ 0&1_{2}\end{smallmatrix}\right)\bmod 2\right\},\quad\Gamma_{0}[2]=\left\{\left( \begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\in\Gamma:c\equiv 0\bmod 2\right\},\] where \(\Gamma=\operatorname{Sp}(4,\mathbb{Z})=\left\{\gamma\in\operatorname{GL}(4, \mathbb{Z}):\gamma^{t}J\gamma=J\right\}\) with \(J=\left(\begin{smallmatrix}0&1_{2}\\ -1_{2}&0\end{smallmatrix}\right)\) and \(1_{n}\) the identity matrix of size \(n\). We clearly have the following inclusions \(\Gamma[2]<\Gamma_{1}[2]<\Gamma_{0}[2]<\Gamma\) and the successive quotients can be identified as follows \[\Gamma_{1}[2]/\Gamma[2]\cong(\mathbb{Z}/2\mathbb{Z})^{3},\quad\Gamma_{0}[2]/ \Gamma[2]\cong\mathbb{Z}/2\mathbb{Z}\times\mathfrak{S}_{4},\quad\Gamma_{0}[2 ]/\Gamma_{1}[2]\cong\mathfrak{S}_{3},\quad\Gamma/\Gamma[2]\cong\mathfrak{S} _{6},\] with \(\mathfrak{S}_{n}\) the symmetric group on \(n\) letters. As usual, thes groups act on the Siegel upper half space \(\mathfrak{H}_{2}\) of degree \(2\): \(\mathfrak{H}_{2}=\{\tau\in\operatorname{Mat}(2\times 2,\mathbb{C}):\tau^{t}=\tau, \operatorname{Im}(\tau)>0\}\) via \(\tau\mapsto\gamma\tau=(a\tau+b)(c\tau+d)^{-1}\). More details on the orbifolds of the action of the previous groups can be found in [12, Section 2]. We let \(G\) be one of the groups \(\Gamma[2],\Gamma_{1}[2],\Gamma_{0}[2]\) or \(\Gamma\). For any integer \(k\) and non-negative integer \(j\), the space of modular forms of weight \((k,j)\) on \(G\) is denoted by \(M_{k,j}(G)\) and is defined by \[M_{k,j}(G)=\{f:\mathfrak{H}_{2}\to\mathbb{C}^{j+1}\operatorname{holomorphic}|\] \[f((a\tau+b)(c\tau+d)^{-1})=\operatorname{Sym}^{j}(c\tau+d) \otimes\det(c\tau+d)^{k}f(\tau)\,\text{ for all }\gamma=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\in G\}.\] The subspace of cusp forms of \(M_{k,j}(G)\) will be denoted by \(S_{k,j}(G)\), this is the kernel of the (global) Siegel \(\Phi\)-operator. Let us make a couple of easily verified remarks. Firstly, since \(-1_{4}\) belongs to the group \(\Gamma[2]\), we have that \(M_{k,j}(\Gamma[2])=\{0\}\) if \(j\) is odd. This can be directly read off from the functional equation satisfied by any element of \(M_{k,j}(\Gamma[2])\). Therefore, from now on, we assume that \(j\)_is even._ Secondly, if \(k\) is odd then \(M_{k,j}(\Gamma[2])=S_{k,j}(\Gamma[2])\): let \(\Gamma(2)\) be the principal congruence subgroup of level \(2\) of \(\operatorname{SL}(2,\mathbb{Z})\), the (global) Siegel \(\Phi\)-operator maps \(M_{k,j}(\Gamma[2])\) to \(M_{j+k}(\Gamma(2))^{\oplus 15}\) (note 15 is the number of \(1\)-dimensional cusps of the group \(\Gamma[2]\)) and since \(-1_{2}\) belongs to \(\Gamma(2)\), the space \(M_{j+k}(\Gamma(2))\) reduces to \(0\) when \(k\) is odd (\(j\) is even). These two facts also hold for the groups \(\Gamma_{1}[2],\Gamma_{0}[2]\) and \(\Gamma\). A less easy fact is the generalisation of the Koecher principle to vector-valued Siegel modular forms, see [13, Satz 1], which implies that \[M_{k,j}(\Gamma[2])=\{0\}\,\text{ for any }k<0\text{ for any }j.\] The Petersson inner product provides an orthogonal decomposition \[M_{k,j}(G)=E_{k,j}(G)\oplus S_{k,j}(G). \tag{1}\] We call \(E_{k,j}(G)\) the space of Eisenstein series. 
The decomposition (1) can be refined according to the classification of automorphic representations of \(\operatorname{GSp}(4)\): Arthur packets. There are six different types of Arthur packets (see [2], or [25, pp. 3088-3089]): * **(G)**: general type. They can only appear in \(S_{k,j}(G)\). * **(Y)**: Yoshida type. They can only appear in \(S_{k,j}(G)\). Modular forms of this type are also called **Yoshida lifts**. * **(Q)**: Soudry type (Klingen parabolic). They can only appear in \(E_{k,j}(G)\). Modular forms of this type are also called type are also called **Klingen-Eisenstein series**. * **(P)**: Saito-Kurokawa type (Siegel parabolic). They can only appear in \(S_{k,0}(G)\), i.e. they are scalar-valued cusp forms. Modular forms of this type are also called **Saito-Kurokawa lifts**. * **(B)**: Howe-Piatetski-Shapiro type (Borel parabolic). They can only appear in the space of cusp forms, but they do not appear in \(S_{k,j}(G)\). * **(F)**: Finite type. They can only appear in \(E_{k,j}(G)\). Modular forms of this type are also called **Siegel-Eisenstein series**. So we have \[E_{k,j}(G)=E_{k,j}^{(\mathbf{P})}(G)\oplus E_{k,j}^{(\mathbf{Q})}(G)\quad \text{and}\quad S_{k,j}(G)=S_{k,j}^{(\mathbf{G})}(G)\oplus S_{k,j}^{(\mathbf{ P})}(G)\oplus S_{k,j}^{(\mathbf{Y})}(G). \tag{2}\] Since the group \(\Gamma[2]\) is a normal subgroup of \(\Gamma\) (it is the kernel of the reduction modulo \(2\)), we get an action of \(\Gamma\) on the space \(M_{k,j}(\Gamma[2])\) \[\begin{array}{c}\operatorname{Sp}(4,2)\times M_{k,j}(\Gamma[2])\to M_{k,j}( \Gamma[2])\\ (\gamma,f)\end{array}\] From this action, we deduce a group homomorphism \[\begin{array}{c}\Gamma\to\operatorname{GL}(M_{k,j}(\Gamma[2]))\\ \gamma\to\left(\begin{smallmatrix}M_{k,j}(\Gamma[2])&-&M_{k,j}(\Gamma[2])\\ f&-&f_{1k,j}\gamma^{-1}\end{smallmatrix}\right)\end{array}\] whose kernel obviously contains the group \(\Gamma[2]\). So the previous homomorphism factors through the group \(\Gamma[2]\) and we obtain a group homomorphism \[\Gamma/\Gamma[2]\cong\operatorname{Sp}(4,\mathbb{F}_{2})\cong\mathfrak{S}_{6} \to\operatorname{GL}(M_{k,j}(\Gamma[2]))\] i.e. a representation of the group \(\mathfrak{S}_{6}\) on the space \(M_{k,j}(\Gamma[2])\). Note that the second isomorphism is ambiguous due to the outer automorphism of \(\mathfrak{S}_{6}\) so we need to fix this isomorphism. We fix this isomorphism as follows: \(\mathfrak{S}_{6}=\langle(12),(123456)\rangle\) and as in [12, Equation (3.2)] (see also [18, pp. 398-399]), we set \[(12)\mapsto\left(\begin{smallmatrix}1&0&1&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{smallmatrix}\right)\bmod 2\quad\text{and}\quad(123456)\mapsto \left(\begin{smallmatrix}0&1&0&1\\ 1&0&1&0\\ 1&0&1&1\\ -1&1&0&1\end{smallmatrix}\right)\bmod 2.\] The irreducible representations of \(\mathfrak{S}_{n}\) correspond bijectively with the partitions of \(n\). The representation of the symmetric group \(\mathfrak{S}_{n}\) corresponding with the partition \(\bar{\omega}\) will be denoted by \(s[\bar{\omega}]\), with \(s[n]\) the trivial one and \(s[1^{n}]\) the alternating one. For a representation \(V\) of the symmetric group \(\mathfrak{S}_{n}\) we write \[\dim_{\mathfrak{S}_{n}}V=\sum_{\bar{\omega}}m_{s[\bar{\omega}]}(V)\cdot s[ \bar{\omega}]\in\mathbb{Z}[\mathfrak{S}_{n}], \tag{3}\] where \(\mathbb{Z}[\mathfrak{S}_{n}]\) is the representation ring and \(m_{\bar{\omega}}(V)\) is the multiplicity of the representation \(s[\bar{\omega}]\) appearing in \(V\). We call the right hand side of (3) the isotypical decomposition of \(V\). 
Knowing the isotypical decomposition of a space \(M_{k,j}(\Gamma[2])\) gives us all the information we want about the spaces \(M_{k,j}(\Gamma_{1}[2])\), \(M_{k,j}(\Gamma_{0}[2])\) and \(M_{k,j}(\Gamma)\) by representation theory: to an isotypical decomposition \(\dim_{\mathfrak{S}_{6}}M_{k,j}(\Gamma[2])=m_{s[6]}\,s[6]+m_{s[5,1]}\,s[5,1]+ \cdots+m_{s[1^{5}]}\,s[1^{6}]\) contributes \[\dim_{\mathfrak{S}_{3}}M_{k,j}(\Gamma_{1}[2])=(m_{s[6]}+m_{s[4,2]}+m_{s[2^{2} ]})s[3]+(m_{s[5,1]}+m_{s[4,2]}+m_{s[3,2,1]})s[2,1]+(m_{s[4,1^{2}]}+m_{s[3^{2}] })s[1^{3}],\] \[\dim M_{k,j}(\Gamma_{0}[2])=m_{s[6]}+m_{s[4,2]}+m_{s[2^{3}]},\] \[\dim M_{k,j}(\Gamma)=\dim M_{k,j}(\Gamma[2])^{s[6]}=m_{s[6]},\] \[\dim M_{k,j}(\Gamma,\varepsilon)=\dim M_{k,j}(\Gamma[2])^{s[1^{6}]}=m_{s[1^{6 }]}.\] Here, \(\varepsilon\) denotes the unique non-trivial character of \(\Gamma\), see [10, Section 12] for a brief description of this character. Therefore we focus on the spaces \(M_{k,j}(\Gamma[2])\) in the sequel. In the case of scalar-valued modular forms, the previous decompositions allow us to recover the results given in [24, Appendix A] for the groups \(\Gamma,\Gamma_{0}[2]\) and \(\Gamma[2]\). In the case of scalar-valued modular forms we write \(M_{k}(\Gamma[2])=M_{k,0}(\Gamma[2])\) and \(S_{k}(\Gamma[2])=S_{k,0}(\Gamma[2])\). ## 3. Isotypical decompositions in the scalar-valued case By the Koecher principle, we know that \(M_{k}(\Gamma[2])=\{0\}\) for \(k<0\). The following theorem is due to Igusa, see [18, p.398]. **Theorem 3.1** (Igusa).: _We have_ \[\dim M_{k}(\Gamma[2])=\begin{cases}\frac{(k+1)(k^{2}+2k+12)}{12}&\text{ if }k \geqslant 0\text{ even}\\ \dim M_{k-5}(\Gamma[2])&\text{ if }k\geqslant 1\text{ odd}.\end{cases}\] For \(k\) odd, the last equality comes from \(M_{k}(\Gamma[2])=S_{k}(\Gamma[2])=\chi_{5}\cdot M_{k-5}(\Gamma[2])\) where \(\chi_{5}\) denotes the unique cusp form, up to a multiplicative constant, generating the space \(S_{5}(\Gamma[2])\). In fact, Igusa did more than computing the dimension of the spaces \(M_{k}(\Gamma[2])\). He also computed the characters of \(\mathfrak{S}_{6}\) on the space \(M_{k}(\Gamma[2])\) (see [18, Theorem 2]) and he showed that as \(\mathfrak{S}_{6}\)-representation for \(k\) even we have \[\dim_{\mathfrak{S}_{6}}M_{k}(\Gamma[2])=\operatorname{Sym}^{k/2}(s[2^{3}])- \begin{cases}0&\text{ if }k\in\{0,2,4,6\}\\ \operatorname{Sym}^{k/2-4}(s[2^{3}])&\text{ if }k\geqslant 8,\end{cases}\] where we put \(\operatorname{Sym}^{0}(s[\bar{\omega}])=s[n]\) for any irreducible representation \(s[\bar{\omega}]\) of \(\mathfrak{S}_{n}\). Note that the relation appearing in weight \(8\) defines the Igusa quartic. From the results of Igusa, we deduce the generating series for the multiplicity of the irreducible representations of \(\mathfrak{S}_{6}\) in \(M_{k}(\Gamma[2])\): \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \([\bar{\omega}]\) & [6] & \([5,1]\) & \([4,2]\) & \([4,1^{2}]\) & \([3^{2}]\) & \([3,2,1]\) & \([3,1^{3}]\) & \([2^{3}]\) & \([2^{2},1^{2}]\) & \([2,1^{4}]\) & \([1^{6}]\) \\ \hline \(\dim s[\bar{\omega}]\) & \(1\) & \(5\) & \(9\) & \(10\) & \(5\) & \(16\) & \(10\) & \(5\) & \(9\) & \(5\) & \(1\) \\ \end{tabular} \end{table} Table 1. Irreducible representations of \(\mathfrak{S}_{6}\) and their dimensions **Remark 3.2**.: A couple of sanity checks. 
Firstly, \[\sum_{k\geqslant 0}\sum_{s\{\alpha\}}\dim(s[\alpha])m_{s[\alpha]}(M_{k}(\Gamma[2] ))\,t^{k}=\sum_{k\geqslant 0}\dim M_{k}(\Gamma[2])\,t^{k}=\frac{(1+t^{2})(1+t^{4})(1+t^{5} )}{(1-t^{2})^{4}},\] which is in agreement with Theorem 3.1. Secondly, the generating series for the dimension of spaces of modular forms on \(\Gamma_{0}[2]\) is given by (see [24, Appendix A.1] and the references therein) \[\sum_{k\geqslant 0}\dim M_{k}(\Gamma_{0}[2])\,t^{k}=\frac{1+t^{19}}{(1-t^{2})(1 -t^{4})^{2}(1-t^{6})}.\] We checked that by adding the generating series for the multiplicities of the irreducible representations \(s[6],s[4,2]\) and \(s[2^{3}]\) we recover this formula. Let us give the first few isotypical decompositions of \(M_{k}(\Gamma[2])\), we put \(d=\dim M_{k}(\Gamma[2])\) \[\begin{array}{c|c|c|c|c|c|c|c|c|c|c|c}s[\partial]&s[6]&s[5,1]&s[4,2]&s[4, 2^{1}]&s[3^{2}]&s[3,2,1]&s[3,1^{3}]&s[2^{3}]&s[2^{2},1^{2}]&s[2,1^{4}]&s[1^{6} ]\\ \dim s[\partial]&1&5&9&10&5&16&10&5&9&5&1\\ \hline\hline k&&&&&&&&&&&&d\\ \hline 0&1&0&0&0&0&0&0&0&0&0&0&0&1\\ 1&0&0&0&0&0&0&0&0&0&0&0&0\\ 2&0&0&0&0&0&0&0&1&0&0&0&5\\ 3&0&0&0&0&0&0&0&0&0&0&0&0\\ 4&1&0&1&0&0&0&0&1&0&0&0&15\\ 5&0&0&0&0&0&0&0&0&0&0&1&1\\ 6&1&0&1&0&0&0&1&2&0&1&0&35\\ 7&0&0&0&0&1&0&0&0&0&0&0&5\\ 8&1&0&3&0&0&1&1&3&0&0&0&69\\ 9&0&0&0&0&1&0&0&0&1&0&1&15\\ 10&2&0&3&0&0&2&3&4&0&2&0&121\\ 11&0&1&0&1&2&0&0&0&1&0&1&35\\ \end{array}\] Next, we give the dimension of the various pieces of the space \(M_{k}(\Gamma[2])\) as in (1) and (2) and also their isotypical decomposition. We distinguish two cases according to the parity of \(k\). ### Isotypical decomposition of \(M_{2k+1}(\Gamma[2])\) We already have seen that \(M_{2k+1}(\Gamma[2])=S_{2k+1}(\Gamma[2])\) and from Theorem 3.1 we deduce \(\dim S_{1}(\Gamma[2])=\dim S_{3}(\Gamma[2])=0\) and \[\dim S_{2k+1}(\Gamma[2])=(2k^{3}-9k^{2}+19k-15)/3\quad\text{for}\quad k \geqslant 2.\] Therefore the generating series for \(\dim S_{2k+1}(\Gamma[2])\) is given by \[\sum_{k\geqslant 0}\dim S_{2k+1}(\Gamma[2])\,t^{2k+1}=\frac{t^{5}(1+t^{2}+t^{4} +t^{6})}{(1-t^{2})^{4}}.\] We have seen that for \(k\geqslant 0\) we have \(M_{2k+1}(\Gamma[2])=S_{2k+1}(\Gamma[2])=\chi_{5}\cdot M_{2k-4}(\Gamma[2])\) and since the cusp form \(\chi_{5}\) is \(\mathfrak{S}_{6}\)-anti-invariant (i.e. 
it occurs in the alternating representation \(s[1^{6}]\)) we get \[\dim_{\mathfrak{S}_{6}}M_{2k+1}(\Gamma[2])=\dim_{\mathfrak{S}_{6}}S_{2k+1}( \Gamma[2])=s[1^{6}]\otimes\dim_{\mathfrak{S}_{6}}M_{2k-4}(\Gamma[2]).\] From the generating series of the multiplicities of the irreducible representations of \(\mathfrak{S}_{6}\) previously given, we deduce \[\begin{array}{c|c}s[\alpha]&\sum_{k\geqslant 0}m_{s[\alpha]}(S_{2k+1}( \Gamma[2]))\;t^{2k+1}\\ \hline s[6]&\frac{t^{35}}{(1-t^{4})(1-t^{6})(1-t^{10})(1-t^{2k^{3}})}\\ \hline s[5,1]&\frac{t^{3}}{(1-(t^{2})(1-t^{6}))^{2}}\\ \hline s[4,2]&\frac{t^{10}}{(1-t^{2})(1-t^{2})(1-t^{10})}\\ \hline s[4,1^{2}]&\frac{t^{11}(1+t^{4})}{(1-t^{2})(1-t^{6})(1-t^{6})(1-t^{12} )}\\ \hline s[3^{2}]&\frac{t^{2}}{(1-t^{2})(1-t^{2})(1-t^{6})(1-t^{12})}\\ \hline s[3,2,1]&\frac{t^{2}}{(1-t^{2})(1-t^{2})(1-t^{2})}(1-t^{10})\\ \hline s[3,1^{3}]&\frac{t^{2}}{(1-t^{2})(1-t^{3})(1-t^{6})(1-t^{12})}\\ \hline s[2^{3}]&\frac{t^{2}}{(1-t^{2})(1-t^{4})(1-t^{6})(1-t^{12})}\\ \hline s[2^{2},1^{2}]&\frac{t^{2}}{(1-t^{2})(1-t^{2})(1-t^{10})}\\ \hline s[2,1^{4}]&\frac{t^{17}}{(1-t^{4})(1-t^{6}))^{2}}\\ \hline s[1^{6}]&\frac{t^{2}}{(1-t^{4})(1-t^{6})(1-t^{10})(1-t^{12})}\\ \end{array}\] Let us give the first few isotypical decompositions of \(S_{2k+1}(\Gamma[2])\), we put \(d=\dim S_{2k+1}(\Gamma[2])\). \[\begin{array}{c| For \(k>2\), the dimensions of \(S_{k}^{\pm}(\Gamma_{0}(2))^{\text{new}}\) are given by (see [20, Theorem 2.2]) \[d_{2,k}^{\pm}=\left\{\begin{array}{ccc}(d_{2,k}\pm 1)/2&\text{if}&k\equiv 0\mod 8 \\ (d_{2,k}\mp 1)/2&\text{if}&k\equiv 2\mod 8\\ d_{2,k}/2&\text{if}&k\equiv 4,6\mod 8.\end{array}\right.\] So the generating series for the multiplicities of the irreducible representations \(s[5,1]\), \(s[3^{2}]\) and \(s[1^{6}]\) in \(S_{2k+1}^{(\mathbf{P})}(\Gamma[2])\) for \(k\geqslant 0\) are given by \[\frac{s[\alpha]}{\sum_{k\geqslant 0}m_{s[\alpha]}(S_{2k+1}^{(\mathbf{P})}( \Gamma[2]))\,t^{2k+1}}\,\,\frac{t^{11}}{(1-t^{4})(1-t^{6})}\,\,\frac{t^{7}}{(1- t^{2})(1-t^{6})}\,\,\frac{t^{5}}{(1-t^{4})(1-t^{6})}\] Keeping in mind that \(\dim s[5,1]=\dim s[3^{2}]=5\) and \(\dim s[1^{6}]=1\), we get \[\sum_{k\geqslant 0}\dim S_{2k+1}^{(\mathbf{P})}(\Gamma[2])\,t^{2k+1}=\frac{t^{ 5}+5t^{7}+5t^{9}+5t^{11}}{(1-t^{4})(1-t^{6})}.\] From this we deduce the generating series for \(\dim S_{2k+1}^{(\mathbf{G})}(\Gamma[2])\) \[\sum_{k\geqslant 0}\dim S_{2k+1}^{(\mathbf{G})}(\Gamma[2])\,t^{2k+1}=\sum_{k \geqslant 0}\left(\dim S_{2k+1}(\Gamma[2])-\dim S_{2k+1}^{(\mathbf{P})}( \Gamma[2])\right)\,t^{2k+1}=\frac{t^{9}(t^{8}-2t^{6}+10t^{4}+6t^{2}+9)}{(1-t^{ 2})^{3}(1-t^{6})(1+t^{2})}.\] We also deduce the generating series for the multiplicities of the irreducible representations in \(S_{2k+1}^{(\mathbf{G})}(\Gamma[2])\) for \(k\geqslant 0\) * for \(s[\alpha]\in\{s[6],s[4,2],s[4,1^{2}],s[3,2,1],s[3,1^{3}],s[2^{3}],s[2^{2},1^{2} ],s[2,1^{4}]\}\) we have \[m_{s[\alpha]}(S_{2k+1}^{(\mathbf{G})}(\Gamma[2]))=m_{s[\alpha]}(S_{2k+1}( \Gamma[2]))\] * for \(s[\alpha]\in\{s[5,1],s[3^{2}],s[1^{6}]\}\) we have \[m_{s[\alpha]}(S_{2k+1}^{(\mathbf{G})}(\Gamma[2]))=m_{s[\alpha]}(S_{2k+1}( \Gamma[2]))-m_{s[\alpha]}(S_{2k+1}^{(\mathbf{P})}(\Gamma[2]))\] so \[\frac{s[\alpha]}{\sum_{k\geqslant 0}m_{s[\alpha]}(S_{2k+1}^{(\mathbf{G})}( \Gamma[2]))\,t^{2k+1}}\,\,\frac{t^{15}(1+t^{2}-t^{2})}{((1-t^{2})(1-t^{3})(1-t^ {6})(1-t^{2})}\,\,\frac{t^{15}(1+t^{2}-t^{12})}{(1-t^{3})(1-t^{6})(1-t^{6})(1-t ^{12})}\] The next table gives the first few isotypical decompositions of \(S_{2k+1}(\Gamma[2])\), we indicate the 
multiplicities of Saito-Kurakawa lifts in blue and those of the general type in black, we put \(d_{P}=\dim S_{k}^{(\mathbf{P})}(\Gamma[2])\) and \(d_{G}=\dim S_{k}^{(\mathbf{G})}(\Gamma[2])\) [MISSING_PAGE_POST] \[\begin{array}{c| **Remark 3.3**.: As a sanity check, we verify \[\sum_{k\geqslant 0}\sum_{s[\omega]}\dim(s[\omega])\,m_{s[\omega]}(M_{2k}( \Gamma[2]))\,t^{2k}=\frac{(1+t^{2})(1+t^{4})}{(1-t^{2})^{4}}=\sum_{k\geqslant 0 }\dim M_{2k}(\Gamma[2])\,t^{2k}\] in agreement with Theorem 3.1. Let us give the first few isotypical decompositions of \(M_{2k}(\Gamma[2])\), we put \(d=\dim M_{2k}(\Gamma[2])\) \[\begin{array}{ From the previous table, Theorem 3.1 and the isotypical decomposition of \(E^{(\mathbf{P})}_{2k}(\Gamma[2])\), we deduce that \(\dim_{\mathfrak{S}_{6}}E^{(\mathbf{Q})}_{0}(\Gamma[2])=\dim_{\mathfrak{S}_{6}}E ^{(\mathbf{Q})}_{2}(\Gamma[2])=0\). For \(k\geqslant 2\), Proposition 13.1 of [12] gives \[\dim_{\mathfrak{S}_{6}}E^{(\mathbf{Q})}_{2k}(\Gamma[2])=\operatorname{Ind}^{ \mathfrak{S}_{6}}_{H}\big{(}\dim_{\mathfrak{S}_{3}}S_{2k}(\Gamma(2))\big{)}\] where \(H\) denotes the stabiliser in \(\mathfrak{S}_{6}\) of one of the \(1\)-dimensional boundary components of the Satake compactification of \(\Gamma[2]\backslash\mathfrak{H}_{2}\). Note that \(H\) is of order \(48\) and recall (for more details see [12, Section 2]) that we have \[\frac{s[\partial]}{\operatorname{Ind}^{\mathfrak{S}_{6}}_{H}(s[\partial])} \quad s[6]\oplus s[5,1]\oplus s[4,2]\quad s[4,2]\oplus s[3,2,1]\oplus s[2^{3} ]\quad s[3,1^{3}]\oplus s[2,1^{4}]\] Therefore, for \(k\geqslant 2\), the isotypical decomposition of the space \(E^{(\mathbf{Q})}_{2k}(\Gamma[2])\) is as follows \[\dim_{\mathfrak{S}_{6}}E^{(\mathbf{Q})}_{2k}(\Gamma[2])= d_{1,2k}(s[6]+s[5,1])+(2d_{1,2k}+d_{2,2k})s[4,2]+(d_{1,2k}+d_{2,2k})(s[3,2,1]+s[2^{3}])\] \[+d_{4,2k}(s[3,1^{3}]+s[2,1^{4}]).\] Putting this together we get the generating series for the multiplicities of the irreducible representations of \(\mathfrak{S}_{6}\) in \(E^{(\mathbf{P})}_{2k}(\Gamma[2])\) and \(E^{(\mathbf{Q})}_{2k}(\Gamma[2])\) \[\begin{array}{c|c|c}s[\partial]&\sum_{k\geqslant 0}m_{s[\partial]}(E^{( \mathbf{P})}_{2k}(\Gamma[2]))\;t^{2k}&\sum_{k\geqslant 0}m_{s[\partial]}(E^{( \mathbf{Q})}_{2k}(\Gamma[2]))\;t^{2k}\\ \hline s[6]&\frac{1-t^{2}+t^{2}}{1-t^{2}}&\frac{1(-t^{2})(1-t^{2})}{t^{2}}\\ \hline s[5,1]&0&\frac{t^{2}}{(1-t^{2})(1-t^{2})}\\ \hline s[4,2]&\frac{t^{4}}{1-t^{2}}&\frac{t^{2}}{(1-t^{2})(1-t^{4})}\\ \hline s[3,2,1]&0&\frac{t^{2}}{(1-t^{2})(1-t^{2})}\\ \hline s[3,1^{3}]&0&\frac{t^{2}}{(1-t^{4})(1-t^{4})}\\ \hline s[2^{3}]&\frac{t^{2}}{1-t^{2}}&\frac{t^{2}}{(1-t^{2})(1-t^{4})}\\ \hline s[2,1^{4}]&0&\frac{t^{4}}{(1-t^{4})(1-t^{6})}\\ \end{array}\] and \(0\) for \(s[4,1^{2}],s[3^{2}],s[2^{2},1^{2}]\) and \(s[1^{6}]\). **Remark 3.4**.: The isotypical decomposition of the space \(S_{2k}(\Gamma(2))\) in [12, Proposition 13.1] was written as \[\dim_{\mathfrak{S}_{3}}S_{2k}(\Gamma(2))=\operatorname{Sym}^{k}(s[2,1])- \left\{\begin{array}{cc}s[2,1]&\text{if}&k=1\\ s[3]+s[2,1]&\text{if}&k\geqslant 2.\end{array}\right.\] This directly gives \(\dim S_{4k}(\Gamma(2))=2(k-1)\) for \(k\geqslant 1\), this can also be checked by using (5), (6) and \(\dim S_{4k}(\operatorname{SL}(2,\mathbb{Z}))=\lfloor k/3\rfloor\). 
As a sanity check, we verify \[\sum_{k\geqslant 0}\sum_{s[\partial]}\dim(s[\partial])\,m_{s[\partial]}(E^{( \mathbf{Q})}_{2k}(\Gamma[2]))\,t^{2k}=15\frac{t^{6}}{(1-t^{2})^{2}}=15\sum_{k \geqslant 0}\dim S_{2k}(\Gamma(2))\,t^{2k}\] in agreement with \(E^{(\mathbf{Q})}_{2k}(\Gamma[2])\cong S_{2k}(\Gamma(2))^{\oplus 15}\). The next table gives the first few isotypical decompositions of \(E_{2k}(\Gamma[2])\) where we indicate the multiplicities of the Siegel-Eisenstein part in blue and those of the Klingen-Eisenstein part in black, we put \(d_{F}=\dim E^{(\mathbf{P})}_{k}(\Gamma[2])\) and \(d_{Q}=\dim E^{(\mathbf{Q})}_{k}(\Gamma[2])\) \[\begin{array}{c|c|c|c|c|c|c|c|c|c|c|c|c|c}s[\partial]&s[6]&s[5,1]&s[4,2]&s[4, 1^{2}]&s[3^{2}]&s[3,2,1]&s[3,1^{3}]&s[2^{3}]&s[2^{2},1^{2}]&s[2,1^{4}]&s[1^{6}] \\ \dim s[\partial]&1&5&9&10&5&16&10&5&9&5&1\\ \hline\hline k&&&&&&&&&&&&&&&&d=d_{F}+d_{Q}\\ \hline 0&1&0&0&0&0&0&0&0&0&0&0&0&1+0\\ 2&0&0&0&0&0&0&1&0&0&0&5+0\\ 4&1&0&1&0&0&0&1&0&0&0&15+0\\ 6&1&0&1&0&0&0&1&1&0&1&0&15+15\\ 8&1&0&1+1&0&0&1&0&1+1&0&0&0&15+30\\ 10&1&0&1+1&0&0&1&1&1+1&0&1&0&15+45\\ 12&1+1&1&1+2&0&0&1&1&1+1&0&1&0&15+60\\ 14&1&0&1+2&0&0&2&1&1+2&0&1&0&15+75\\ \end{array}\] #### 3.2.2. **Isotypical decomposition of \(S_{2k}(\Gamma[2])\)** For \(k\geqslant 2\), we know, see [28, pp. 882-883], that \[\dim S_{2k}(\Gamma[2])=\dim M_{2k}(\Gamma[2])-15(k-2)-15=(k-2)(2k^{2}+7k-24)/3. \tag{7}\] We also know \(S_{0}(\Gamma[2])=S_{2}(\Gamma[2])=\{0\}\). The generating series for the dimension of the spaces \(S_{2k}(\Gamma[2])\) is therefore given by \[\sum_{k\geqslant 0}\dim S_{2k}(\Gamma[2])\;t^{2k}=\frac{t^{6}(5+4t^{2}-5t^{4})} {(1-t^{2})^{4}}.\] By definition of cusp forms, for \(k\geqslant 0\) we have \[m_{s[\alpha]}(S_{2k}(\Gamma[2]))=m_{s[\alpha]}(M_{2k}(\Gamma[2]))-\Big{(}m_{s[ \alpha]}(E_{2k}^{(\textbf{P})}(\Gamma[2]))+m_{s[\alpha]}(E_{2k}^{(\textbf{Q} )}(\Gamma[2]))\Big{)}.\] So the generating series for the multiplicities of the irreducible representations of \(\mathfrak{S}_{6}\) in \(S_{2k}(\Gamma[2])\) are given by \[\begin{array}{c|c}s[\bar{\omega}]&\sum_{k\geqslant 0}m_{s[\alpha]}(S_{2k}( \Gamma[2]))\;t^{2k}\\ \hline s[6]&\frac{t^{10}(1+t^{2}-t^{12})}{(1-t^{2})(1-t^{6})(1-t^{10})(1-t^{12 })}\\ \hline s[5,1]&\frac{t^{16}(1+t^{2}-t^{6})}{(1-t^{1})(1-t^{10})(1-t^{10})}\\ \hline s[4,2]&\frac{t^{10}(1-t^{2}+t^{10})}{(1-t^{2})(1-t^{10})(1-t^{10})}\\ \hline s[4,1^{2}]&\frac{t^{12}(1+t^{4})}{(1-t^{2})(1-t^{10})(1-t^{12})}\\ \hline s[3^{2}]&\frac{t^{20}(1-t^{2})}{(1-t^{2})(1-t^{10})(1-t^{12})}\\ \hline s[3,2,1]&\frac{t^{10}(1+t^{2}+t^{2}+t^{2}-t^{10})}{(1-t^{2})(1-t^{10})( 1-t^{10})(1-t^{12})}\\ \hline s[3,1^{3}]&\frac{t^{10}(1+t^{2}+t^{2}-t^{10})}{(1-t^{2})(1-t^{10})(1-t^{ 10})(1-t^{12})}\\ \hline s[2^{3}]&\frac{t^{10}(1-t^{2}+t^{2}-t^{10})}{(1-t^{2})(1-t^{10})(1-t^{ 12})}\\ \hline s[2^{2},1^{2}]&\frac{t^{14}}{(1-t^{2})(1-t^{2})(1-t^{10})}\\ \hline s[2,1^{4}]&\frac{t^{10}(1-t^{2}-t^{6})}{(1-t^{1})(1-t^{10})(1-t^{6})(1-t^ {6})^{2}}\\ \hline s[1^{6}]&\frac{t^{20}}{(1-t^{2})(1-t^{6})(1-t^{10})(1-t^{12})}\\ \hline\end{array}\] Again, Conjecture 6.6 of [4] now proved by Rosner (see [23, Section 5]) tells us that only the Arthur packets **(P)** and **(G)** can occur in \(S_{2k}(\Gamma[2])\) and so \[S_{2k}(\Gamma[2])=S_{2k}^{(\textbf{G})}(\Gamma[2])\oplus S_{2k}^{(\textbf{P} )}(\Gamma[2]).\] Moreover Conjecture 6.6 of [4] gives the isotypical decomposition of \(S_{2k}^{(\textbf{P})}(\Gamma[2])\): \[\dim_{\mathfrak{S}_{6}}S_{2k}^{(\textbf{P})}(\Gamma[2])=d_{1,4k-2}\,s[6]+(d_{ 
1,4k-2}+d_{2,4k-2}^{+})\,s[4,2]+(d_{2,4k-2}+d_{2,4k-2}^{-})\,s[2^{3}]\] where the integers \(d_{N,k}\) and \(d_{N,k}^{\pm}\) are defined as in (4). The generating series for the multiplicity of the irreducible representations \(s[6]\), \(s[4,2]\) and \(s[2^{3}]\) in \(S_{2k}^{(\textbf{P})}(\Gamma[2])\) for \(k\geqslant 0\) are therefore given by \[\frac{s[\bar{\omega}]}{\sum_{k\geqslant 0}m_{s[\alpha]}(S_{2k}^{(\textbf{P})}( \Gamma[2]))\;t^{2k}}\;\frac{t^{10}}{(1-t^{2})(1-t^{6})}\;\frac{t^{8}}{(1-t^{2 })(1-t^{4})}\;\frac{t^{6}}{(1-t^{2})(1-t^{4})}\] Keeping in mind that \(\dim s[6]=1\), \(\dim s[4,2]=9\) and \(\dim s[2^{3}]=5\), we deduce \[\sum_{k\geqslant 0}\dim S_{2k}^{(\textbf{P})}(\Gamma[2])\,t^{2k}=\frac{t^{6}(5+14t ^{2}+15t^{4}+10t^{6})}{(1-t^{4})(1-t^{6})}.\] From this we get the generating series for \(\dim S_{2k}^{(\textbf{G})}(\Gamma[2])\): \[\sum_{k\geqslant 0}\dim S_{2k}^{(\textbf{G})}(\Gamma[2])\,t^{2k}=\sum_{k \geqslant 0}\big{(}\dim S_{2k}(\Gamma[2])-\dim S_{2k}^{(\textbf{P})}(\Gamma[2] )\big{)}\,t^{2k}=\frac{t^{8}(10+21t^{2}+9t^{4}-t^{6}-15t^{8})}{(1-t^{2})^{2}(1 -t^{4})(1-t^{6})}\] and also the generating series for the multiplicity of the irreducible representations in \(S_{2k}^{(\textbf{G})}(\Gamma[2])\) for \(k\geqslant 0\) * for \(s[\omega]\in\{s[5,1],s[4,1^{2}],s[3^{2}],s[3,2,1],s[3,1^{3}],s[2^{2},1^{2}],s[2,1^{4 }],s[1^{6}]\}\) we have \[m_{s[\omega]}(S^{(\mathbf{G})}_{2k}(\Gamma[2]))=m_{s[\omega]}(S_{2k}(\Gamma[2]))\] * for \(s[\omega]\in\{s[6],s[4,2],s[2^{3}]\}\) we have \[m_{s[\omega]}(S^{(\mathbf{G})}_{2k}(\Gamma[2]))=m_{s[\omega]}(S_{2k}(\Gamma[2] ))-m_{s[\omega]}(S^{(\mathbf{G})}_{2k}(\Gamma[2]))\] so \[\frac{s[\omega]}{\sum_{k\geqslant 0}m_{s[\omega]}(S^{(\mathbf{G})}_{2k}( \Gamma[2]))\,t^{2k}}\,\,\frac{t^{20}(1+t^{2}+t^{4}-t^{12}-t^{14})}{(1-t^{5})(1 -t^{6})(1-t^{10})(1-t^{12})}\,\,\frac{t^{12}(1+t^{2}-t^{10})}{(1-t^{2})(1-t^{2 })(1-t^{2})^{2}(1-t^{10})}\,\,\frac{t^{12}(1+t^{2}-t^{12})}{(1-t^{2})(1-t^{4})( 1-t^{6})(1-t^{12})}\] ## 4. Euler characteristics of local systems Let \(\mathcal{A}_{2}[2]\) be the moduli space of principally polarized abelian surfaces equipped with a full level two structure. This is a smooth Deligne-Mumford stack defined over \(\mathrm{Spec}(\mathbb{Z}[1/2])\). The space \(\mathcal{A}_{2}[2]\) comes equipped with a natural action of the symmetric group \(\mathfrak{S}_{6}\cong\mathrm{GSp}(4,\mathbb{Z}/2)\). Let \(\pi:\mathcal{X}\to\mathcal{A}_{2}[2]\) denote the universal object and define the local system \(\mathbb{V}=R^{1}\pi_{*}\mathbb{C}\) on \((\mathcal{A}_{2}[2])_{\mathbb{C}}\). To each pair of integers \((l,m)\), with \(l\geq m\geq 0\), we get a local system \(\mathbb{V}_{l,m}\) from the corresponding irreducible representation of \(\mathrm{GSp}(4)\). The moduli space \(\mathcal{A}_{2}[2]\) can be identified with the union of the moduli space of tuples \((C,r_{1},\dots,r_{6})\) where \(C\) is either a genus \(2\) curve or an unordered pair of elliptic curves (note that curves are assumed to be projective, irreducible and smooth) intersecting in the point at infinity, and where \((r_{1},\dots,r_{6})\) is a \(6\)-tuple of marked Weierstrass points (distinct from infinity, in the elliptic curve case). Denote the strata respectively by \(\mathcal{A}_{2}[2]\) and \(\mathcal{A}_{1,1}[2]\). With this identification, the action of \(\mathfrak{S}_{6}\) is by permutation of the marked Weierstrass points. For more details about the above cf. [4]. 
Let \(\psi:(\mathcal{A}_{2}[2])_{\mathbb{C}}\to\mathcal{A}_{2}[2]\) denote the coarse moduli space, respectively \(M_{2}[2]\) and \(A_{1,1}[2]\), and put \(V_{l,m}=\psi_{*}\mathbb{V}_{l,m}\). There is an induced action of \(\mathfrak{S}_{6}\) on the compactly supported Betti cohomology groups \(H^{i}_{c}\) of these spaces with coefficients in \(V_{l,m}\). For any partition \(\omega=[6^{\omega_{6}},\dots,1^{\omega_{1}}]\) of \(6\), put \(h^{i}_{c,\omega}=m_{s[\omega]}(H^{i}_{c})\). We will now identify the representation ring \(\mathbb{Z}[\mathfrak{S}_{6}]\) of \(\mathfrak{S}_{6}\) with the ring of symmetric polynomials. With this interpretation, for a partition \(\omega\) of \(6\), \(s[\omega]\) equals the corresponding Schur polynomial. Let also \(p_{i}\) denote the \(i\)th power sum polynomial and put \(p_{\omega}=p_{1}^{\omega_{1}}\cdots p_{6}^{\omega_{6}}\). Moreover, for any \(\lambda=(l,m)\), let \(s_{<\lambda>}\) denote the symplectic Schur polynomial in four variables associated to \(\lambda\), see [14, Appendix A]. ### Formulas for the Euler characteristics The aim of this section is to give a formula, for any \(\lambda=(l,m)\), of the \(\mathfrak{S}_{6}\)-equivariant Euler characteristic, \[E_{c}(A_{2}[2],V_{\lambda})=\sum_{\omega\in-6}E_{c,\omega}(A_{2}[2],V_{ \lambda})s[\omega]\in\mathbb{Z}[\mathfrak{S}_{6}],\] where \[E_{c,\omega}(A_{2}[2],V_{\lambda})=\sum_{i=0}^{4}(-1)^{i}h^{i}_{c,\omega}(A_{ 2}[2],V_{\lambda})\in\mathbb{Z}.\] Stratify the spaces \(X_{1}=M_{2}[2]\) and \(X_{2}=A_{1,1}[2]\) (or equivalently \(M_{2}\) and \(A_{1,1}\), the corresponding coarse moduli spaces without a level two structure), into strata \(\Sigma_{i}(G)\), for \(G\) a finite group, consisting of the curves corresponding to points of \(X_{i}\) whose automorphism group equals \(G\). Let \(E_{c}(\Sigma_{i}(G))\) denote the Euler characteristic of \(\Sigma_{i}(G)\). Say that \(g\in G\) has eigenvalues \(\xi_{1}(g)\), \(\xi_{2}(g)\), \(\xi_{3}(g)\) and \(\xi_{4}(g)\) when acting on \(H^{1}(C,\mathbb{C})\) of a curve \(C\in\Sigma_{i}(G)\). Say furthermore that the induced action of \(g\in G\) on the six Weierstrass points of a curve \(C\in\Sigma_{i}(G)\) has \(\mu_{j}\) cycles of length \(j\) for \(j=1,\dots,6\), giving a partition \(\mu(g,G,i)\). Note that this data will be constant on the strata, i.e. independent of the choice of \(C\in\Sigma_{i}(G)\). On a strata \(\Sigma_{i}(G)\) the Euler characteristic \(E_{c}(\Sigma_{i}(G),V_{\lambda})=E_{c}(\Sigma_{i}(G))\cdot\dim V_{\lambda}^{G}\) and hence \[E_{c}(A_{2},V_{\lambda})=\sum_{i=1}^{2}\sum_{G}\frac{E_{c}\Big{(}\Sigma_{i}(G) \Big{)}}{|G|}\sum_{g\in G}s_{<\lambda>}\big{(}\xi_{1}(g),\xi_{2}(g),\xi_{3}(g ),\xi_{4}(g)\big{)}\in\mathbb{Z}. \tag{8}\] This method was used in [15] to find a formula for \(E_{c}(M_{2},V_{\lambda})\) for any \(\lambda\). Adding the level two structure we need to take the action of \(\mathfrak{S}_{6}\) on the Weierstrass points into account and one finds that, \[E_{c}(A_{2}[2],V_{\lambda})=\sum_{i=1}^{2}\sum_{G}\frac{E_{c}(\Sigma_{i}(G))}{|G |}\sum_{g\in G}s_{<\lambda>}\big{(}\xi_{1}(g),\xi_{2}(g),\xi_{3}(g),\xi_{4}(g) \big{)}p_{\mu(g,G,i)}\in\mathbb{Z}[\mathfrak{S}_{6}]. \tag{9}\] This formula can be compared to the one in [7, Section 9]. In the two following sections, we will describe how to find the necessary information to compute (9) for any \(\lambda\). ### Smooth curves of genus two The stratification by automorphism group \(G\) for \(M_{2}\) was found by Bolza [9], see below. We follow the description in [15, Section 4]. 
Curves \(C\) of genus \(2\) are described by equations \(C_{f}:y^{2}-f(x)=0\), where \(f\) square-free polynomial of degree \(5\) or \(6\). The automorphism group \(G_{f}\) of a curve \(C_{f}\) is equal to the subgroup of \(\operatorname{SL}(2,\mathbb{C})\times\mathbb{C}^{\times}\) consisting of elements \[(\gamma,u)=\Big{(}\begin{pmatrix}a&b\\ c&d\end{pmatrix},u\Big{)}\in\operatorname{SL}(2,\mathbb{C})\times\mathbb{C}^{ \times}\quad\text{such that}\quad f(x)=(\gamma,u)\cdot f(x)=\frac{(cx+d)^{6}}{u^ {2}}f\Big{(}\frac{ax+b}{cx+d}\Big{)}\] quotiented by the subgroup generated by the element \((-\mathrm{id},-1)\in\operatorname{SL}(2,\mathbb{C})\times\mathbb{C}^{\times}\). These groups will be given as pairs \((\Gamma_{f},\rho_{f})\), where \(\Gamma_{f}\) is a subgroup of \(\operatorname{SL}(2,\mathbb{C})\) that preserves the set of roots of \(f\) and \(\rho_{f}\) is a character of \(\Gamma_{f}\) such that \(G_{f}\cong\Gamma_{f}(\rho_{f})/<(-\mathrm{id},-1)>\) where \[\Gamma_{f}(\rho_{f})=\{(\gamma,u)\in\operatorname{SL}(2,\mathbb{C})\times \mathbb{C}^{\times}:u^{2}=\rho_{f}(\gamma)\}.\] There is an isomorphism \(H^{1}(C_{f},\mathbb{C})\cong H^{0}(C_{f},\Omega)\oplus H^{0}(C_{f},\Omega)^{\vee}\) and \(H^{0}(C_{f},\Omega)\) has a basis consisting of the differentials \(\omega_{0}=dx/y\), \(\omega_{1}=xdx/y\). The action of \((\gamma,u)\in\Gamma_{f}(\rho_{f})\) on the basis \((\omega_{0},\omega_{1})\) equals \[(\gamma,u)(\omega_{0},\omega_{1})=(u^{-1}(c\omega_{1}+d\omega_{0}),u^{-1}(a \omega_{1}+b\omega_{0})),\] see [15, Proposition 2]. This tells us that if \(\lambda_{\gamma}\) is an eigenvalues of \(\gamma\in\Gamma_{f}\) then \(\xi_{1}=\lambda_{\gamma}u^{-1}\), \(\xi_{2}=\lambda_{\gamma}^{-1}u^{-1}\), \(\xi_{3}=\lambda_{\gamma}^{-1}u\) and \(\xi_{4}=\lambda_{\gamma}u\), are the eigenvalues of \((\gamma,u)\in G_{f}\) acting on \(H^{1}(C_{f},\mathbb{C})\). Finally, we need to determine the action of every \(\gamma\in\Gamma_{f}\) on the roots of \(f\), together with the point at infinity in the case that the degree of \(f\) equals five. We will choose an ordering of the roots of \(f\) (and possibly infinity) and denote the induced permutation by \(\sigma_{\gamma}(f)\). There are seven strata for \(M_{2}[2]\) corresponding to the different automorphism groups \((\Gamma,\rho)\): \((C_{2},\mathrm{id})\), \((C_{4},\chi^{2})\), \((Q_{8},\chi_{0})\), \((Q_{12},\chi_{0})\), \((O,\chi)\), \((Q_{24},\chi_{+})\) and \((C_{10},\chi^{6})\). Here \(C_{n}\) denotes the cyclic group with \(n\) elements, \(Q_{4n}\) the quaternionic group with \(4n\) elements and \(O\) is the binary octahedral group with \(48\) elements. The characters are defined as in [15, pp. 124-125]. The groups \(\Gamma\subset\operatorname{SL}(2,\mathbb{C})\) can be generated by one element \(S\in\operatorname{SL}(2,\mathbb{C})\) in the abelian case, and two elements \(S\) and \(U=\big{(}\begin{smallmatrix}0\\ -1\end{smallmatrix}\big{)}\) in the non-abelian case. Put \(\epsilon_{n}=\mathrm{e}^{2\pi i/n}\). For further descriptions of these groups and characters, together with the computation of the Euler characteristics of the different strata, we refer to [15, Section 2]. The information in the following table can be gotten from straightforward computations. 
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \((\Gamma,\rho)\) & \(f\in\Sigma(G)\) & \(E_{c}\) & \(S\) & \(\rho(S)\) & \(\sigma_{S}\) & \(\rho(U)\) & \(\sigma_{U}\) \\ \hline \((C_{2},\mathrm{id})\) & & \(-1\) & \(\mathrm{diag}(-1,-1)\) & \(1\) & \(\mathrm{id}\) & & \\ \((C_{4},\chi^{2})\) & \(x^{6}+\alpha x^{4}+\beta x^{2}+1\) & \(3\) & \(\mathrm{diag}(\epsilon_{4},\epsilon_{4}^{-1})\) & \(-1\) & \((12)(34)(56)\) & & \\ \((Q_{8},\chi_{0})\) & \(x(x^{4}+\alpha x^{2}+1)\) & \(-2\) & \(\mathrm{diag}(\epsilon_{4},\epsilon_{4}^{-1})\) & \(1\) & \((23)(45)\) & \(-1\) & \((16)(24)(35)\) \\ \((Q_{12},\chi_{0})\) & \(x^{6}+\alpha x^{3}-1\) & \(-2\) & \(\mathrm{diag}(\epsilon_{6},\epsilon_{6}^{-1})\) & \(1\) & \((123)(456)\) & \(-1\) & \((14)(25)(36)\) \\ \((O,\chi)\) & \(x(x^{4}+1)\) & \(1\) & \(\frac{-1}{\sqrt{2}}\big(\begin{smallmatrix}1&\epsilon_{8}\\ \epsilon_{8}^{3}&1\end{smallmatrix}\big)\) & \(\epsilon_{8}^{4}\) & \((1263)\) & \(-1\) & \((16)(25)(34)\) \\ \((Q_{24},\chi_{+})\) & \(x^{6}+1\) & \(1\) & \(\mathrm{diag}(\epsilon_{12},\epsilon_{12}^{-1})\) & \(-1\) & \((123456)\) & \(1\) & \((13)(46)\) \\ \((C_{10},\chi^{6})\) & \(x(x^{5}-1)\) & \(1\) & \(\mathrm{diag}(\epsilon_{10},\epsilon_{10}^{-1})\) & \(\epsilon_{10}^{6}\) & \((23456)\) & & \\ \hline \end{tabular}

The table above provides sufficient information to compute the contribution of \(\Sigma_{1}(G)\) to (9) for all abelian groups \(G\). For the non-abelian groups \(G\), the information that is missing is an eigenvalue \(\lambda_{\gamma}\) for every \(\gamma\in G\). This problem is solved for the quaternionic groups \(Q_{4n}\) by noting that such a group consists of the matrices \(\pm S^{j}\) and \(\pm US^{j}\) for \(j=1,\ldots,n\), and the latter all have eigenvalues \(\epsilon_{4},-\epsilon_{4}\). Eigenvalues for the elements of the binary octahedral group can be gotten from straightforward computation.
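As a concrete illustration of how the table feeds into (9) (this worked example is ours, not part of the original text), consider the generic stratum \((C_{2},\mathrm{id})\). Here \(G\) consists of the identity, with eigenvalues \((1,1,1,1)\) and trivial action on the Weierstrass points, and the hyperelliptic involution, which acts by \(-1\) on \(H^{1}(C,\mathbb{C})\) and fixes all six Weierstrass points. Its contribution to (9) is therefore \[\frac{-1}{2}\Big(s_{<\lambda>}(1,1,1,1)+s_{<\lambda>}(-1,-1,-1,-1)\Big)\,p_{1}^{6}=\frac{-1}{2}\big(1+(-1)^{l+m}\big)\,s_{<\lambda>}(1,1,1,1)\,p_{1}^{6},\] since \(-\mathrm{id}\) acts on \(V_{l,m}\) by \((-1)^{l+m}\); in particular, the contribution vanishes whenever \(l+m\) is odd.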
### Pairs of elliptic curves

The stratification by automorphism group for \(A_{1}\), the moduli space of elliptic curves, is given by the three groups \(C_{2}\), \(C_{4}\) and \(C_{6}\). The corresponding strata have Euler characteristics \(-1\), \(1\) and \(1\), respectively. The two latter strata are points, which can be represented by the curves \(y^{2}=x(x^{2}-1)\) and \(y^{2}=x^{3}-1\) respectively. The automorphism group is generated by the element \(y\mapsto-y\) for \(C_{2}\), by \(y\mapsto\epsilon_{4}y,x\mapsto-x\) for \(C_{4}\) and by \(y\mapsto-y,x\mapsto\epsilon_{3}x\) for \(C_{6}\). For all elliptic curves of the form \(y^{2}=f(x)\), \(H^{0}(C_{f},\Omega)\) has a basis consisting of the differential \(\omega_{0}=dx/y\). The eigenvalues of the induced action on \(H^{1}(C_{f},\mathbb{C})\) of the generators of the automorphism groups given above then equal \(-1,-1\) for \(C_{2}\), \(\epsilon_{4},-\epsilon_{4}\) for \(C_{4}\) and \(\epsilon_{6},\epsilon_{6}^{-1}\) for \(C_{6}\). The induced action of the generators on the Weierstrass points (after choosing an ordering), which correspond to the roots of \(f(x)\) (together with infinity), equals \(\mathrm{id}\), \((12)\) and \((123)\) respectively. Consider now \(A_{1,1}\cong(A_{1}\times A_{1})/\mathfrak{S}_{2}\), which is the moduli space of unordered pairs of elliptic curves. This has the consequence that a pair of equal (or isomorphic) elliptic curves \(E\times E\) will have an extra automorphism that sends \((p_{1},p_{2})\in E\times E\) to \((p_{2},p_{1})\in E\times E\).

There will therefore be seven possible automorphism groups for \(A_{1,1}\), namely \(C_{2}\times C_{2}\), \(C_{2}\wr\mathfrak{S}_{2}\), \(C_{2}\times C_{4}\), \(C_{2}\times C_{6}\), \(C_{4}\wr\mathfrak{S}_{2}\), \(C_{4}\times C_{6}\) and \(C_{6}\wr\mathfrak{S}_{2}\), where \(\wr\) denotes the wreath product. The Euler characteristics of the corresponding strata are directly found to be \(1,-1,-1,-1,1,1\) and \(1\), respectively. Take any two elliptic curves \(E_{1}\) and \(E_{2}\) with automorphism groups \(G_{1}\) and \(G_{2}\). Since \(H^{1}(E_{1}\times E_{2},\mathbb{C})\cong H^{1}(E_{1},\mathbb{C})\oplus H^{1}(E_{2},\mathbb{C})\), it is straightforward, using the information for \(A_{1}\) above, to compute the action of \(G_{1}\times G_{2}\) if \(E_{1}\) and \(E_{2}\) are not isomorphic, and of \(G_{1}\wr\mathfrak{S}_{2}\) if \(E_{1}\) and \(E_{2}\) are isomorphic. The action of \(G_{1}\times G_{2}\) (and of \(G_{1}\wr\mathfrak{S}_{2}\)) on the six Weierstrass points that are distinct from infinity on both elliptic curves is also straightforward.

## 5. Isotypical decomposition in the vector-valued case

In this section we assume that \(j>0\), so we are dealing with vector-valued Siegel modular forms. As a consequence of [8, Proposition 1], we have \[M_{0,j}(\Gamma[2])=\{0\}\ \text{for any }j>0.\] Theorem A.5 by G. Chenevier in [11] tells us that \[M_{1,j}(\Gamma[2])=S_{1,j}(\Gamma[2])=\{0\}\ \text{for any }j>0.\] For \(k=2\), there is no dimension formula for the space \(M_{2,j}(\Gamma[2])\) in general. A conjectural description of the isotypical decomposition of the space \(S_{2,j}(\Gamma[2])\) is given in [11, Conjecture 1.2]. As this conjecture has only been verified for \(j<12\), we decided not to implement the isotypical decomposition of the space \(M_{2,j}(\Gamma[2])\) in our code. For \(k=3\), the situation is also still conjectural, but with more evidence: only the isotypical decomposition of \(E_{c,\mathrm{Eis}}(A_{2}[2],V_{j,0})\) is still conjectural (see Theorem 5.3 for how this part contributes to the isotypical decomposition of the space \(S_{3,j}(\Gamma[2])\), and then further Remark 5.4), so we decided to implement the isotypical decomposition of the space \(S_{3,j}(\Gamma[2])\) in our code. Evidence towards this conjecture is given, for example, by the results of Petersen in [22] and in Section 6.6 of [5]. We start by recalling the dimension formula for the space \(M_{k,j}(\Gamma[2])\), which can also be used to check its conjectural isotypical decomposition for \(k=3\).
**Theorem 5.1** ([29, Theorems 2 and 3] and [12, Theorem 12.1]).: _For \(k\geqslant 3\) odd and \(j\geqslant 2\) even we have_ \[\dim M_{k,j}(\Gamma[2])=\dim S_{k,j}(\Gamma[2])=\frac{1}{24}\big(2(j+1)k^{3}+3(j^{2}-2j-8)k^{2}+(j^{3}-9j^{2}-42j+118)\,k\\ -2j^{3}-9j^{2}+152j-216\big).\] _For \(k\geqslant 4\) even and \(j\geqslant 2\) even we have_ \[\dim M_{k,j}(\Gamma[2])=\frac{1}{24}\big(2(j+1)k^{3}+3(j^{2}-2j+2)k^{2}+(j^{3}-9j^{2}-12j+28)k-2j^{3}-9j^{2}+182j-336\big).\]

**Remark 5.2**.: For \(k=3\), this formula takes a rather pretty form: \[\dim S_{3,j}(\Gamma[2])=(j-2)(j-3)(j-4)/24.\]

### Isotypical decomposition of \(M_{k,j}(\Gamma[2])\)

For \(k\geqslant 0\), the results of Rosner (see [23, Section 5]) and those of [12, Section 13] tell us that \[M_{k,j}(\Gamma[2])=E_{k,j}(\Gamma[2])\oplus S_{k,j}(\Gamma[2])=E_{k,j}^{(\mathbf{Q})}(\Gamma[2])\oplus S_{k,j}^{(\mathbf{Y})}(\Gamma[2])\oplus S_{k,j}^{(\mathbf{G})}(\Gamma[2]).\] Theorem 5.13 and Remark 5.14 in [23], which prove Conjecture 6.4 of [4], give us the isotypical decomposition of the space \(S_{k,j}^{(\mathbf{Y})}(\Gamma[2])\) for \(k\geqslant 3\) and \(j>0\): \[\dim_{\mathfrak{S}_{6}}S_{k,j}^{(\mathbf{Y})}(\Gamma[2])=\mu_{1}\,s[2^{3}]+\mu_{2}\,s[2,1^{4}]+\mu_{3}\,s[1^{6}],\quad\text{with}\] \[\mu_{1}=d_{2,j+2k-2}^{+}d_{2,j+2}^{+}+d_{2,j+2k-2}^{-}d_{2,j+2}^{-},\qquad\mu_{2}=d_{4,j+2k-2}d_{4,j+2}^{-},\qquad\mu_{3}=d_{2,j+2k-2}^{+}d_{2,j+2}^{-}+d_{2,j+2k-2}^{-}d_{2,j+2}^{+},\] where the integers \(d_{N,k}\) and \(d_{N,k}^{\pm}\) are defined as in (4). Proposition 13.1 in [12] gives us \(E_{k,j}^{(\mathbf{Q})}(\Gamma[2])=0\) for \(k\) odd. For \(k\geqslant 2\), this proposition (note that there is a typo in [12]) tells us that \[\dim_{\mathfrak{S}_{6}}E_{2k,j}^{(\mathbf{Q})}(\Gamma[2])=\mathrm{Ind}_{H}^{\mathfrak{S}_{6}}\left(\mathrm{Sym}^{j/2+k}\big(s[2,1]\big)-s[3]-s[2,1]\right)=\mathrm{Ind}_{H}^{\mathfrak{S}_{6}}\left(S_{2k+j}(\Gamma(2))\right)\] \[=d_{1,2k+j}(s[6]+s[5,1])+(2d_{1,2k+j}+d_{2,2k+j})s[4,2]+(d_{1,2k+j}+d_{2,2k+j})(s[3,2,1]+s[2^{3}])+d_{4,2k+j}(s[3,1^{3}]+s[2,1^{4}]),\] where the last identity follows from Section 3.2.1. So to get the isotypical decomposition of the space \(M_{k,j}(\Gamma[2])\), it remains to determine it for the space \(S_{k,j}^{(\mathbf{G})}(\Gamma[2])\). This is done in the next section.
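Before doing so, here is a quick numerical sanity check of Theorem 5.1 (the script below is ours and not part of the original text; the function name is arbitrary). It evaluates both dimension formulas and verifies that the odd-weight case collapses to Remark 5.2 at \(k=3\):

```python
def dim_M_kj_level2(k, j):
    """Dimension of M_{k,j}(Gamma[2]) from Theorem 5.1 (k >= 3 odd or
    k >= 4 even, j >= 2 even); both cubics are exactly divisible by 24."""
    if k % 2 == 1:  # odd weight: M_{k,j} = S_{k,j}
        return (2 * (j + 1) * k**3 + 3 * (j**2 - 2 * j - 8) * k**2
                + (j**3 - 9 * j**2 - 42 * j + 118) * k
                - 2 * j**3 - 9 * j**2 + 152 * j - 216) // 24
    return (2 * (j + 1) * k**3 + 3 * (j**2 - 2 * j + 2) * k**2
            + (j**3 - 9 * j**2 - 12 * j + 28) * k
            - 2 * j**3 - 9 * j**2 + 182 * j - 336) // 24

# Remark 5.2: for k = 3 the formula collapses to (j-2)(j-3)(j-4)/24.
for j in range(2, 100, 2):
    assert dim_M_kj_level2(3, j) == (j - 2) * (j - 3) * (j - 4) // 24
```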
### An isotypical dimension formula for \(S_{k,j}^{(\mathbf{G})}(\Gamma[2])\)

First we introduce some notation from [4]. Let \[\begin{array}{ll}A=s[6]\oplus s[5,1]\oplus s[4,2],&A^{\prime}=s[6]\oplus s[4,2]\oplus s[2^{3}],\\ B=s[4,2]\oplus s[3,2,1]\oplus s[2^{3}],&B^{\prime}=s[5,1]\oplus s[4,2]\oplus s[3,2,1],\\ C=s[3,1^{3}]\oplus s[2,1^{4}],&C^{\prime}=s[4,1^{2}]\oplus s[3^{2}].\end{array}\] For any \(l,m\), with \(l>m>0\), put \(n=l+m+4\), \(n^{\prime}=l-m+2\) and define \[E_{c,\mathrm{Eis}}(A_{2}[2],V_{l,m})=(d_{1,n^{\prime}}-d_{1,n})\,(A^{\prime}+B^{\prime})+(d_{2,n^{\prime}}-d_{2,n})\,B^{\prime}+(d_{4,n^{\prime}}-d_{4,n})\,C^{\prime}+\frac{1}{2}\big(1+(-1)^{m}\big)\,(A+B)\\ +2\big((d_{1,m+2}-d_{1,l+3})\,(A+B)+(d_{2,m+2}-d_{2,l+3})\,B+(d_{4,m+2}-d_{4,l+3})\,C\big) \tag{10}\] and \[E_{c,\mathrm{endo}}(A_{2}[2],V_{l,m})=-2\Big(d_{4,n^{\prime}}\,\big(d_{4,n}\,s[3,1^{3}]+d_{1,n}\,s[3^{2}]+(d_{1,n}+d_{2,n})\,s[4,1^{2}]\big)\\ +d_{2,n^{\prime}}\,\big((d_{1,n}+d_{2,n})\,s[3,2,1]+d_{4,n}\,s[4,1^{2}]+d_{1,n}\,s[4,2]+d_{1,n}\,s[5,1]\big)\\ +d_{2,n^{\prime}}^{+}\,\big(d_{2,n}^{+}\,s[4,2]+d_{2,n}^{-}\,s[5,1]\big)+d_{2,n^{\prime}}^{-}\,\big(d_{2,n}^{-}\,s[4,2]+d_{2,n}^{+}\,s[5,1]\big)\\ +d_{1,n^{\prime}}\,\big(d_{1,n}\,(A^{\prime}+B^{\prime})+d_{2,n}\,B^{\prime}+d_{4,n}\,C^{\prime}\big)\Big)\] as elements of the representation ring \(\mathbb{Z}[\mathfrak{S}_{6}]\).

**Theorem 5.3**.: _For any \(k\geqslant 4\) and \(j>0\), put \(l=j+k-3\) and \(m=k-3\). Then_ \[\dim_{\mathfrak{S}_{6}}S_{k,j}^{(\mathbf{G})}(\Gamma[2])=-\frac{1}{4}\Big(E_{c}(A_{2}[2],V_{l,m})-E_{c,\mathrm{Eis}}(A_{2}[2],V_{l,m})-E_{c,\mathrm{endo}}(A_{2}[2],V_{l,m})+2\dim_{\mathfrak{S}_{6}}S_{k,j}^{(\mathbf{Y})}(\Gamma[2])\Big).\]

Proof.: In [4], the compactly supported \(\ell\)-adic Euler characteristic of local systems \(\mathbb{V}_{l,m}\) on \(\mathcal{A}_{2}[2]\), taking values in the Grothendieck group of (absolute) Galois representations, is decomposed into the following pieces: \[e_{c}(\mathcal{A}_{2}[2],\mathbb{V}_{l,m})=e_{c,\mathrm{Eis}}(\mathcal{A}_{2}[2],\mathbb{V}_{l,m})+e_{c,\mathrm{endo}}(\mathcal{A}_{2}[2],\mathbb{V}_{l,m})-S[l-m,m+3,\Gamma[2]].\] The formula for \(E_{c,\mathrm{Eis}}(A_{2}[2],V_{l,m})\) (respectively \(E_{c,\mathrm{endo}}(A_{2}[2],V_{l,m})\)) is found by taking dimensions in the formula for \(e_{c,\mathrm{Eis}}(\mathcal{A}_{2}[2],\mathbb{V}_{l,m})\) (respectively \(e_{c,\mathrm{endo}}(\mathcal{A}_{2}[2],\mathbb{V}_{l,m})\)) in [4, Theorem 4.4] (respectively [4, Conjecture 7.1]). The representation \(S[l-m,m+3,\Gamma[2]]\) should conjecturally consist of 2-dimensional pieces for each Hecke eigenvector in \(S^{(\mathbf{Y})}_{k,j}(\Gamma[2])\), with isotypic decomposition given in [4, Conjecture 6.4], and 4-dimensional pieces for each Hecke eigenvector in \(S^{(\mathbf{G})}_{k,j}(\Gamma[2])\). The conjectural description in [4] described above has been proven in [23]: Conjectures 7.1 and 6.4 of [4] are proved by Theorem 5.13 of [23], see Remark 5.14 of [23]. The result then follows from [23, Corollary 5.20].

**Remark 5.4**.: In [4], directly after Theorem 4.4, it is conjectured that \(E_{c,\mathrm{Eis}}(A_{2}[2],V_{l,0})\) for any \(l>0\) is given by (10), with the difference that one needs to put \(d_{1,2}=-1\). If we assume this conjecture to be true, and we define \(E_{c,\mathrm{endo}}(A_{2}[2],V_{l,0})\) for any \(l>0\) using the formula above, then Theorem 5.3 also holds for \(k=3\) and \(j>0\) using the same proof (and the same results of [23]).
## Acknowledgement The second author was supported by the Simons Foundation Award 546235 at the Institute for Computational and Experimental Research in Mathematics at Brown University. We thank Eran Assaf and Gerard van der Geer for useful discussions.
2309.03381
Active shooter detection and robust tracking utilizing supplemental synthetic data
The increasing concern surrounding gun violence in the United States has led to a focus on developing systems to improve public safety. One approach to developing such a system is to detect and track shooters, which would help prevent or mitigate the impact of violent incidents. In this paper, we propose detecting shooters as a whole, rather than just guns, which would allow for improved tracking robustness, as obscuring the gun would no longer cause the system to lose sight of the threat. However, publicly available data on shooters is much more limited and challenging to create than a gun dataset alone. Therefore, we explore the use of domain randomization and transfer learning to improve the effectiveness of training with synthetic data obtained from Unreal Engine environments. This enables the model to be trained on a wider range of data, increasing its ability to generalize to different situations. Using these techniques with YOLOv8 and Deep OC-SORT, we implemented an initial version of a shooter tracking system capable of running on edge hardware, including both a Raspberry Pi and a Jetson Nano.
Joshua R. Waite, Jiale Feng, Riley Tavassoli, Laura Harris, Sin Yong Tan, Subhadeep Chakraborty, Soumik Sarkar
2023-09-06T21:58:58Z
http://arxiv.org/abs/2309.03381v1
# Active shooter detection and robust tracking utilizing supplemental synthetic data

###### Abstract

The increasing concern surrounding gun violence in the United States has led to a focus on developing systems to improve public safety. One approach to developing such a system is to detect and track shooters, which would help prevent or mitigate the impact of violent incidents. In this paper, we propose detecting shooters as a whole, rather than just guns, which would allow for improved tracking robustness, as obscuring the gun would no longer cause the system to lose sight of the threat. However, publicly available data on shooters is much more limited and challenging to create than a gun dataset alone. Therefore, we explore the use of domain randomization and transfer learning to improve the effectiveness of training with synthetic data obtained from Unreal Engine environments. This enables the model to be trained on a wider range of data, increasing its ability to generalize to different situations. Using these techniques with YOLOv8 and Deep OC-SORT, we implemented an initial version of a shooter tracking system capable of running on edge hardware, including both a Raspberry Pi and a Jetson Nano.

## Introduction

In 2005, the Federal Bureau of Investigation (FBI) and leading criminologists defined a 'mass shooting' as an attack in a public place where four or more victims were killed. Using this definition, there have been at least 149 public mass shootings across the United States since 1982 [1]. While mass shootings are relatively rare, the impact they can have on a community, especially in the case of a school shooting, is more than enough reason to strive for improving public safety [2]. However, the exact approach to improving school safety is heavily debated, often over concerns of invasion of privacy or degrading the quality of the student learning environment. On top of that, some of the common approaches, including metal detectors, armed school resource officers, and backpack searches, have shown varying effectiveness in different schools [3]. A possible explanation for this is differences in implementations and resource availability. School resource officers can vary significantly from one school to another, sometimes serving a purely disciplinary role and other times serving a more supportive role [4]. Besides the common approaches mentioned above, there are also recent works that utilize advanced technology in detecting shooters. One approach to shooter detection is using acoustic sensors in gunshot detection technology [5, 6]. This type of system would simply alert law enforcement agencies when a gunshot is detected. However, this approach may have limitations in distinguishing between gunshots and other loud noises, such as fireworks or car backfires, and in accurately capturing the exact location of a shooter moving or shooting from a distance. Without visual information, this kind of shooter detection cannot provide details such as the number of shooters or their targets. Additionally, it requires the installation of additional specialized devices, which can be a disadvantage. Our approach aims to minimize invasion of privacy as it would unobtrusively leverage video from existing security cameras. Additionally, our approach would automatically detect threats instead of relying on security personnel to monitor a surveillance system. As a result, the time taken to relay information about the shooter would be reduced.
This can improve the effectiveness of law enforcement responding to the scene and be used to evacuate civilians when it is safe to move from cover. While schools are often the most discussed setting for this topic, our system could also be used in any public space where there is a risk of a shooting, such as hospitals, shopping malls, and airports. While there is existing work and datasets focusing on the detection of guns or other weapons [7, 8, 9, 10, 11, 12, 13, 14, 15, 16], the tracking of shooters has not been extensively explored. Detecting the entire shooter has the potential to improve tracking robustness, as their location would not be as easily lost if the vision of the gun is obstructed. The challenge with this approach is that gathering good-quality data on shooters has proven time-consuming and difficult. Most publicly available data comprises poor-quality surveillance videos, often split into short, discontinuous clips unsuitable for training. Additionally, places where such a system would reasonably be implemented would likely already have good-quality security cameras. Thus, the data used to train should be of at least a reasonable baseline quality to avoid making the already difficult task of detecting guns more challenging than it needs to be. Another common type of video source used for this task is movies; however, the camera perspectives in movies are rarely similar to those that a security camera would capture. This is important to note because the appearance of a gun can change significantly, especially in the case of handguns, where they can appear to be nothing but a rectangle, similar to a smartphone. There are also privacy concerns with conducting experiments to record videos for the dataset, both for the individuals participating and the public buildings that would be used to simulate a shooting event. As a result, we have explored the use of synthetic data generated with Unreal Engine to supplement the limited availability of real data. However, a well-known limitation of model training with synthetic data (even with semi-realistic textures) is that the model does not directly transfer well to inference on real data. To improve the efficacy of training with synthetic data, we utilize domain randomization, which is a domain adaptation technique that aims to generalize a model through training with highly variable synthetic data [17, 18]. Transfer learning [19] also allows us to use various combinations of textured images of the Unreal Engine environment, masked images with random colors, and real data by sequentially training the detection model. We also augment the textured synthetic data with camera sensor effects to further help bridge the gap between synthetic and real data [20]. These effects include noise, blur, chromatic aberration, exposure, and color shift, which are randomly applied with varying strengths. An overview of our system can be seen in Fig. 1. It first shows the three types of data, real, textured synthetic, and masked synthetic, that comprise our shooter dataset. Various amounts of each type of data, the specifics of which are discussed later, are used sequentially to train YOLOv8n. The best performing YOLOv8n model is used with Deep Observation-Centric (OC)-SORT to track shooters in sources such as security videos, which can then be used to enable more informed law enforcement responses.
Figure 1: An overview of the entire system. We train YOLOv8n using a combination of synthetic and real data. The best model is used for inference with Deep OC-SORT tracking to localize a shooter, enabling a faster and more informed response for law enforcement.

In terms of deployment, the use of edge devices, or small, relatively inexpensive computers, can also increase privacy by only transmitting necessary information, such as whether a threat is detected or not (binary), rather than the entire camera frames/images. On top of that, this also decreases the network bandwidth used for the system. Additionally, decentralized systems like this are generally more scalable and robust. While passing all video frames to a central server to be processed would be functionally the same, it would be more complicated to expand in the future, and the failure of the server would bring the whole system down. For example, if a building wanted to add additional cameras after the initial installation, they may be required to upgrade their entire server to handle the increased computational demand. On the other hand, since the edge devices can handle the computations in this system, they would only need additional devices to pair with the new cameras, providing a more straightforward cost estimation for expanding the system. For the task of detecting and tracking shooters in public places, we make the following contributions:

* The creation of a publicly available dataset (synthetic and real) with annotations for gun and shooter classes, which will allow for further exploration of the detection and tracking of shooters.
* Development of a robust tracking system utilizing gun detection-based shooter confirmation to reduce false positives while being more likely to keep track of a threat through occlusions.
* Evaluation of implementing the proposed system on edge hardware, such as a Jetson Nano, and the considerations required.

## Results

Our results are broken into four primary subsections. We first present detection performance when You Only Look Once v8 nano (YOLOv8n) models are trained with varying combinations of real, textured synthetic, and masked synthetic data. Next, we present the tracking performance using Deep OC-SORT with Omni-Scale Network (OSNet) Re-Identification (ReID), with and without gun confirmation for shooter IDs. We also analyze the system-level performance in a more realistic context rather than just using standard metrics. Lastly, we report the system's performance on edge devices such as a Jetson Nano and Raspberry Pi 4.

### Detection with YOLOv8n

After obtaining the models by fine-tuning the pretrained YOLOv8n model on 71 different data combinations, we evaluate the performance by testing them on a dataset of 100 real images. We set the batch size to 1, the object confidence threshold for detection to 0.001, and the IoU threshold to 0.5 for the testing. The result of the shooter class detection is shown in Fig. 2, and the result of combined shooter and gun detection is shown in Supplementary Fig. S1. We can see that by combining with Unreal Engine 4 (UE4) data and with the help of augmentation, the performance is improved compared to only using Unreal Engine 5 (UE5) data for training. Precision (P), recall (R), and mean average precision (mAP) results for the shooter and gun classes can be found in Table 1 and Supplementary Table S1, respectively.

### Tracking with Deep OC-SORT and OSNet ReID

Evaluating tracking performance is much more tedious for custom data than detection performance.
Rather than just frames, entire videos must be annotated frame by frame with consistent IDs throughout. Our preliminary tracking evaluation is limited to six real videos we annotated. Rather than only including videos with at least one shooter in them, we also included a video of a busy mall to challenge the false positive rate of the models. All 71 detection models were tried with individually varying confidence thresholds for the gun and shooter classes. Additionally, we ran each configuration for both tracking a shooter alone and tracking a shooter with our gun-based confirmation. The overall best-performing data combination combines UE4 and UE5 data with augmented textured synthetic data, shown in Fig. 3. Results for the other data types can be seen in Supplementary Fig. S2. The tracking results for the _AugCTextured_CMasked_Real_S_ model, with a gun confidence threshold of 0.8 and a shooter confidence threshold of 0.6, can be seen in Table 2. This table includes many of the standard multi-object tracking metrics, such as ID F1 score (IDF1), precision (IDP), and recall (IDR), regular recall (R) and precision (P), true positives (TP), false positives (FP), false negatives (FN), ID switches (IDSW), and multi-object tracking accuracy (MOTA). Metrics with an ID prefix correspond to the performance of maintaining the correct IDs.

### System-level performance

Standard performance metrics alone do not comprehensively measure the system's performance in real-world use. To account for this, we also consider varying windows of frames around the ground truth within which a bounding box would be considered a match. The intuition behind this is that while a constant track may not be achievable, consistent updates can provide valuable information. The number of frames in the window varies from 1 to 60, where the videos are 30 or 50 frames per second.

### Edge-device performance

For this system to be useful, it must be able to run at a high enough FPS to capture quick movement through a sometimes relatively narrow field of view, such as running across a hallway. We measured the time required for each component's computation on a Raspberry Pi 4 (RPi4) with 4GB of RAM and a Jetson Nano. The inference times for YOLOv8n and Deep OC-SORT are tabulated in Table 3. For both of these devices, a consumer \(1920\times 1080\) webcam was used as the video source, which was then padded and resized to \(640\times 640\) before being processed by YOLOv8 and Deep OC-SORT.

## Discussion

After training 71 YOLOv8n models with the various data combinations, a few trends are noticeable. First of all, gun detection performance is inferior compared to shooter detection performance. There are a few potential causes, such as the task simply being more difficult than detecting shooters due to the smaller size of guns. It may also be due to insufficiently strict filtering of the synthetic data using a bounding box size threshold. Removing some of the very small instances of gun labels in the training set may allow the models to learn more effectively. Another approach would be to merge our data with an existing gun dataset. Merging an existing gun dataset into our model training would provide greater data variability and potentially improve the detection performance and robustness of the model in detecting different types of guns. Another trend is that training with some synthetic followed by real data consistently performs better than training with only real data.
For example, by comparing the performance of _Real_M_ with _Textured_Masked_Real_M_, we can see that the mAP increased by 16.3%.

Figure 2: The detection testing results (PR curves) of different YOLOv8n models trained on different data combinations for the shooter class.

It's difficult to determine which model is best for our application based only on the detection results, so we also tested all 71 models with varying confidence thresholds with tracking on six videos, one of which is of a busy mall with no shooters. We included the video of the mall to challenge the false positive rate of the models.

\begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline **Combination** & \multicolumn{3}{c}{**UE5**} & \multicolumn{3}{c}{**UE4 \& UE5**} & \multicolumn{3}{c}{**UE5 Augmented**} & \multicolumn{3}{c}{**UE4 \& UE5 Augmented**} \\ \hline **ID** & **precision** & **recall** & **mAP50** & **precision** & **recall** & **mAP50** & **precision** & **recall** & **mAP50** & **precision** & **recall** & **mAP50** \\ \hline [MISSING_PAGE_POST] ** & 0.389 & 0.6 & 0.443 & 0.468 & 0.2 & 0.265 & 0.513 & 0.27 & 0.331 & 0.55 & 0.49 & **0.522** \\ \hline \hline \end{tabular} \end{table} Table 1: YOLOv8n testing results for the shooter class. Combination ID represents the order of training. The maximum amount of synthetic data was used for all sequential training scenarios, and S, M, and L amounts of real data correspond to 100, 300, and 500 images, respectively. For example, Textured_Masked_Real_M is first trained on textured synthetic images, followed by masked synthetic images, and finished with 300 real images. The column labels represent the type of data used, signifying if UE5 or both UE4 and UE5 data is used and whether the textured synthetic data is augmented with camera sensor effects.

Figure 3: The testing results (PR curves) for (a) tracking with only shooter detections and (b) tracking with shooter detections confirmed with a gun detection. Confidence thresholds for the gun and shooter detections were varied individually from 0.1 to 0.9. Only combined augmented models are shown here; the other data combinations are shown in Supplementary Fig. S2.

Tracking also allows the use of our gun-based shooter confirmation system, which is discussed in more detail later. As expected, this system is very dependent on the quality of gun detection, which improves the performance of some models but collapses the performance of others. The overall best data combination for tracking is augmented UE4+UE5 and real data, with _AugCTextured_CMasked_Real_S_ being the best model for both tracking with only the shooter and tracking with gun-based confirmation. Interestingly, a model trained with only 100 real samples, rather than 300 or 500, is the best performing. This may be due to the method of training where different data types are used sequentially, which could imply that the synthetic data better generalizes the models compared to training with a larger amount of the limited real data. The tracking results for the other data combinations can also be seen in Fig. 3; however, these models seem particularly impacted by low gun detection performance. Regardless, some other trends are apparent. Training with UE5 and Real data performs relatively well, but adding UE4 data or augmenting UE5 data slightly decreases performance. However, adding _augmented_ UE4 data and _augmented_ UE5 improves performance. The obvious thing to try would be to add augmented UE4 data to unaugmented UE5 data.
Lastly, the current system seems prone to ID switches and has difficulty maintaining a constant track. Regardless, tracking in its current form still reduces false positives, thus increasing robustness. Another benefit of tracking rather than just detection is that it allows the further processing of a window of frames. This helps give a better sense of the real-world performance of the system, where consistent, but not necessarily constant, updates can provide valuable information about a shooting event in real time. Higher FPS and resolution for a security system would always be ideal, but in practice, the additional cost of just the higher-quality cameras, not to mention the more powerful computing hardware that would be required, would make such a system hard to adopt. That being said, anecdotally, it seems that around 4 FPS or higher is sufficient to adequately capture the speeds at which a person can move through the view of a security camera, although more thorough testing would need to be done to verify this. Another consideration for tracking at lower FPS is the approximations used to predict motion. The error associated with assuming constant velocity will increase as the time between frames increases.

Figure 4: System-level performance (precision, recall, and F1 score) of the AugCTextured_CMasked_Real_S model with a varying window of frames to consider for bounding box matches and with and without gun-based shooter confirmation.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{**Device**} & \multicolumn{3}{c|}{**Computation Time per Image (ms)**} & \multirow{2}{*}{**Total (FPS)**} \\ \cline{2-4} & **Detection** & **Tracking** & **Total** & \\ \hline **Raspberry Pi 4** & 650 & 250 & 900 & 1.11 \\ \hline **Jetson Nano** & 125 & 55 & 180 & 5.56 \\ \hline \end{tabular} \end{table} Table 3: The inference speed results for running detection and tracking on the selected edge devices.

## Methods

Our implementation of shooter tracking consists of three primary stages: (i) synthetic data generation, (ii) training YOLOv8, and (iii) tracking with Deep OC-SORT using OSNet ReID with gun detection-based shooter confirmation.

### Synthetic data generation

We generate synthetic data and perform domain randomization using Unreal Engine 4 and Unreal Engine 5 [21] environments. The individual aspects of synthetic data generation will be discussed further in the subsections below.

#### Unreal Engine environment

Unreal Engine 4 was used to simulate an active shooter's movements and those of evacuees; the shooter held an assault rifle to visually differentiate them from the evacuees. The shooter did not fire their weapon during the simulation. Three sections of a hospital building were used: an open room, a hallway, and a staircase. Nodes were designated to correspond to specific locations inside the simulated environment, such as a junction point or an endpoint. Sample images from the environment can be seen in Fig. 5. All actors, i.e., the shooter and evacuees, were designed to move from one node to another. Evacuees were programmed to reach the nearest exit, and the shooter was programmed to reach a target node. The movement of the actors was facilitated through a navigation mesh, and the shooter was a dynamic obstacle on the mesh, so the evacuees tried to avoid the shooter. Cameras were placed strategically in the building to observe the shooter and evacuees.
With proper camera placements, we could capture various movement interactions between the actors and the camera, such as the actors moving toward the camera, away from the camera, and perpendicular to the camera. The UnrealCV plugin [22] was used to allow a Python script to modify the Unreal Engine 4 environment. This plugin enables us to modify the simulation settings, such as the position (location and rotation) and color of objects, along with the camera position and view type. There are two view types that we used in the simulation: the _image_ and _object mask_ view types, which represent the default textures and solid-colored segmentation masks, respectively. These capabilities allow us to randomize scenes with a shooter, evacuees, and multiple camera locations. On top of that, the images from both the textured and masked views can be saved for further processing. We also used Unreal Engine 5 to simulate an active shooter with civilians in various higher-fidelity environments. These environments consist of a school, a supermarket, and a bank; we recorded data from three locations in both the school and the supermarket and four locations in the bank. The shooter can hold either a handgun or a rifle to increase the variety in the data. We also vary the number of civilians to have low- and high-density scenes. The civilian models are randomly generated from a pool of assets with adjustable parameters. Rather than limiting the possible character positions by scripting a realistic shooting scenario, we choose to have the civilians move in essentially random paths along with randomly placing the shooter. This creates more challenging scenarios where the shooter is partially occluded. However, this data is only useful for training detection since there is no continuity between frames. Sample images from the environments can be seen in Fig. 5.
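To give a sense of how this scripted control works in practice, below is a minimal sketch of a capture loop (ours, not from the original text); the command strings follow UnrealCV's documented vget/vset request interface, while the coordinate ranges and file names are placeholders.

```python
import random
from unrealcv import client

client.connect()  # attach to the running UE4 instance

for frame_id in range(1000):
    # Randomize the camera pose within a small valid region (placeholder ranges).
    x, y, z = random.uniform(0, 200), random.uniform(0, 200), random.uniform(150, 250)
    pitch, yaw = random.uniform(-40, -10), random.uniform(0, 360)
    client.request(f'vset /camera/0/location {x} {y} {z}')
    client.request(f'vset /camera/0/rotation {pitch} {yaw} 0')

    # Save the textured view and the solid-colored segmentation view.
    client.request(f'vget /camera/0/lit ts_{frame_id:05d}.png')
    client.request(f'vget /camera/0/object_mask ms_{frame_id:05d}.png')

    # Re-color every object at random (used for the domain-randomized
    # masked images described in the next subsection).
    for name in client.request('vget /objects').split():
        r, g, b = [random.randint(0, 255) for _ in range(3)]
        client.request(f'vset /object/{name}/color {r} {g} {b}')
```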
#### Domain randomization

While synthetic data is an amazing tool for deep learning, some care needs to be taken when using it to train models. There are domain differences between real and synthetic data; thus, when synthetic data is used for training, it generally cannot be expected to work directly for inference on real data. Closing this gap between domains is called domain adaptation, for which many different techniques exist. Probably the most intuitive technique is to make the simulation appear as realistic as possible. While we try to have fairly high-fidelity synthetic data, especially in the case of the UE5 environments, we also choose to use domain randomization due to the level of control achievable with the Unreal Engine environments. This approach allows for a simple yet effective implementation where we randomly sample positions for the shooter and evacuees within bounded areas and randomly sample colors for everything in the environment. The synthetic data generation process with Unreal Engine 4 can be seen in Fig. 5. The environment initializes with the actual textures of the environment. The first step is to update the position of the actors, which includes the evacuees and the shooter. We achieve this by randomly sampling positions within the bounded area (shown in green). The camera positions and viewing angles are randomized within smaller valid regions (shown in red). From here, textured synthetic (TS) images are exported before randomizing the colors of everything in the environment. The colors are changed by switching to the object mask view and randomly setting the red, green, and blue (RGB) channel values to between 0 and 255 for each object using UnrealCV. The resulting scene is exported as domain-randomized masked synthetic (MS) images. Then, the selected colors for the shooter and guns are used to easily threshold the segmented images to generate tight bounding box annotations. We found this extra step to manually create precise bounding boxes necessary because bounding boxes created within Unreal Engine 4 would often have a hand or foot of the shooter partially outside. Finally, the view is switched back to default with actual textures, and the randomization loop restarts from the beginning. This configuration allows us to easily make adjustments to the overall process. One such adjustment is that instead of randomizing the position of the actors in the UE4 TS data, we freeze and unfreeze the simulation to capture time-continuous data. Additionally, the TS output images don't necessarily need to be saved when generating MS data. However, the MS images are still temporarily required to generate the bounding box annotations for the TS images. The generation process with Unreal Engine 5 can be seen in Fig. 5 and consists of running a short simulation in each desired area. The cameras are positioned to emulate the view of actual security cameras. However, to increase variety in the data, we slightly perturb the position and viewing angle for each image. Textured and masked data are captured separately, and bounding box annotations are generated directly using Unreal Engine 5. Instead of varying the colors of everything in Unreal Engine, we keep constant mask colors throughout the simulation and randomize the colors in a separate Python script using simple thresholding based on the already masked images. This significantly speeds up the data generation process because Unreal Engine would otherwise need to cycle through every object in the environment rather than just the visible masks.

#### Camera sensor effects

We augment the textured synthetic data using camera sensor effect modeling [20]. Rather than applying all the effects uniformly on all the images, they are applied at random levels. The objective of augmenting the textured synthetic data is to help blend the differences between synthetic and real data by making it more similar to real data; this works by breaking up telltale aspects of synthetic imagery, such as extremely well-defined edges. We chose to augment only the textured data rather than the masked data to limit the already large number of combinations, although it is possible that augmenting the masked data would further improve its ability to transfer to real data. Adding noise mimics the normal noise found in real images introduced by limitations in the camera's sensor. Blurring the images makes edges less defined. Chromatic aberration mimics an effect along the edges of objects caused by camera lenses. Adjusting exposure helps account for varying camera quality and lighting changes throughout the day. Lastly, color shift helps account for different camera sensors, where one may be more sensitive to certain colors than another. These camera sensor effects can be seen in Fig. 5.
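A rough sketch of such a randomized augmentation pass is given below (ours; the effect models in [20] are more detailed than these simple stand-ins, and the probability and strength ranges are assumptions):

```python
import numpy as np
import cv2

def sensor_augment(img):
    """img: HxWx3 uint8 image; each effect fires independently with p = 0.5."""
    out = img.astype(np.float32)
    if np.random.rand() < 0.5:  # exposure shift
        out *= np.random.uniform(0.7, 1.3)
    if np.random.rand() < 0.5:  # per-channel color shift
        out += np.random.uniform(-15, 15, size=3)
    if np.random.rand() < 0.5:  # Gaussian sensor noise
        out += np.random.normal(0, np.random.uniform(2, 8), out.shape)
    if np.random.rand() < 0.5:  # blur
        k = int(np.random.choice([3, 5]))
        out = cv2.GaussianBlur(out, (k, k), 0)
    if np.random.rand() < 0.5:  # crude chromatic aberration: offset one channel
        out[..., 0] = np.roll(out[..., 0], int(np.random.choice([-2, -1, 1, 2])), axis=1)
    return np.clip(out, 0, 255).astype(np.uint8)
```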
Figure 5: Complete synthetic data generation process. We utilize the UnrealCV plugin for UE4 to generate textured and masked synthetic data to obtain accurate bounding boxes by thresholding the masked images. Textured and masked data are generated separately in UE5, as we are able to extract accurate bounding boxes directly. Both UE4 and UE5 textured images are augmented with camera sensor effects.

### Training YOLOv8

YOLOv8 [23] is one of the latest versions of the popular you only look once (YOLO) object detection framework. It has made several advancements upon previous versions that significantly improve its performance with small models, making it well-suited for edge devices. The classes we aim to detect are shooter and gun, where the shooter class bounding boxes contain both the person and the visible portion of the gun. If the gun becomes completely obscured, we still label the person as a shooter based on prior knowledge and subtle posture hints, such as hunched shoulders. While we are primarily concerned with tracking the shooter, including gun detection as confirmation of a new shooter may help reduce false positives. We have two groups of image data: real images extracted from videos publicly available online and synthetic images created using the Unreal Engine environments. The synthetic data can be further divided into two subgroups: semi-realistic textured synthetic (TS) images and masked synthetic (MS) images. Both sets of synthetic data are created from scenes where the characters' positions are randomized for each frame, with animations updating their appearance as if they were moving. Our dataset includes 700 real images, split into 500 training, 100 validation, and 100 testing images. Synthetic data with semi-realistic textures consists of 1,567 images from UE5 and 978 images from UE4, and synthetic data with masked textures includes 7,415 images from UE5 and 5,283 images from UE4. All synthetic data is used only for training. We also explore the use of camera sensor effect augmentation for the TS data. The object detection ground truth contains the \(x\) and \(y\) coordinates of the center of the bounding box localizing the shooter or gun, as well as the width and height of the bounding box. The training image size is the default value of \(640\times 640\), so the images are resized and padded before training. Although not exhaustive, we conducted a series of training experiments to find the best combination of these three image data types: real (R), textured synthetic (TS), and masked synthetic (MS). For each combination, different numbers of images are used. We train four sets of 23 different data combinations, as seen in the combination column of Table 1. The four sets include data generated with UE5, data generated with UE4 and UE5, augmented data generated with UE5, and augmented data generated with UE4 and UE5. Rather than starting training from scratch, we use the YOLOv8n weights pre-trained on the COCO dataset [24]. These weights can be used to detect 80 classes of objects found in the COCO dataset. We run training for the default 100 epochs with early stopping and a patience value of 50 epochs. When multiple data types are being used, the first type of data starts with the pre-trained weights and trains for 100 epochs; then the following type of data resumes the training with the weights obtained from the previous training phase and trains for another 100 epochs.
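A minimal sketch of this sequential recipe with the ultralytics API is shown below (ours, not the authors' code; the dataset YAML file names are placeholders, and the checkpoint attribute may differ across library versions):

```python
from ultralytics import YOLO

# e.g. the Textured_Masked_Real_M combination: TS -> MS -> 300 real images
stages = ['textured_synthetic.yaml', 'masked_synthetic.yaml', 'real_300.yaml']
weights = 'yolov8n.pt'  # COCO-pretrained starting point

for data_yaml in stages:
    model = YOLO(weights)
    model.train(data=data_yaml, epochs=100, patience=50, imgsz=640)
    weights = model.trainer.best  # chain the best checkpoint into the next stage
```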
### Tracking with Deep OC-SORT, OSNet ReID, and shooter confirmation with gun detections

The YOLO tracking toolbox [25] was immensely helpful when exploring the performance of different tracking algorithms on a Jetson Nano. We chose to use Deep OC-SORT [26] with OSNet [27], using the x0_25 MSMT17 weights for the re-identification model, for its balance of speed and accuracy, even at a low framerate. As its name suggests, Deep OC-SORT builds upon OC-SORT [28], which addresses some limitations of SORT [29]. One of the limitations of SORT is that it is a purely motion-based tracker, so when an object is lost, the location estimated by the Kalman filter will likely deviate from the actual location as time continues. As a result, even when the object is detected again, it will not be part of the same track. Observation-centric Re-Update (ORU) reduces the accumulated error by backchecking and updating the parameters of the Kalman filter when an object is detected again. This re-update is based on virtual trajectories over the untracked period. Observation-Centric Momentum (OCM) aims to consider the size of the \(\Delta t\) used to estimate the velocity. While a small \(\Delta t\) is necessary for the linear-motion assumption, it also amplifies the effect of measurement noise on the velocity estimate; thus, the choice of \(\Delta t\) involves a trade-off. Lastly, Observation-Centric Recovery (OCR) starts a second association attempt between the last unmatched observations and tracks. This can handle objects stopping or being occluded for a short duration. Deep OC-SORT improves upon OC-SORT in a few ways. Firstly, it implements camera motion compensation to adjust predicted bounding boxes based on the camera's movement. However, we disable camera motion compensation to slightly improve inference speed because the cameras in our application are stationary. Secondly, its implementation of appearance association is dynamic and scales linearly with the confidence of the detection. Lastly, adaptive weighting is used to increase the weight of appearance features depending on the discriminativeness of the embeddings. This is used to boost track-box scores for cases where there is a high degree of similarity. While detecting and tracking shooters as a whole allows for more robust tracking when compared to only guns, it also increases the likelihood of falsely detecting a regular person as a shooter. To address this, we require that a gun detection overlaps with a shooter detection before we begin tracking that person as a shooter. We are able to do this because our shooter class contains both the person and the gun when it is visible. After this confirmation step, we no longer require gun detection to continue tracking the shooter as long as the ReID model associates them with the same ID. If a new potential shooter ID is introduced, a gun detection will be required before that ID is labeled as a shooter. A new ID doesn't necessarily mean a new shooter, since the ReID model can be confused by cases where the appearance of a shooter changes significantly between frames. This system is implemented for the track initialization stage of Deep OC-SORT; as such, a shooter ID can only be introduced in that stage. However, once the ID exists, it can be used in the first and second association stages of Deep OC-SORT to keep track of the shooter through occlusions or other situations where the detection accuracy decreases. This process can be seen in Fig. 6.
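One plausible realization of this confirmation rule is sketched below (ours, not the authors' code); since the shooter box contains the gun, we check the fraction of a gun box falling inside a candidate shooter box, with the threshold value being an assumption:

```python
def frac_inside(gun, shooter):
    """Fraction of the gun box (x1, y1, x2, y2) lying inside the shooter box."""
    ix1, iy1 = max(gun[0], shooter[0]), max(gun[1], shooter[1])
    ix2, iy2 = min(gun[2], shooter[2]), min(gun[3], shooter[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    gun_area = max(1e-9, (gun[2] - gun[0]) * (gun[3] - gun[1]))
    return inter / gun_area

confirmed_ids = set()  # track IDs confirmed as shooters

def update_confirmations(new_tracks, gun_boxes, thresh=0.5):
    """new_tracks: [(track_id, shooter_box), ...] from the initialization stage.
    An ID is promoted to 'shooter' only once a gun detection overlaps its box;
    afterwards, the ID alone is enough to keep tracking through occlusions."""
    for track_id, box in new_tracks:
        if track_id not in confirmed_ids:
            if any(frac_inside(g, box) > thresh for g in gun_boxes):
                confirmed_ids.add(track_id)
```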
## Availability of materials and data

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

2309.13192
Towards Green AI in Fine-tuning Large Language Models via Adaptive Backpropagation
Fine-tuning is the most effective way of adapting pre-trained large language models (LLMs) to downstream applications. With the fast growth of LLM-enabled AI applications and democratization of open-sourced LLMs, fine-tuning has become possible for non-expert individuals, but intensively performed LLM fine-tuning worldwide could result in significantly high energy consumption and carbon footprint, which may bring large environmental impact. Mitigating such environmental impact towards Green AI directly correlates to reducing the FLOPs of fine-tuning, but existing techniques on efficient LLM fine-tuning can only achieve limited reduction of such FLOPs, due to their ignorance of the backpropagation cost in fine-tuning. To address this limitation, in this paper we present GreenTrainer, a new LLM fine-tuning technique that adaptively evaluates different tensors' backpropagation costs and contributions to the fine-tuned model accuracy, to minimize the fine-tuning cost by selecting the most appropriate set of tensors in training. Such selection in GreenTrainer is made based on a given objective of FLOPs reduction, which can flexibly adapt to the carbon footprint in energy supply and the need in Green AI. Experiment results over multiple open-sourced LLM models and abstractive summarization datasets show that, compared to fine-tuning the whole LLM model, GreenTrainer can save up to 64% FLOPs in fine-tuning without any noticeable model accuracy loss. Compared to the existing fine-tuning techniques such as LoRA, GreenTrainer can achieve up to 4% improvement on model accuracy with on-par FLOPs reduction.
Kai Huang, Hanyun Yin, Heng Huang, Wei Gao
2023-09-22T21:55:18Z
http://arxiv.org/abs/2309.13192v2
# Towards Green AI in Fine-tuning Large Language Models via Adaptive Backpropagation

###### Abstract

Fine-tuning is the most effective way of adapting pre-trained large language models (LLMs) to downstream applications. With the fast growth of LLM-enabled AI applications and democratization of open-sourced LLMs, fine-tuning has become possible for non-expert individuals, but intensively performed LLM fine-tuning worldwide could result in significantly high energy consumption and carbon footprint, which may bring large environmental impact. Mitigating such environmental impact towards Green AI directly correlates to reducing the FLOPs of fine-tuning, but existing techniques on efficient LLM fine-tuning can only achieve limited reduction of such FLOPs, due to their ignorance of the backpropagation cost in fine-tuning. To address this limitation, in this paper we present _GreenTrainer_, a new LLM fine-tuning technique that adaptively evaluates different tensors' backpropagation costs and contributions to the fine-tuned model accuracy, to minimize the fine-tuning cost by selecting the most appropriate set of tensors in training. Such selection in GreenTrainer is made based on a given objective of FLOPs reduction, which can flexibly adapt to the carbon footprint in energy supply and the need in Green AI. Experiment results over multiple open-sourced LLM models and abstractive summarization datasets show that, compared to fine-tuning the whole LLM model, GreenTrainer can save up to 64% FLOPs in fine-tuning without any noticeable model accuracy loss. Compared to the existing fine-tuning techniques such as LoRA, GreenTrainer can achieve up to 4% improvement on model accuracy with on-par FLOPs reduction. GreenTrainer has been open-sourced at: [https://github.com/pittisl/GreenTrainer](https://github.com/pittisl/GreenTrainer).

## 1 Introduction

Large language models (LLMs), being pre-trained on large-scale text data, have been used as foundational tools in generative AI for natural language generation. The most effective way of adapting LLMs to downstream applications, such as personal chat bots and podcast summarizers, is to fine-tune a generic LLM using the specific application data [11]. Intuitively, fine-tuning is less computationally expensive than pre-training due to the smaller amount of training data, but it may result in significantly high energy consumption and carbon footprint when being intensively performed worldwide and hence bring large environmental impact. In particular, enabled by the democratization of open-sourced LLMs [8] and convenient APIs of operating these LLMs [32; 41], even non-expert individuals can easily fine-tune LLMs using a few lines of code for performance enhancement or model personalization [37].

Figure 1: GreenTrainer adaptively selects the trainable portion

For example, when a LLaMA-13B model [39] is fine-tuned by 10k users using A100-80GB GPUs, such fine-tuning consumes 6.9\(\times\) more GPU hours than pre-training a GPT-3 model [6] with 175B parameters. The amount of energy being consumed by such fine-tuning, correspondingly, is comparable to that consumed by small towns or even some underdeveloped countries, and the amount of emitted carbon dioxide is equivalent to 500\(\times\) that produced by a New York-San Francisco round-trip flight [1].
Mitigating such environmental impact towards Green AI directly correlates to reducing the number of floating-point operations (FLOPs) of fine-tuning, as FLOPs is a fundamental measure that represents the amount of computational operations and hence energy consumption in training [36]. Most existing techniques, however, are limited to optimizing LLM fine-tuning for lower memory consumption rather than FLOPs reduction [29, 24]. Some other methods reduce the amount of computations by only fine-tuning specific types of model parameters such as bias [42], LayerNorm and output layer weights [28], but they significantly impair the model's expressivity and are only applicable to simple non-generative learning tasks. Instead, researchers suggested keeping the original model parameters frozen but injecting additional trainable parameters either into the input [21, 26] or into internal layers [23, 17]. Recent LoRA-based methods [16, 43] further reduce the overhead of computing weight updates for these injected parameters via low-rank approximation. These methods can achieve accuracy on generative tasks comparable to full fine-tuning. However, they still need to compute the activation gradients through the whole model, and their FLOPs reduction is hence limited to computations of weight updates, which are only 25%-33% of the total training FLOPs. Besides computing weight updates, FLOPs in training are also produced in i) forward propagation and ii) backward propagation of activation gradients. Since complete forward propagation is essential to calculate the training loss, we envision that the key to effective FLOPs reduction is to take the backpropagation cost of activation gradients, which is at least another 33% of the total training FLOPs, into account and selectively involve only the most appropriate model structures in backpropagation. The major challenge, however, is that such selective training will nearly always bring model accuracy loss. Our basic idea to minimize the accuracy loss is to adapt such selection in backpropagation to a flexible objective of FLOPs reduction, which is determined by the carbon footprint in energy supply for LLM fine-tuning. For example, when such carbon footprint is low due to a larger share of renewable energy, a lower objective of FLOPs reduction can be used to retain more model structures in training and hence retain the training accuracy. On the other hand, high carbon footprint in energy production could lead to a higher objective of FLOPs reduction for better embracing Green AI. Based on this idea of adaptive backpropagation, in this paper we present _GreenTrainer_, a new training technique for efficient LLM fine-tuning with the minimum accuracy loss. As shown in Figure 1, given an objective of FLOPs reduction, GreenTrainer adaptively selects the most appropriate set of trainable neural network (NN) tensors at run-time, based on evaluation of different tensors' importance in training. Such importance evaluation is difficult because NN tensors do not directly associate with any input data variables or intermediate features, and most attribution techniques [38, 15] that evaluate feature importance are hence not applicable. Traditional approaches based on weight magnitudes [22], random perturbations [5], and gating functions [14], on the other hand, are either inaccurate or computationally expensive for LLMs.
Instead, our approach follows a rationale similar to that of current attribution techniques, which measure the importance of an input data variable as the accumulation of relevant gradients, and evaluates a tensor's importance as the cumulative gradient change of its weight updates in training. In this way, we ensure that selected tensors will make the maximum contribution to reducing the training loss. Another challenge is how to precisely profile the training FLOPs of different tensor selections. Due to the interdependency between different tensors, their total FLOPs in training is usually not equal to the summation of their individual training FLOPs. Such interdependency is determined by the backpropagation characteristics of the specific NN operators connected to each tensor, but existing FLOPs models cannot link NN operators to tensors based on the computing flow of backpropagation. To tackle this challenge, we build a new FLOPs model that incorporates the relations between tensors and NN operations into profiling of training FLOPs. Based on this model, we develop a dynamic programming (DP) algorithm that can find the nearly optimal tensor selection from an exponential number of possibilities (e.g., \(2^{515}\) for 515 tensors in the OPT-2.7B model [44]), with negligible computing overhead. We evaluated the training performance of GreenTrainer with three open-sourced LLMs, namely OPT [44], BLOOMZ [30] and FLAN-T5 [10], on text generation datasets including SciTLDR [7] and DialogSum [9]. Our experiment results show that GreenTrainer can save up to 64% training FLOPs compared to full LLM fine-tuning, without any noticeable accuracy loss. In some cases, GreenTrainer can even improve the model accuracy compared to that of full fine-tuning, by removing model redundancy and hence mitigating overfitting. Compared to existing fine-tuning techniques such as Prefix Tuning [23] and LoRA [16], GreenTrainer can improve the model accuracy by 4% with the same amount of FLOPs reduction, and also provides users with the flexibility to balance between the training accuracy and cost depending on the specific needs of green AI.

## 2 Background & Motivation

### Transformer Architectures for Text Generation

Current LLMs are stacked by transformer blocks [40], each of which contains a Multi-Head Attention (MHA) layer, LayerNorms [4], and a Feed-Forward Network (FFN) with two dense layers. Given an input sequence \(X\in\mathbb{R}^{n\times d}\) with \(n\) tokens, the MHA separately projects all the tokens into a \((Q,K,V)\) space \(h\) times, using \(h\) suites of trainable projectors \((W_{Q}^{(i)},W_{K}^{(i)},W_{V}^{(i)})_{i=1,...,h}\). Each projection \(f_{i}:\mathbb{R}^{n\times d}\rightarrow\mathbb{R}^{n\times\frac{d}{h}}\) is defined as: \[Q_{i},K_{i},V_{i}=XW_{Q}^{(i)},XW_{K}^{(i)},XW_{V}^{(i)}. \tag{1}\] The output \((Q_{i},K_{i},V_{i})\) then performs attention mechanisms to produce \(O_{i}\) by weighting \(V_{i}\) with the attention scores between \(Q_{i}\) and \(K_{i}\). The MHA's final output is obtained by concatenating each \(O_{i}\), following a linear projection \(g:\mathbb{R}^{n\times d}\rightarrow\mathbb{R}^{n\times d}\) with a trainable projector \(W_{o}\): \[O_{i}=\mathrm{Softmax}\left(Q_{i}K_{i}^{\top}/\sqrt{d/h}\right)V_{i},\qquad\quad\mathrm{MHA}_{\mathrm{out}}=\mathrm{Concat}(O_{1},O_{2},...,O_{h})W_{o}. \tag{2}\] Due to their auto-regressive nature, LLMs can only generate a single output token in each forward pass, which is inefficient in training.
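To make Eqs. (1)-(2) concrete, the following is a minimal single-batch NumPy sketch of MHA (ours, for illustration only; production implementations batch all heads into single matrix multiplies and, as described next, apply causal masks during training):

```python
import numpy as np

def attention_head(X, Wq, Wk, Wv):
    """One head of Eqs. (1)-(2): X is (n, d); Wq/Wk/Wv are (d, d/h)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                  # Eq. (1)
    scores = Q @ K.T / np.sqrt(Wq.shape[1])           # scaled dot-product
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                # row-wise softmax
    return w @ V                                      # O_i

def mha(X, heads, Wo):
    """heads: list of h tuples (Wq, Wk, Wv); Wo is (d, d). Eq. (2)."""
    O = np.concatenate([attention_head(X, *h) for h in heads], axis=-1)
    return O @ Wo
```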
Due to their auto-regressive nature, LLMs can only generate a single output token in each forward pass, which is inefficient in training. Instead, LLMs adopt the teacher-forcing method [19] to generate the entire sequence of output tokens in a single forward pass. Specifically, causal masks are applied to the MHA's attention scores, so that each output token can be predicted from the label tokens at previous positions. With this technique, LLMs can be fine-tuned in a standard way like any feed-forward model.

### The Need for Adaptive Backpropagation

By stacking a sufficient number of large transformer blocks, pre-trained LLMs can capture general language patterns and world knowledge. However, when being fine-tuned for a downstream task, they are usually over-parameterized, because only part of the world knowledge they learned is useful for the target task. In such cases, involving only some of the model's substructures in fine-tuning may have little impact on the model accuracy while significantly reducing the amount of computation. Existing work has made attempts with fixed selections of NN components, such as the last 2 layers, decoder prefixes [23], or the linear projectors \((W_{Q},W_{V})\) [16]. However, due to the interdependencies of NN parameters [18], using such fixed selections for fine-tuning significantly impairs the trained model's accuracy. As shown in Table 1, solely fine-tuning either the last 2 layers or decoder prefixes leads to up to a 10% accuracy drop compared to full fine-tuning. The major reason is that nearby NN substructures that have interdependencies with the fixed selections are excluded from fine-tuning, and hence become inconsistent with the selected substructures.

\begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Trainable**} & \multicolumn{2}{c}{**OPT-2.7B**} & \multicolumn{2}{c}{**FLAN-T5-3B**} \\ \cline{2-5} & **FLOPs** (\(\times 10^{15}\)) & **Acc. (\%)** & **FLOPs** (\(\times 10^{15}\)) & **Acc. (\%)** \\ All params & 262.0 & 23.6 & 135.7 & 46.5 \\ Last 2 layers & 181.6 (31\%\(\downarrow\)) & 20.8 & 46.1 (66\%\(\downarrow\)) & 39.2 \\ Decoder prefix & 174.7 (33\%\(\downarrow\)) & 13.4 & 55.3 (60\%\(\downarrow\)) & 37.6 \\ \((W_{Q},W_{V})\) & 174.7 (33\%\(\downarrow\)) & 23.8 & 90.5 (33\%\(\downarrow\)) & 44.7 \\ \hline \hline \end{tabular} \end{table}

Table 1: Fine-tuning different substructures of OPT-2.7B and FLAN-T5-3B LLMs on the DialogSum dataset (the ROUGE-1 score on the test set is used as the accuracy metric)

Increasing the density of selected substructures, such as including all the linear projectors \((W_{Q},W_{V})\), could mitigate the accuracy loss caused by such inconsistency, but can save at most 33% of the FLOPs, since activation gradients are still backpropagated through all the transformer blocks. Naive methods of dynamic selection, such as progressively expanding the trainable portion from the last layer, share a similar limitation in FLOPs reduction. The deficiency of these existing methods motivates us to enforce a more flexible and adaptive selection of LLM substructures in backpropagation. In GreenTrainer, we develop a tensor importance metric that incorporates parameter dependencies to evaluate, at runtime, how fine-tuning each tensor contributes to the trained model's accuracy. Knowledge of such tensor importance then allows us to achieve the desired FLOPs reduction while maximizing the model accuracy.
### FLOPs Model of Backpropagation

The design of GreenTrainer relies on properly calculating the backpropagation FLOPs of the selected model substructures, which can be decomposed into two parts using the chain rule. For example, as shown in Figure 2, when training a 4-layer dense NN without bias, each layer computes _i)_ \(\mathrm{dy_{i}}\), the gradient of the loss \(L\) w.r.t. the activation \(y_{i}\), and _ii)_ \(\mathrm{dw_{i}}\), the gradient of the loss w.r.t. the weight \(W_{i}\), such that

\[\mathrm{dy_{i}}=\frac{\partial L}{\partial y_{i}}=\frac{\partial L}{\partial y_{i+1}}W_{i}^{\top}=\mathrm{dy_{i+1}}W_{i}^{\top},\hskip 28.452756pt\mathrm{dw_{i}}=\frac{\partial L}{\partial W_{i}}=y_{i}^{\top}\frac{\partial L}{\partial y_{i+1}}=y_{i}^{\top}\mathrm{dy_{i+1}}, \tag{3}\]

and the corresponding amounts of FLOPs for computing \(\mathrm{dy_{i}}\) and \(\mathrm{dw_{i}}\) are denoted as \(t_{dy_{i}}\) and \(t_{dw_{i}}\), respectively. \((\mathrm{dy_{i}},\mathrm{dw_{i}})\) can be computed from the upstream \((\mathrm{dy_{i+1}},\mathrm{dw_{i+1}})\). In particular, even if a layer is not selected for fine-tuning, it still needs to compute and pass the error gradients (\(\mathrm{dy_{i}}\)) to the downstream layers. Hence, the amount of computation in backpropagation does not only depend on the selected layers, but also on some unselected layers. For example, if only Layer 2 is trainable and all other layers are frozen, the total backpropagation FLOPs include _i)_ the FLOPs of computing \(\mathrm{dw_{2}}\) and _ii)_ the FLOPs of computing \(\mathrm{dy_{3}}\) and \(\mathrm{dy_{4}}\).

Figure 2: Backpropagation of a 4-layer dense NN

Due to the generality of the chain rule, this rationale of FLOPs calculation is also applicable to other types of NN layers, and based on it we can construct FLOPs models for LLM substructures, including MHA and FFN. However, a layer-level time model is coarse-grained and can lead to inaccurate selection of the trainable portion in LLM fine-tuning: some important parameters may be left unselected because many others within the same layer are unimportant. In GreenTrainer, we push the selection granularity to the tensor level, which is well supported by tensorized NN libraries (e.g., TensorFlow [2] and PyTorch [33]). Although weight-level selection would be even more fine-grained, it requires fine-grained indexing and incurs unnecessarily high overhead.
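As an illustration of Eq. (3), the sketch below estimates \(t_{dy}\) and \(t_{dw}\) for a stack of dense layers, using the common convention that a matrix product of shapes \((m\times k)\cdot(k\times p)\) costs roughly \(2mkp\) FLOPs; the layer sizes and function name are our own illustrative choices.

```python
# Estimate per-layer backpropagation FLOPs for dense layers, following Eq. (3).
# Convention (an assumption of this sketch): an (m x k) @ (k x p) matmul ~ 2*m*k*p FLOPs.

def backprop_flops(batch, dims):
    """dims = [d0, d1, ..., dL]: layer i maps d_{i-1} -> d_i.
    Returns (t_dy, t_dw) lists, one entry per layer, input side first."""
    t_dy, t_dw = [], []
    for i in range(1, len(dims)):
        d_in, d_out = dims[i - 1], dims[i]
        t_dy.append(2 * batch * d_out * d_in)   # dy_i = dy_{i+1} @ W_i^T
        t_dw.append(2 * d_in * batch * d_out)   # dw_i = y_i^T @ dy_{i+1}
    return t_dy, t_dw

t_dy, t_dw = backprop_flops(batch=32, dims=[512, 512, 512, 512, 512])
# If only Layer 2 is trainable, we pay dw_2 plus dy of all layers above it (Layers 3, 4):
total = t_dw[1] + sum(t_dy[2:])
print(t_dy, t_dw, total)
```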
## 3 GreenTrainer Method

To reduce the FLOPs of LLM fine-tuning, an intuitive problem formulation is to minimize the FLOPs while achieving a desired objective of fine-tuned model accuracy. However, it is generally hard to determine an appropriate accuracy objective in advance, because some accuracy objectives may require very intensive training, and the accuracy achievable within a given FLOPs budget cannot be estimated before training. Instead, GreenTrainer aims to maximize the training loss reduction while achieving the desired FLOPs reduction, as formulated below:

\[\max\Delta_{loss}(\mathbf{m})\hskip 8.535827pt\text{s.t.}\ T_{selective}(\mathbf{m})\leq\rho T_{full}, \tag{4}\]

where \(\mathbf{m}\) is a binary selector to be solved, which selects the appropriate set of tensors for fine-tuning. \(\mathbf{m}\) parametrizes both the loss reduction (\(\Delta_{loss}\)) and the per-batch FLOPs of training (\(T_{selective}\)), and \(T_{selective}\) is constrained to be lower than a user-specified ratio (\(\rho\)) of the per-batch training FLOPs of fine-tuning the whole model (\(T_{full}\)). For example, \(\rho=0.5\) means that the FLOPs of fine-tuning should be reduced to 50% of that of fine-tuning the whole model. In practice, depending on the specific LLM fine-tuning scenario, the value of \(\rho\) can either be preset prior to training, or dynamically adjusted at runtime in any stage of training.

To clearly identify each tensor's contribution in fine-tuning, we model \(\Delta_{loss}(\mathbf{m})\) as the aggregated importance of the selected tensors in training, and calculate the FLOPs incurred by the selected tensors using the FLOPs model of backpropagation described in Section 2.3. With this FLOPs model, Eq. (4) can be rewritten as:

\[\max\ \Delta_{loss}(\mathbf{m})\quad\text{ s.t. }T_{fp}+\mathbf{m}\cdot\mathbf{t}_{dw}+\sigma(\mathbf{m})\cdot\mathbf{t}_{dy}\leq\rho T_{full}, \tag{5}\]

where \(T_{fp}\) denotes the per-batch FLOPs of the forward pass, and each pair of entries in \((\mathbf{t}_{dy},\mathbf{t}_{dw})\) represents the FLOPs of computing \((\mathrm{dy},\mathrm{dw})\) for the corresponding tensor. Given a binary selector \(\mathbf{m}\), \(\sigma(\mathbf{m})\) marks all the tensors along the backward pass that contribute to the FLOPs of fine-tuning by being involved in passing the error gradients \((\mathrm{dy})\). For example, if \(\mathbf{m}=[0,0,1,0,1,0,0]\), all the tensors lying in deeper layers than the first selected tensor are involved in passing the error gradients, and hence \(\sigma(\mathbf{m})=[0,0,1,1,1,1,1]\).
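A minimal sketch of the constraint in Eq. (5), assuming the selector entries are ordered from the input side to the output side of the network (so that error gradients must flow from the output down to the first selected tensor); the function names are our own:

```python
# Evaluate the FLOPs constraint of Eq. (5) for a candidate selector m.
# Assumption: index 0 is closest to the input, the last index closest to the output.

def sigma(m):
    """Mark every tensor from the first selected one up to the output end."""
    if 1 not in m:
        return [0] * len(m)
    first = m.index(1)
    return [0] * first + [1] * (len(m) - first)

def selective_flops(m, t_dw, t_dy, T_fp):
    s = sigma(m)
    return T_fp + sum(mi * wi for mi, wi in zip(m, t_dw)) \
                + sum(si * yi for si, yi in zip(s, t_dy))

m = [0, 0, 1, 0, 1, 0, 0]
print(sigma(m))                                   # [0, 0, 1, 1, 1, 1, 1], as in the text
t_dw = [4] * 7; t_dy = [3] * 7; T_fp = 50
print(selective_flops(m, t_dw, t_dy, T_fp) <= 0.5 * 200)  # feasibility vs rho*T_full
```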
To ground the above formulation and solve for \(\mathbf{m}\), GreenTrainer consists of three key components: _(i) Tensor FLOPs Profiling_, which calculates the FLOPs of all NN tensors (i.e., \(\mathbf{t}_{dy}\) and \(\mathbf{t}_{dw}\)) prior to training; _(ii) Tensor Importance Evaluation_, which quantifies the contribution of updating each NN tensor to the training quality at runtime; and _(iii) Tensor Selector_, which grounds the tensor selection problem using the tensors' FLOPs and importances, and provides solutions via dynamic programming at runtime.

### Tensor FLOPs Profiling

Standard NN profilers, such as the Torch Profiler [33], can measure the execution FLOPs of individual NN operators, such as matrix multiplication and convolution. However, such profiling of NN operators cannot be directly linked to the NN tensors that participate in these operations. In other words, when a set of selected tensors is trained, the training FLOPs of backpropagation are not equal to the sum of the individual tensors' backpropagation FLOPs. To address this limitation, our approach consists of two steps. First, we convert the layer-based NN structure of the LLM into a tensor-level computing graph, which retains the execution order of all tensors' involvements in training. Then, we extract the related backpropagation operators of each tensor, and derive each tensor \(i\)'s FLOPs in backpropagation (\(t_{dy_{i}}\) and \(t_{dw_{i}}\)) by matching and aggregating the FLOPs of these NN operators. For example, in Figure 3, the training of each linear projector (\(Q\), \(K\) and \(V\)) in an MHA layer should be executed after its corresponding bias tensor's training. Training each linear projector then involves two matrix multiplication operators, whose FLOPs in backpropagation are aggregated. We categorize such rules of matching and aggregation by the type of LLM layer where the tensors are located, as described below.

Figure 3: A sample workflow of tensor FLOPs profiling

**Input & output embedding layers.** The input embedding layer contains a trainable embedding tensor that maps each raw token into a dense representation through efficient lookup operations. Given the activation gradient \(\mathrm{dy}_{i+1}\) from the upstream layers, deriving the update \(\mathrm{dw}_{i}\) of this embedding tensor only involves variable assignment without any heavy computation. Hence, we can safely consider \(t_{dw_{i}}\approx 0\) for such a tensor \(i\). Specifically, if a raw token is mapped to the \(k\)-th vector in the embedding tensor during the forward pass, then during backpropagation, \(\mathrm{dy}_{i+1}\) from the upstream is only assigned to the \(k\)-th row of \(\mathrm{dw}_{i}\), such that

\[\mathrm{dw}_{i}[s]=\mathrm{dy}_{i+1}\text{ if }s=k,\text{ else }\mathbf{0}. \tag{6}\]

Since the input layer does not propagate activation gradients, we can also conclude that its \(t_{dy}\) is 0. Conversely, the output embedding layer projects each token back to the probability space by multiplying its trainable tensor with the token vector. Intuitively, its \((t_{dy},t_{dw})\) can be derived in the same way as for the dense NN layer in Eq. (3). However, in most LLMs, the output embedding layer shares the same trainable tensor with the input embedding layer. This implies that if the output embedding is trainable, the input embedding will also be involved in training. To reflect this correlation, all the \(t_{dy}\) from the LLM's output, up to the input embedding layer, should be accumulated into the \(t_{dy}\) of the output embedding tensor, while its \(t_{dw}\) remains unchanged.

**Multi-Head Attention (MHA) layer.** As described in Section 2.2, an MHA layer contains multiple linear projectors as trainable tensors, and their FLOPs in training can be derived in the same way as for the dense NN layer. In addition, some LLMs such as OPT also include a bias as another type of trainable tensor after the projection. In this case, based on the chain rule, the backpropagation of the bias is computed as:

\[\mathrm{dy_{i}}=\mathrm{dy_{i+1}}, \qquad\mathrm{dw_{i}}=\mathbf{1}^{\top}\mathrm{dy_{i+1}}, \tag{7}\]

which indicates that \(t_{dy}\) for the bias is 0, since \(\mathrm{dy_{i}}\) is passed on unchanged from \(\mathrm{dy_{i+1}}\). The \(t_{dw}\) of the bias is the FLOPs of adding up the elements of \(\mathrm{dy_{i+1}}\) along every embedding feature channel. The attention mechanism in Eq. (2) is backpropagated prior to the projectors. If any of these projectors is involved in training, the attention's backpropagation FLOPs must also be counted; to do so, we accumulate these FLOPs into the \(t_{dy}\) of the corresponding projector tensor (\(W_{V}\)).
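The bias rule of Eq. (7) is easy to verify numerically; a minimal sketch (variable names are ours):

```python
import numpy as np

# Forward: y_{i+1} = y_i @ W + b, so the bias enters additively per token.
rng = np.random.default_rng(1)
n, d = 8, 16                       # tokens, feature channels
dy_next = rng.normal(size=(n, d))  # upstream activation gradient

dy_i = dy_next                     # Eq. (7): passed on unchanged -> t_dy(bias) = 0
dw_bias = np.ones(n) @ dy_next     # Eq. (7): 1^T dy_{i+1}, one sum per channel
assert np.allclose(dw_bias, dy_next.sum(axis=0))
# t_dw(bias) ~ n additions per channel, i.e. about n*d FLOPs in total.
```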
**LayerNorm.** Given a token, LayerNorm first normalizes its features and then uses two trainable tensors, \(\gamma\) and \(\beta\), which element-wise multiply with and add to the token, respectively. These multiplication and addition operations are similar to those in the dense NN layer, so their FLOPs can be calculated in a similar way. However, the backpropagation FLOPs of the normalization operators should be accumulated into the previous tensor's \(t_{dy}\). That is, if any tensors in the previous layers are trained, the FLOPs of propagating through the normalization operators must also be included when calculating the FLOPs of the current layer.

**Feed-Forward Network (FFN).** In the FFN, there is a nonlinear activation function between the two dense layers. Following the same method used for LayerNorm's FLOPs, we accumulate the FLOPs of propagating through this activation function into the \(t_{dy}\) of the bias tensor in the first dense layer.

### Tensor Importance Evaluation

Generally speaking, a tensor's importance in training can be estimated as the sum of the importances of all its weights. Since the model weights are iteratively updated to minimize the training loss, an intuitive approach to evaluating the importance of a weight update in a given iteration is to undo this update and check how much the training loss increases back:

\[\Delta L=L(w)-L(w+\Delta w), \tag{8}\]

so that a higher value of \(\Delta L\) means the update is more important and the corresponding weight should be selected for fine-tuning. However, repeatedly applying this approach to every NN weight is prohibitively expensive due to the large number of weights in LLMs. Instead, our approach is to estimate the importance of all the NN weights in one shot, using information already available in the backpropagation procedure. More specifically, we compute the importance of each weight by smoothing the undo operation described above and computing the loss gradients with respect to the updates of all the weights. Letting the multiplicative \(\mathbf{c}\in[0,1]^{M}\) denote the continuous undo operation for all the \(M\) weights in the model, we can compute the loss gradient with respect to \(\mathbf{c}\) as

\[-\frac{\partial L(\mathbf{w}+\mathbf{c}\odot\Delta\mathbf{w})}{\partial\mathbf{c}}=-\left.\Delta\mathbf{w}\odot\frac{\partial L(\mathbf{u})}{\partial\mathbf{u}}\right|_{\mathbf{u}=\mathbf{w}+\mathbf{c}\odot\Delta\mathbf{w}}, \tag{9}\]

where \(\odot\) denotes element-wise multiplication. When \(\mathbf{c}=\mathbf{0}\), Eq. (9) becomes an importance vector in which each element corresponds to a model weight. Since the loss gradient is parametrized by all the model weights, the weight importances calculated in this way implicitly incorporate the impact of weight dependencies. A tensor \(k\)'s importance is then calculated as

\[I_{k}=-\sum\nolimits_{i}\Delta w_{i}^{(k)}\frac{\partial L}{\partial w_{i}^{(k)}}. \tag{10}\]

In some cases, when the training process encounters divergence, both the gradients and the calculated tensor importances in Eq. (10) can become very large, eventually leading to overflow when these importance values are used for tensor selection in Eq. (5). To address this issue, we further scale all the tensor importances by their maximum amplitude to improve numerical stability.
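The per-tensor importance of Eq. (10) can be accumulated from quantities PyTorch already exposes; a minimal sketch, assuming `model` is a module whose previous-step weight updates `delta_w` have been cached (the names and caching strategy are ours):

```python
import torch

def tensor_importance(model, delta_w):
    """Eq. (10): I_k = -sum_i dw_i^(k) * dL/dw_i^(k), per named tensor.
    delta_w maps tensor names to their most recent weight updates."""
    scores = {}
    for name, p in model.named_parameters():
        if p.grad is not None and name in delta_w:
            scores[name] = -(delta_w[name] * p.grad).sum().item()
    # Rescale by the maximum amplitude for numerical stability, as in the text
    peak = max((abs(v) for v in scores.values()), default=0.0) or 1.0
    return {k: v / peak for k, v in scores.items()}

# Usage after loss.backward(): cache delta_w[name] = p.detach() - p_prev[name]
# in the training loop, then call tensor_importance(model, delta_w).
```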
**Reducing memory usage.** Our approach to importance evaluation requires caching all the previous model weights and the current gradients in order to compute Eq. (10). However, doing so significantly increases the GPU memory consumption, especially for modern LLMs with billions of weights. To reduce this GPU memory usage, we observe that our problem formulation in Eq. (5) prevents tensors in early layers from being selected for training, due to the high cost of propagating their activation gradients in backpropagation. Hence, we can safely exclude these tensors from the trainable portion of LLM fine-tuning and save a significant amount of GPU memory. More specifically, the backpropagation during tensor importance evaluation can be stopped early at a certain tensor \(k\), such that

\[\sum_{i=1,...,k-1}t_{dy_{i}}<\rho T_{full}\leq\sum_{i=1,...,k}t_{dy_{i}}, \tag{11}\]

i.e., the cumulative backpropagation FLOPs of the tensors from 1 to \(k\) just exceeds our objective of FLOPs reduction. As shown in Table 2, with this early stopping we save GPU memory in proportion to the value of \(\rho\): a smaller \(\rho\) leads to a smaller \(k\), and the backpropagation can hence be stopped earlier. For example, when \(\rho=\)50%, 25% of the GPU memory is saved, and the saving further increases to 50% when \(\rho=\)34%.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Model**} & **Full** & **Early-stop** & **Early-stop** & **Early-stop** & **Early-stop** \\ & **evaluation** & \(\rho=34\%\) & \(\rho=40\%\) & \(\rho=50\%\) & \(\rho=60\%\) \\ \hline OPT-2.7B & 10.8 & 5.5 & 6.5 & 8.1 & 9.7 \\ FLAN-T5-3B & 12.0 & 6.1 & 7.2 & 9.0 & 10.8 \\ \hline \hline \end{tabular} \end{table}

Table 2: GPU memory consumption (in GigaBytes) of tensor importance evaluation

### Tensor Selection

Since Eq. (5) is a nonlinear integer programming problem and hence NP-hard, GreenTrainer instead seeks an approximate solution in pseudo-polynomial time using dynamic programming (DP). Specifically, we decompose the whole problem into subproblems constrained by different depths of backpropagation. These subproblems can be solved sequentially, starting from the easiest one with the smallest depth of one, by using their recurrence relations.

**Subproblem definition.** As shown in Figure 4(a), we define each subproblem \(P[k,t]\) as maximizing the cumulative importance of the selected tensors when 1) the selection is among the top \(k\) tensors1 and 2) the backpropagation FLOPs are at most \(t\). DP starts by solving the smallest subproblem \(P[k=1,t=1]\) and gradually solves larger subproblems based on the results of smaller ones and their recurrence relations, until the target problem \(P[N,T_{full}]\) is solved.

Footnote 1: We consider the tensor that is closest to the NN output as the topmost.

Figure 4: Solving the selection problem by DP

**Recurrence relations of subproblems.** The recurrence relation between subproblems \(P[k,t]\) and \(P[k-1,t]\) depends on whether we further select the top tensor \(k\) on top of the solution of \(P[k-1,t]\), as shown in Figure 4(b). **Case 1:** If the top tensor \(k\) is not selected, \(P[k,t]\) falls back to \(P[k-1,t]\), since the importance of the selected tensors is not increased further. **Case 2:** If the top tensor \(k\) is selected, then its FLOPs are included in the solution of \(P[k,t]\), no matter which other tensors are selected. The FLOPs involved with tensor \(k\) include 1) the FLOPs to update tensor \(k\) and 2) the FLOPs to pass activation gradients from the closest selected tensor \(k_{c}\) (such as tensor \(k-3\) in Figure 4(b)) to tensor \(k\). This implies that \(P[k,t]\) falls back to a previously solved subproblem \(P[k-k_{c},t-\Delta t]\), where

\[\Delta t=t_{dw_{k}}+\sum\nolimits_{j=k_{c}}^{k-1}t_{dy_{j}}. \tag{12}\]

Since \(k_{c}\) is unknown in advance, we backtrace the previously solved subproblems and explore all possibilities of \(k_{c}\) by reducing the depth of backpropagation from \(k\); the optimal solution to \(P[k,t]\) is the one with the highest cumulative importance of the selected tensors. Based on this recurrence relation, we can solve all the subproblems by sequentially traversing the subproblem space. The time complexity of solving each subproblem is \(O(N)\) due to the backtracing in Case 2, and the overall time complexity of the DP algorithm is \(O(N^{2}T_{full})\).
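A simplified sketch of this selection procedure (our own simplification, not the paper's exact algorithm): since the dy-cost of a selection depends only on its deepest selected tensor, one can sweep that deepest tensor and solve a 0/1 knapsack over the dw-costs at each step. Costs are assumed pre-quantized to small integers, cf. Eq. (13).

```python
def select_tensors(importance, t_dw, t_dy, budget):
    """Maximize total importance under a backprop-FLOPs budget.
    Assumptions of this sketch: index 0 is the tensor closest to the NN output,
    and selecting a tensor at depth k forces dy to be computed for tensors 0..k-1."""
    N = len(importance)
    NEG = float("-inf")
    val = [NEG] * (budget + 1); val[0] = 0.0        # best importance per exact dw-cost
    mask = [[0] * N for _ in range(budget + 1)]
    best_val, best_mask = 0.0, [0] * N
    dy_cost = 0                                      # sum of t_dy[0..k-1]
    for k in range(N):
        for c in range(budget, t_dw[k] - 1, -1):     # 0/1 knapsack over dw-costs
            prev = val[c - t_dw[k]]
            if prev != NEG and prev + importance[k] > val[c]:
                val[c] = prev + importance[k]
                mask[c] = mask[c - t_dw[k]].copy()
                mask[c][k] = 1
        rem = budget - dy_cost                       # budget left after paying dy
        if rem >= 0:
            c_best = max(range(rem + 1), key=lambda c: val[c])
            if val[c_best] > best_val:
                best_val, best_mask = val[c_best], mask[c_best].copy()
        dy_cost += t_dy[k]
    return best_val, best_mask

# Toy run: expect tensors 0 and 2 (importance 9; dw 3+3 plus dy 1+1 = 8 <= budget)
print(select_tensors([5, 1, 4, 2], t_dw=[3, 2, 3, 2], t_dy=[1, 1, 1, 1], budget=8))
```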
**Reducing the computational cost.** Due to the high volume of FLOPs in LLM fine-tuning, the value of \(T_{full}\) can be very large. To reduce the computational cost of DP, we first shrink the subproblem space by skipping two types of subproblems: 1) **invalid** ones, whose FLOPs constraint \(t\) exceeds the desired constraint (\(\rho T_{full}\)); and 2) **redundant** ones, whose FLOPs for passing activation gradients to the maximally allowed depth (\(k\)) exceed \(t\). Our preliminary experiments show that doing so on an OPT model with \(\rho=50\%\) reduces the number of subproblems by 5.5\(\times\) without affecting the optimality of training. Besides, to further reduce the number of subproblems, we scale the tensors' FLOPs \((t_{dw},t_{dy})\) by a factor \(Z\):

\[\widetilde{t_{dw}}=\lfloor t_{dw}\cdot Z\rfloor\,,\quad\widetilde{t_{dy}}=\lfloor t_{dy}\cdot Z\rfloor\,, \tag{13}\]

where \(Z=\frac{T_{q}}{T_{full}}\) reduces the backpropagation FLOPs to a resolution \(T_{q}<T_{full}\). The overall time complexity of DP is then reduced to \(O(N^{2}T_{q})\). On the other hand, such a reduced resolution can increase the ambiguity in DP and affect the training quality. To investigate this tradeoff between training quality and cost, we conducted preliminary experiments on multiple LLMs. The results in Table 3 show that, for both the OPT-2.7B and BLOOMZ-3B models, setting \(T_{q}=1e3\) reduces the DP overhead to \(<\) 1% without affecting the training quality. Similarly, for FLAN-T5-3B, choosing \(T_{q}=1e2\) retains good training quality with negligible overhead. When \(T_{q}\) is too small, however, the solution of DP can be inaccurate and hence result in ineffective reduction of the training FLOPs.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline **Model** & \(T_{q}=1e1\) & \(T_{q}=1e2\) & \(T_{q}=1e3\) & \(T_{q}=1e4\) & \(T_{q}=1e5\) \\ \hline OPT-2.7B & 0.02/64.1/32.0 & 0.04/47.6/30.1 & 0.64/49.8/30.7 & 7.5/50.0/30.9 & 76.5/50.0/30.9 \\ BLOOMZ-3B & 0.00/13.3/9.30 & 0.00/47.5/25.2 & 0.21/49.5/27.2 & 2.3/49.8/27.1 & 25.3/50.0/27.1 \\ FLAN-T5-3B & 0.04/64.9/36.5 & 0.25/57.1/36.5 & 3.5/55.3/36.7 & 41.8/51.8/36.7 & 449/50.0/36.7 \\ \hline \hline \end{tabular} \end{table}

Table 3: The impact of the DP resolution \(T_{q}\) on fine-tuning the OPT-2.7B, BLOOMZ-3B, and FLAN-T5-3B LLMs on the SciTLDR dataset with \(\rho=50\%\). Each triplet [a/b/c] lists a) the percentage of wall-clock time incurred by DP relative to full fine-tuning, b) the percentage of FLOPs after reduction relative to full fine-tuning, and c) the test ROUGE-1 score.

## 4 Experiments

We implemented GreenTrainer in PyTorch and conducted our experiments on a Lambda Cloud instance with an Nvidia H100 80GB GPU and 24 vCPUs. In our evaluation, we include recently open-sourced decoder-only LLMs, namely OPT [44] and BLOOMZ [30], and an encoder-decoder LLM, namely FLAN-T5 [10]. The number of parameters in these LLMs ranges from 350M to 6.7B, depending on the specific model variant. Our experiments are conducted using the following two datasets of abstractive summarization:

* **SciTLDR** [7] is a dataset of 5.4K text summaries over 3.2K papers.
It contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing the annotation burden.

* **DialogSum** [9] is a dialogue summarization dataset consisting of 13,460 dialogues with corresponding manually labeled summaries and topics. It has been shown to be more challenging than other summarization datasets of a similar scale, such as SAMSum [13] and CNN/Daily [31].

Note that our evaluations do not consider non-generative tasks such as sentiment classification, entailment classification, and extractive QA. The basic reason is that these tasks are too easy for today's LLMs, and testing them with LLMs would result in exaggerated performance gains over the baselines. For OPT and BLOOMZ, we follow GPT2-like prompt structures [34], "[source seq.] TL;DR:", for the summarization tasks to preprocess all the input data. For FLAN-T5, we adopt the prompt structure "summarize: [source seq.]", which is used in the original T5 pre-training. We truncate the source sequences so that every preprocessed input sequence is within 512 tokens. On the test data, we use a beam search size of 4, and set the maximum number of generated tokens to 64 for SciTLDR and 128 for DialogSum.

We compare the performance of GreenTrainer (GT) with the following four baselines:

* **Full Fine-Tuning (Full FT)** fine-tunes all the LLM parameters and should intuitively achieve the best accuracy of the trained model.
* **Fine-Tuning Top2 (FT-Top2)** only fine-tunes the last two layers of the LLM, which typically include the embedding layer and a LayerNorm. The input and output embedding layers are tied for OPT and BLOOMZ, but not for FLAN-T5. This naive baseline fine-tunes the smallest portion of LLM parameters and is used to identify whether a dataset is trivial for the LLM.
* **Prefix Tuning (Prefix-T)** [23] inserts trainable prefixes into each transformer block's input sequence while freezing the model parameters. For encoder-decoder LLMs, the trainable prefixes are only inserted into the decoder blocks.
* **LoRA** [16] is currently the most popular method for efficient LLM fine-tuning. It uses low-rank matrix decomposition to reduce the training cost. We apply LoRA to both the query and value projectors, as suggested in [16].

In all experiments, we use a batch size of 4 and fine-tune the model for 5 epochs. We use the AdamW optimizer [27] at a learning rate of \(2\times 10^{-5}\) with a linear schedule and a weight decay of \(10^{-2}\). We use the ROUGE scores (%R1/R2/RL) [25] on the test datasets as the accuracy metric, and measure both the Peta-FLOPs (PFLOPs) and the wall-clock time as the training cost in each run.
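As an illustration of this setup, here is a minimal Hugging Face-style sketch of the prompt preprocessing and decoding settings described above (the checkpoint name and helper function are our own choices, not the paper's):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative checkpoint; the paper evaluates OPT, BLOOMZ, and FLAN-T5 variants.
tok = AutoTokenizer.from_pretrained("facebook/opt-350m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

def summarize(source: str, max_new: int = 64) -> str:
    # GPT2-like prompt structure used for OPT/BLOOMZ: "[source seq.] TL;DR:"
    prompt = f"{source} TL;DR:"
    ids = tok(prompt, truncation=True, max_length=512, return_tensors="pt")
    out = model.generate(**ids, num_beams=4, max_new_tokens=max_new)
    return tok.decode(out[0][ids["input_ids"].shape[1]:], skip_special_tokens=True)

print(summarize("We study efficient fine-tuning of large language models ..."))
```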
### Training Cost & Accuracy

We first compare the training cost and accuracy of GreenTrainer (GT) with the other baseline schemes on LLMs with 3B parameters, using both datasets. As shown in Table 4, for the OPT-2.7B model, GT-0.5 achieves the required objective of FLOPs reduction (50%) with at most 2% accuracy loss on both datasets, and GT-0.7 even achieves 0.2%-3% higher ROUGE scores than Full FT. We hypothesize that GT achieves this accuracy improvement by fine-tuning only the most important tensors and hence mitigating the overfitting that may exist in Full FT. On the other hand, insufficient trainable parameters can lead to underfitting: FT-Top2 has significantly lower ROUGE scores than all other schemes, indicating that the fine-tuning task is non-trivial for the OPT-2.7B model. Compared to LoRA and Prefix Tuning, GT-0.7 achieves at least 2% higher accuracy with the same amount of training FLOPs.

Similarly, for BLOOMZ-3B, GT-0.5 saves 50% of the training FLOPs and wall-clock time with \(<2\)% accuracy loss. Compared to Full FT, GT-0.7 achieves the same ROUGE scores on the SciTLDR dataset, and 4% to 10% higher scores on the DialogSum dataset. With the same training FLOPs, GT-0.7 has 0.4%-1.4% higher ROUGE scores than the best baseline (LoRA). Note that both datasets are non-trivial for the BLOOMZ model, since the naive baseline (FT-Top2) still exhibits significant accuracy loss, and Prefix-T performs much worse than any other baseline on the SciTLDR dataset. The major reason may be that the inserted trainable prefixes break the original prompt structure and confuse the model on the scientific corpus.

For the FLAN-T5-3B model, we observe that FT-Top2 achieves fine-tuning quality similar to Full FT with a significant FLOPs reduction, indicating that the SciTLDR dataset is trivial for FLAN-T5. This is because FLAN-T5 has been instruction-fine-tuned after pre-training and can potentially have better zero-shot adaptability. In this case, GT-0.34 achieves the same training FLOPs and ROUGE scores by selecting only a small portion of tensors. On the other hand, FT-Top2 loses accuracy significantly on the DialogSum dataset, whereas GT-0.4 reduces 54% of the training FLOPs and 43% of the wall-clock time without noticeable accuracy loss. GT-0.4 also outperforms LoRA by 1% on ROUGE scores while reducing 11% more training FLOPs. Compared to Prefix Tuning, GT-0.34 achieves 2%-5% higher ROUGE scores while reducing the same amount of training FLOPs.

### The Impact of FLOPs Reduction Objective

To better understand how GreenTrainer performs with different objectives of FLOPs reduction, we vary the value of \(\rho\) between 0.36 and 0.8, and compare GreenTrainer with LoRA, which provides the best training performance among all the baseline schemes, on the OPT-2.7B model. As shown in Table 5, on the SciTLDR dataset, when the requirement of FLOPs reduction is high and corresponds to \(\rho\leq\)0.4, GreenTrainer outperforms LoRA by achieving 2% higher ROUGE scores while saving 25% more FLOPs and wall-clock time. On the other hand, when \(\rho\) increases to 0.6, GreenTrainer outperforms Full FT on ROUGE scores by 0.5% and LoRA by 5.2%, while saving 40% of the training FLOPs and 39% of the wall-clock time compared to Full FT. Similar results are observed on the DialogSum dataset. In summary, with different objectives of FLOPs reduction, GreenTrainer always provides better tradeoffs between training accuracy and cost than the SOTA baselines. These results also demonstrate that GreenTrainer provides great flexibility in balancing LLM fine-tuning between training accuracy and cost by adjusting the value of \(\rho\). The user can opt for a low value of \(\rho\) (\(\leq\)0.4) to maximize the FLOPs reduction (\(>\)60%) at the price of a moderate accuracy loss (3%-4% on the two datasets we use).
Alternatively, the user can opt for a high value of \(\rho\) (\(\geq\)0.6) to obtain the same level of FLOPs reduction as LoRA, while ensuring the minimum accuracy loss or even a minor accuracy improvement.

\begin{table} \begin{tabular}{l r r r r r r} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**SciTLDR**} & \multicolumn{3}{c}{**DialogSum**} \\ \cline{2-7} & **PFLOPs** & **Time (h)** & **R1/R2/RL** & **PFLOPs** & **Time (h)** & **R1/R2/RL** \\ \hline \hline **OPT-2.7B** & & & & & & \\ \hline Full FT & 41.8 & 0.92 & 32.9/14.9/27.1 & 262.0 & 5.5 & 23.6/9.5/18.8 \\ FT-Top2 & 29.0 (31\%\(\downarrow\)) & 0.61 (34\%\(\downarrow\)) & 9.1/4.0/7.6 & 181.6 (31\%\(\downarrow\)) & 3.8 (31\%\(\downarrow\)) & 20.8/7.9/17.5 \\ Prefix-T & 27.9 (33\%\(\downarrow\)) & 0.58 (37\%\(\downarrow\)) & 7.6/0.4/6.1 & 174.7 (33\%\(\downarrow\)) & 3.7 (33\%\(\downarrow\)) & 13.4/3.3/10.9 \\ LoRA & 27.9 (33\%\(\downarrow\)) & 0.59 (36\%\(\downarrow\)) & 28.2/12.1/21.0 & 174.7 (33\%\(\downarrow\)) & 3.6 (35\%\(\downarrow\)) & 23.8/9.5/18.8 \\ GT-0.5 & 20.8 (50\%\(\downarrow\)) & 0.46 (50\%\(\downarrow\)) & 30.5/13.1/25.2 & 130.1 (50\%\(\downarrow\)) & 2.7 (51\%\(\downarrow\)) & 21.4/8.2/17.6 \\ GT-0.7 & 29.2 (30\%\(\downarrow\)) & 0.68 (26\%\(\downarrow\)) & 33.1/15.2/27.6 & 182.7 (30\%\(\downarrow\)) & 4.0 (27\%\(\downarrow\)) & 26.8/11.0/21.6 \\ \hline \hline **BLOOMZ-3B** & & & & & & \\ \hline Full FT & 47.2 & 1.0 & 28.3/12.1/22.5 & 294.8 & 6.5 & 26.1/10.6/21.0 \\ FT-Top2 & 36.5 (23\%\(\downarrow\)) & 0.75 (25\%\(\downarrow\)) & 23.7/8.8/18.8 & 227.9 (23\%\(\downarrow\)) & 4.6 (29\%\(\downarrow\)) & 22.1/8.5/17.8 \\ Prefix-T & 31.5 (33\%\(\downarrow\)) & 0.68 (34\%\(\downarrow\)) & 6.5/2.2/5.5 & 196.5 (33\%\(\downarrow\)) & 4.2 (35\%\(\downarrow\)) & 29.6/9.4/24.9 \\ LoRA & 31.5 (33\%\(\downarrow\)) & 0.69 (33\%\(\downarrow\)) & 27.4/11.7/21.8 & 196.5 (33\%\(\downarrow\)) & 4.3 (34\%\(\downarrow\)) & 35.4/14.3/28.6 \\ GT-0.5 & 23.4 (51\%\(\downarrow\)) & 0.51 (50\%\(\downarrow\)) & 26.7/10.7/21.2 & 146.4 (50\%\(\downarrow\)) & 3.1 (52\%\(\downarrow\)) & 24.9/9.5/20.0 \\ GT-0.7 & 32.3 (32\%\(\downarrow\)) & 0.74 (28\%\(\downarrow\)) & 28.0/12.2/22.4 & 204.7 (31\%\(\downarrow\)) & 4.3 (34\%\(\downarrow\)) & 36.8/14.7/29.4 \\ \hline \hline **FLAN-T5-3B** & & & & & & \\ \hline Full FT & 21.7 & 0.64 & 37.1/18.5/31.7 & 135.7 & 4.0 & 46.5/20.8/38.5 \\ FT-Top2 & 7.3 (66\%\(\downarrow\)) & 0.21 (67\%\(\downarrow\)) & 36.5/18.4/31.5 & 46.1 (66\%\(\downarrow\)) & 1.4 (65\%\(\downarrow\)) & 39.2/16.7/32.9 \\ Prefix-T & 8.0 (63\%\(\downarrow\)) & 0.23 (64\%\(\downarrow\)) & 36.0/18.2/31.0 & 55.3 (60\%\(\downarrow\)) & 1.7 (57\%\(\downarrow\)) & 37.6/16.4/32.1 \\ LoRA & 14.4 (33\%\(\downarrow\)) & 0.41 (36\%\(\downarrow\)) & 36.6/18.5/31.5 & 90.5 (33\%\(\downarrow\)) & 2.5 (38\%\(\downarrow\)) & 44.7/19.8/37.1 \\ GT-0.34 & 7.5 (65\%\(\downarrow\)) & 0.23 (64\%\(\downarrow\)) & 36.4/18.4/31.7 & 53.5 (61\%\(\downarrow\)) & 1.4 (65\%\(\downarrow\)) & 42.7/18.3/35.1 \\ GT-0.4 & 10.0 (54\%\(\downarrow\)) & 0.38 (41\%\(\downarrow\)) & 36.7/18.5/31.5 & 62.5 (54\%\(\downarrow\)) & 2.3 (43\%\(\downarrow\)) & 46.0/20.7/38.1 \\ GT-0.5 & 12.4 (43\%\(\downarrow\)) & 0.44 (31\%\(\downarrow\)) & 36.3/17.7/30.9 & 77.6 (43\%\(\downarrow\)) & 2.6 (35\%\(\downarrow\)) & 46.2/20.7/38.1 \\ \hline \hline \end{tabular} \end{table}

Table 4: Comparison of the training cost & accuracy in LLM fine-tuning. GreenTrainer with an objective \(\rho\) of FLOPs reduction is denoted as GT-\(\rho\).
We believe that such flexibility is practically important when fine-tuning LLMs for downstream tasks with different green AI requirements and constraints.

### Efficacy of Tensor Importance Metrics

The fine-tuning quality of GreenTrainer builds on the effectiveness of its tensor importance evaluation. We compare our metric (\(\Delta w\frac{\partial L}{\partial w}\)) to the magnitude-based metric (\(\Delta w\)) [20] and the gradient-only metric (\(\frac{\partial L}{\partial w}\)) [3], using the OPT-2.7B model with \(\rho=\)0.7. As shown in Table 6, with the same objective of FLOPs reduction, using our metric (\(\Delta w\frac{\partial L}{\partial w}\)) for tensor importance evaluation achieves the highest model accuracy and outperforms Full FT by 1%-3% on ROUGE scores. This is because magnitude-based metrics ignore the dependencies of weight updates, while gradient-only metrics capture only the direction of tensor importance and cannot reflect its intensity. Inaccurate importance measurements in turn lead to inappropriate selections of trainable tensors.

### Impact of LLM Size

A specific type of LLM may have several variants with different parameter sizes. To study GreenTrainer's performance at different LLM sizes, we fine-tuned OPT models with parameter sizes ranging from 350M to 6.7B. As shown in Table 7, even on small models (OPT-350M), GT-0.5 saves 17%-21% more training FLOPs than LoRA, while achieving 2%-4% higher accuracy (on SciTLDR) or the same accuracy (on DialogSum). When the model size increases to 2.7B, GT-0.5 outperforms LoRA, and GT-0.7 outperforms Full FT, on the SciTLDR dataset. On DialogSum, GT-0.7 performs similarly to LoRA. For the OPT-6.7B model, GT-0.4 saves 27% more training FLOPs than LoRA on SciTLDR while achieving the same model accuracy, and similar advantages are observed when comparing GT-0.5 and GT-0.7 with LoRA. Generally speaking, GreenTrainer's performance advantage applies broadly to LLMs of different sizes.
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**SciTLDR**} & \multicolumn{3}{c}{**DialogSum**} \\ \cline{2-7} & **PFLOPs** & **Time (h)** & **R1/R2/RL** & **PFLOPs** & **Time (h)** & **R1/R2/RL** \\ \hline Full FT & 41.8 & 0.92 & 32.9/14.9/27.1 & 262.0 & 5.5 & 23.6/9.5/18.8 \\ GT-0.7 (\(\Delta w\)) & 29.4 (30\%\(\downarrow\)) & 0.68 (26\%\(\downarrow\)) & 32.7/15.2/27.2 & 183.8 (30\%\(\downarrow\)) & 4.0 (27\%\(\downarrow\)) & 24.9/10.2/19.7 \\ GT-0.7 (\(\frac{\partial L}{\partial w}\)) & 29.4 (30\%\(\downarrow\)) & 0.67 (27\%\(\downarrow\)) & 32.8/15.1/27.2 & 184.0 (30\%\(\downarrow\)) & 4.0 (27\%\(\downarrow\)) & 25.0/10.2/20.0 \\ GT-0.7 (\(\Delta w\frac{\partial L}{\partial w}\)) & 29.2 (30\%\(\downarrow\)) & 0.68 (26\%\(\downarrow\)) & 33.1/15.2/27.6 & 182.7 (30\%\(\downarrow\)) & 4.0 (27\%\(\downarrow\)) & 26.8/11.0/21.6 \\ \hline \hline \end{tabular} \end{table}

Table 6: Efficacy of tensor importance metrics (OPT-2.7B)

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**SciTLDR**} & \multicolumn{3}{c}{**DialogSum**} \\ \cline{2-7} & **PFLOPs** & **Time (h)** & **R1/R2/RL** & **PFLOPs** & **Time (h)** & **R1/R2/RL** \\ \hline Full FT & 41.8 & 0.92 & 32.9/14.9/27.1 & 262.0 & 5.5 & 23.6/9.5/18.8 \\ LoRA & 27.9 (33\%\(\downarrow\)) & 0.59 (36\%\(\downarrow\)) & 28.2/12.1/21.0 & 174.7 (33\%\(\downarrow\)) & 3.6 (35\%\(\downarrow\)) & 23.8/9.5/18.8 \\ GT-0.36 & 14.9 (64\%\(\downarrow\)) & 0.32 (65\%\(\downarrow\)) & 4.1/1.7/3.6 & 92.9 (65\%\(\downarrow\)) & 1.9 (65\%\(\downarrow\)) & 15.7/5.0/13.8 \\ GT-0.4 & 16.6 (60\%\(\downarrow\)) & 0.36 (61\%\(\downarrow\)) & 28.6/11.6/23.5 & 103.4 (61\%\(\downarrow\)) & 2.2 (60\%\(\downarrow\)) & 17.9/6.3/15.4 \\ GT-0.5 & 20.8 (50\%\(\downarrow\)) & 0.46 (50\%\(\downarrow\)) & 30.5/13.1/25.2 & 130.1 (50\%\(\downarrow\)) & 2.7 (51\%\(\downarrow\)) & 21.4/8.2/17.6 \\ GT-0.6 & 25.0 (40\%\(\downarrow\)) & 0.56 (39\%\(\downarrow\)) & 33.4/15.3/27.8 & 156.6 (40\%\(\downarrow\)) & 3.3 (40\%\(\downarrow\)) & 24.0/9.7/19.2 \\ GT-0.7 & 29.2 (30\%\(\downarrow\)) & 0.68 (26\%\(\downarrow\)) & 33.1/15.2/27.6 & 182.7 (30\%\(\downarrow\)) & 4.0 (27\%\(\downarrow\)) & 26.8/11.0/21.6 \\ GT-0.8 & 33.4 (20\%\(\downarrow\)) & 0.77 (16\%\(\downarrow\)) & 33.1/15.5/27.6 & 209.6 (20\%\(\downarrow\)) & 4.4 (20\%\(\downarrow\)) & 23.9/9.9/19.1 \\ \hline \hline \end{tabular} \end{table}

Table 5: Impact of different objectives of FLOPs reduction on the OPT-2.7B model

## 5 Conclusion & Broader Impact

In this paper, we present GreenTrainer, a new fine-tuning technique for LLMs that efficiently selects trainable parameters via adaptive backpropagation, ensuring high training quality while significantly reducing the computational cost. GreenTrainer saves up to 64% of the training FLOPs compared to full fine-tuning without noticeable accuracy loss. Compared to existing fine-tuning techniques such as Prefix Tuning and LoRA, GreenTrainer achieves up to 4% higher accuracy with the same amount of FLOPs reduction. Although we target LLM fine-tuning in this paper, the rationale of GreenTrainer's adaptive backpropagation is also applicable to large generative models in other fields, such as Stable Diffusion [35] for image generation and PaLM-E [12] for motion planning of multimodal embodied agents. Extensions to these domains are left for future work.
2309.14825
Timing the moment when atom decays (and Schroedinger's cat dies)
We propose detecting the moment an atom emits a photon by means of a nearly classical macroscopic clock and discuss its viability. It is shown that what happens in such a measurement depends on the relation between the clock's accuracy and the width of the energy range available to the photon. Implications of the analysis for the long standing Schroedinger's cat problem are reported.
D. Sokolovski, A. Uranga, E. Akhmatskaya
2023-09-26T10:46:14Z
http://arxiv.org/abs/2309.14825v1
# Timing the moment when atom decays (and Schrodinger's cat dies)

D. Sokolovski\({}^{a,c,d,1}\), A. Uranga\({}^{b}\), and E. Akhmatskaya\({}^{b,c}\)

\({}^{a}\) Departamento de Quimica-Fisica, Universidad del Pais Vasco, UPV/EHU, 48940, Leioa, Spain \({}^{b}\) Basque Center for Applied Mathematics (BCAM), Alameda de Mazarredo 14, 48009, Bilbao, Spain \({}^{c}\) IKERBASQUE, Basque Foundation for Science, Plaza Euskadi 5, 48009, Bilbao, Spain \({}^{d}\) EHU Quantum Center, Universidad del Pais Vasco, UPV/EHU, 48940, Leioa, Spain

Footnote 1: Corresponding author. Email: [email protected]

November 7, 2021

**Abstract.** We propose detecting the moment an atom emits a photon by means of a nearly classical macroscopic clock and discuss its viability. It is shown that what happens in such a measurement depends on the relation between the clock's accuracy and the width of the energy range available to the photon. Implications of the analysis for the long-standing Schrodinger's cat problem are reported.

## I Introduction

In non-relativistic quantum mechanics, time is a mere parameter, quite distinct from the dynamical variables, such as positions and momenta, which are conveniently represented by Hermitian operators. This often complicates queries that are easily answered in the classical context (a good overview is given in Refs. [1] and [2]). When does a quantum particle arrive at a given location (see Egusquiza, Muga and Baute in [1], and Galapon in [2])? How much time does a tunnelling particle spend in the barrier (see, e.g., [3], [4])? How long does a quantum jump take (see Schulman in [5] and Refs. therein)? These questions continue to cause controversy, and here we add one more to the list. If an atom, initially in an excited state, emits a photon and is later found in its ground state, when exactly did the transition take place? If the decay sets off a chain of events leading to the death of a cat [6], how long ago did the cat die? This is another general problem of elementary quantum mechanics, and below we address it using the simplest model available.

### A meaningful question?

Does it make sense to talk about the moment the atom decayed? Not always. Decay of a metastable state is often described by a model [7] where a discrete state \(|e\rangle\), corresponding to an excited "atom" with energy \(E_{e}\), is connected to "reservoir" states \(\{|E_{r}\rangle\}\), representing the "atom" in its ground state, \(E_{g}=0\), plus an emitted "photon" with energy \(E_{r}\). The corresponding Hamiltonian takes the form

\[\hat{H}=\hat{H}_{0}+\hat{V},\quad\hat{H}_{0}=|e\rangle E_{e}\langle e|+\sum_{r}|E_{r}\rangle E_{r}\langle E_{r}|, \tag{1}\]

\[\hat{V}=\sum_{r}\Omega(E_{r})\left(|E_{r}\rangle\langle e|+|e\rangle\langle E_{r}|\right),\]

where \(\Omega(E_{r})\) is the matrix element responsible for the transitions between the system's discrete and continuum states, i.e., for the decay of the excited atom. In the continuum limit, whenever the final probabilities are added up, one can replace the sum \(\sum_{r}\) by an integral \(\int\rho(E_{r})dE_{r}\), where \(\rho(E_{r})\) is the density of the reservoir states [7]. After preparing the atom in its excited state and waiting for \(t\) seconds, one can find a photon with an energy \(E_{r}\).
Expanding the transition amplitude \(\langle E_{r}|\exp(-i\hat{H}t)|e\rangle\) in powers of \(\hat{V}\) reveals a variety of scenarios in which the photon, emitted for the first time at \(\tau_{\rm first}\), is re-absorbed and re-emitted until settling into its final state \(|E_{r}\rangle\) at some \(\tau_{\rm last}\). Thus, the emission process need not occur via a single transition to the ground state, but can have a finite duration \(\tau_{\rm last}-\tau_{\rm first}\). Measuring even the "first passage time" \(\tau_{\rm first}\) presents considerable difficulties [8], and we do not know if \(\tau_{\rm last}-\tau_{\rm first}\) can be measured at all. A helpful exception is the first-order transition in the weak coupling limit, which does indeed occur via a single jump,

\[\langle E_{r}|\exp(-i\hat{H}t)|e\rangle=-i\Omega(E_{r})\int_{0}^{t}d\tau\exp[-iE_{r}(t-\tau)]\exp(-iE_{e}\tau)+{\cal O}(V^{3}), \tag{2}\]

yet the jump's precise moment remains indeterminate due to the Uncertainty Principle [9]. One way to pinpoint the time of the transition is to subject the atom to frequent observations, every \(\delta t=t/K\), \(K>>1\). This, however, is known to lead to the Zeno effect, which quenches the transition: the rate changes from the Fermi golden rule value [10] for an unobserved atom, \(\Gamma_{Fermi}=2\pi|\Omega(E_{e})|^{2}\rho(E_{e})\), to \(\Gamma_{\delta t}\approx\delta t\times\int_{-\infty}^{\infty}dE\rho(E)|\Omega(E)|^{2}\), which vanishes as \(\delta t\to 0\) (see, e.g., [5]). Yet there is a case where the transition proceeds via a single jump and the Zeno effect does not occur. We discuss it next.

## II Results and discussion

### The wide band (Markovian) case

In the Markovian (wide band) approximation [7], both \(\Omega(E_{r})\) and \(\rho(E_{r})\) are taken to be constant, very small and very large respectively, i.e. \(\Omega\to 0\), \(\rho\rightarrow\infty\), in such a manner that the product \(\rho\Omega^{2}\) remains finite,

\[2\pi\rho\Omega^{2}\equiv\Gamma<\infty. \tag{3}\]

The model admits an exact solution for any \(\Gamma\), and there is no need to limit oneself to the first-order approximation (2). The amplitudes of the four possible processes are given by [7]:

\[\langle e|\exp(-i\hat{H}t)|e\rangle=\exp(-iE_{e}t-\Gamma t/2), \tag{4}\]

\[\langle E_{r}|\exp(-i\hat{H}t)|e\rangle=-i\Omega\int_{0}^{t}dt^{\prime}\exp[-iE_{r}(t-t^{\prime})]\exp(-iE_{e}t^{\prime}-\Gamma t^{\prime}/2), \tag{5}\]

\[\langle e|\exp(-i\hat{H}t)|E_{r}\rangle=0,\quad\mbox{since}\quad\Omega\to 0, \tag{6}\]

\[\langle E_{r^{\prime}}|\exp(-i\hat{H}t)|E_{r}\rangle=\exp(-iE_{r}t)\delta_{rr^{\prime}},\quad\mbox{since}\quad\langle E_{r^{\prime}}|\hat{H}|E_{r}\rangle=E_{r}\delta_{rr^{\prime}}. \tag{7}\]

By (4), the atom's decay is exponential at all times, and by (5), the energy distribution of the emitted photons is Lorentzian,

\[P(E_{r}\gets e,t\rightarrow\infty)=\frac{\rho\Omega^{2}}{(E_{r}-E_{e})^{2}+\Gamma^{2}/4}. \tag{8}\]

Further helpful for our purpose is the fact that, according to Eqs. (5) and (6), the atom can emit a photon only once, and never re-absorbs it afterwards. The moment of transition can, therefore, be defined at least in terms of the virtual scenarios available to the system. With the purely exponential decay in Eq. (4), frequent checks of the atom's state do not affect the decay rate, which stays the same with or without such checks [hence the adjective _Markovian_: \(P_{M}^{\rm decay}(t)=1-\exp(-\Gamma t)=1-[\exp(-\Gamma t/K)]^{K}=P_{\delta t}^{\rm decay}(t)\)]. Even so, the destruction of coherence between the moments of emission in Eq. (5) must change something, akin to the interference pattern in a double-slit experiment. Below we show that it is the energy spectrum of the emitted photons (8) that is affected by the measurement's accuracy.
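The Markovian predictions (4) and (8) are easy to test by direct numerical diagonalization of a discretized version of the model (1); a minimal sketch (all numerical parameters are our own illustrative choices):

```python
import numpy as np

# Discretized model (1): one excited level coupled to a flat band of "photon" states.
Nr = 2001
Er = np.linspace(-20.0, 20.0, Nr)          # wide band centred at E_e = 0
Ee = 0.0
rho = 1.0 / (Er[1] - Er[0])                # density of reservoir states
Gamma = 0.5
Omega = np.sqrt(Gamma / (2 * np.pi * rho)) # coupling fixed by Eq. (3)

H = np.diag(np.concatenate(([Ee], Er)))
H[0, 1:] = H[1:, 0] = Omega                # couplings of Eq. (1)

vals, vecs = np.linalg.eigh(H)
psi0 = np.zeros(Nr + 1); psi0[0] = 1.0
c0 = vecs.T @ psi0

def evolve(t):
    return vecs @ (np.exp(-1j * vals * t) * c0)

# Exponential decay, Eq. (4): survival probability vs exp(-Gamma*t)
for t in (1.0, 4.0, 8.0):
    print(t, abs(evolve(t)[0])**2, np.exp(-Gamma * t))

# Lorentzian spectrum, Eq. (8), once the decay is essentially complete
psi = evolve(30.0)
P_num = rho * np.abs(psi[1:])**2           # probability density in E_r
P_lor = rho * Omega**2 / ((Er - Ee)**2 + Gamma**2 / 4)
print(np.max(np.abs(P_num - P_lor)) / P_lor.max())  # small, up to finite-band effects
```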
### A quantum hourglass and its macroscopic limit

Suppose Alice, the experimenter, does not wish to subject the system to frequent checks, and prefers instead to have, at the end of the experiment, a single record of the moment the atom decayed. For this purpose, she might consider a clock which stops at the moment the atom leaves its excited state. The clock could be an hourglass, in which case the number of sand grains that have escaped would tell Alice the time of the event. A quantum analogue of an hourglass is not difficult to find. Alice could use an array of identically polarised distinguishable spins precessing in a magnetic field, and estimate the elapsed time by counting the spins which have flipped. Alternatively, Alice can employ a large number of non-interacting bosonic atoms, \(N>>1\), initially in the left well of a symmetric double-well potential (see Fig. 1). The clock's Hamiltonian is given by [\(\hat{a}_{R(L)}^{+}\) creates a boson in the right (R) or left (L) well, and \(\omega\) is the hopping matrix element]

\[\hat{H}^{\rm clock}=\omega[\hat{a}_{R}^{+}\hat{a}_{L}+\hat{a}_{L}^{+}\hat{a}_{R}], \tag{9}\]

and the amplitude for finding \(n\) bosons in the right well is easily found to be

\[A_{\rm Bose}^{\rm clock}(n\gets 0,t)=(-i)^{n}\sqrt{C_{n}^{N}p^{n}(t)(1-p(t))^{N-n}},\quad p(t)\equiv\sin^{2}(\omega t), \tag{10}\]

where \(C_{n}^{N}=\frac{N!}{n!(N-n)!}\) is the binomial coefficient.

Figure 1: A classical hourglass (left), and its quantum version (right). a) With the barrier closed (the clock is switched off) the bosons remain in the left well. b) If the barrier is down (the clock is switched on), the number of bosons escaping into the right well allows one to estimate the elapsed time.

Alice can choose \(\omega t<<1\), so that the Rabi period of a single boson is very long, and have a practically irreversible flow of bosons from left to right. By making \(N\) very large, she can also ensure that the mean number of atoms in the right well is large (except perhaps at very short times), \(\overline{n}(t)\equiv p(t)N>>1\). Under these conditions, the binomial distribution under the root sign in (10) can be approximated by a normal distribution [11], and after some algebra (see the "Methods" section (Derivation of Eq. (11))) we have

\[A_{\rm Bose}^{\rm clock}(n\gets 0,t)\approx\frac{(-i)^{n}}{[2\pi n]^{1/4}}\exp\left[-\frac{(t_{n}-t)^{2}}{\Delta t^{2}}\right],\quad t_{n}\equiv\omega^{-1}\sqrt{n/N},\quad\Delta t=\omega^{-1}N^{-1/2}. \tag{11}\]

Alice can now count the atoms in the right well and use \(t_{n}\) in Eq. (11) as an estimate of the elapsed time. Equation (11) shows that her estimate is likely to be within an error margin \(\Delta t\) of the true value \(t\). A good clock is one with a small relative error. If \(\omega t\) is kept constant while \(N\rightarrow\infty\), the error tends to zero, since \(\Delta t/t_{n}=1/\sqrt{n}\approx 1/(\omega t\sqrt{N})\sim 1/\sqrt{N}\), and with many bosons Alice has a good clock (see Fig. 2). A further remark is in order. As \(N\rightarrow\infty\), a large system of independent particles begins to develop certain classical properties [12], [13] (see also the "Methods" section (Derivation of Eq. (11))).
For example, denoting the one-particle states in the wells as \(|L\rangle\) and \(|R\rangle\), and preparing the bosons in a state \(|\Phi_{\rm Bose}^{\rm clock}(0)\rangle=\prod_{i=1}^{N}|L\rangle_{i}\), one later finds them in \(|\Phi_{\rm Bose}^{\rm clock}(t)\rangle=\prod_{i=1}^{N}[u_{LL}(t)|L\rangle_{i}+u_{RL}(t)|R\rangle_{i}]\), where \(u_{LL}(t)=\cos(\omega t)\) and \(u_{RL}(t)=-i\sin(\omega t)\) are the matrix elements of the one-particle evolution operator. The evolved state \(|\Phi_{\rm Bose}^{\rm clock}(t)\rangle\) is not an eigenstate of the operator \(\hat{n}=\sum_{i=1}^{N}|R\rangle_{i}\langle R|_{i}=\hat{a}_{R}^{+}\hat{a}_{R}\), which counts the bosons in the right well. However, expanding it in the eigenstates of \(\hat{n}\), \(\hat{n}|n\rangle=n|n\rangle\), \(n=0,1...N\), one finds [13] the coefficients localised in a range \(\sim\sqrt{N}\) around a mean value \(\overline{n}(t)\equiv\langle\Phi_{\rm Bose}^{\rm clock}(t)|\hat{n}|\Phi_{\rm Bose}^{\rm clock}(t)\rangle=N\sin^{2}(\omega t)\approx N\omega^{2}t^{2}\propto N\),

\[\langle n|\Phi_{\rm Bose}^{\rm clock}(t)\rangle\approx\frac{(-i)^{n}}{\left[2\pi\overline{n}(t)\right]^{1/4}}\exp\left[-\frac{\left(n-\overline{n}(t)\right)^{2}}{2\overline{n}(t)}\right]. \tag{12}\]

A similar localisation would occur if \(|\Phi(t)\rangle\) were expanded in any basis, and this has important consequences. Firstly, one can accurately measure \(\hat{n}\) (or any other operator [13]) and obtain a result close to its mean value (\(\sim N\)) with an error margin \(\sim\sqrt{N}\). This is a good measurement, since its relative error tends to zero. Secondly, one can measure it inaccurately, e.g., by using a von Neumann pointer prepared in a Gaussian state of width \(\sim N^{1/2+\epsilon}\), where \(0<\epsilon<1/2\) [13]. This is still a good measurement, since \(N^{1/2+\epsilon}/N\to 0\), but also one which in the limit \(N\to\infty\) leaves the state (12) almost intact, since \(N^{1/2+\epsilon}/\sqrt{N}\to\infty\) (see the "Methods" section (A macroscopic clock) for details). Alice can keep reading this macroscopic, nearly classical clock without affecting its operation, just as she would a classical wrist watch.

Figure 2: Comparison between Eqs. (10) and (11). Out of \(N=10^{5}\) bosons, 250 are in the right well.
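The quality of the Gaussian approximation (11) for the parameters of Fig. 2 can be checked directly; a minimal sketch (our own code, not the authors'):

```python
import numpy as np
from math import lgamma

N, omega = 10**5, 1.0
t = np.arcsin(np.sqrt(250 / N)) / omega    # chosen so that n_bar = N sin^2(wt) = 250
p = np.sin(omega * t)**2
n = np.arange(180, 321)

# Exact |A(n <- 0, t)|^2 from Eq. (10), via log-binomial for numerical safety
logC = np.array([lgamma(N + 1) - lgamma(k + 1) - lgamma(N - k + 1) for k in n])
P_exact = np.exp(logC + n * np.log(p) + (N - n) * np.log(1 - p))

# Gaussian approximation of Eq. (11), squared
t_n = np.sqrt(n / N) / omega
dt = 1.0 / (omega * np.sqrt(N))
P_gauss = np.exp(-2 * (t_n - t)**2 / dt**2) / np.sqrt(2 * np.pi * n)

print(np.max(np.abs(P_exact - P_gauss)) / P_exact.max())  # a few percent at most
```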
### A clock which first runs and then stops

Next, Alice needs to make the clock run until the moment the atom emits a photon. This can be achieved by coupling it to the atom-photon Markovian system (\(M\)) by means of the Hamiltonian

\[\hat{H}^{\rm a+ph+clock}=\hat{H}_{M}+\hat{\pi}_{e}\hat{H}^{\rm clock},\quad\hat{\pi}_{e}\equiv|e\rangle\langle e|, \tag{13}\]

where \(\hat{\pi}_{e}\) projects onto the atom's excited state. The corresponding Schrodinger equation is easily solved, and the amplitude for the composite {(a)tom + (ph)oton + clock}, starting with the right well empty, to end with \(n\) bosons there, is found to be (see the "Methods" section (Coupling the clock to a quantum system))

\[A_{\rm Bose}^{\rm a+ph+clock}(j,n\gets e,0)=\int_{0}^{t}A_{\rm Bose}^{\rm clock}(n\gets 0,\tau)A^{\rm a+ph}(j\gets e,t|\tau)d\tau, \tag{14}\]

\[j=e\quad{\rm or}\quad E_{r},\quad n=0,...N,\]

where \(A_{\rm Bose}^{\rm clock}(n\gets 0,\tau)\) is given by Eq. (11), and \(A^{\rm a+ph}(j\gets e,t|\tau)\) is the amplitude for the atom-photon system to reach a final state \(|e\rangle\) or \(|E_{r}\rangle\) after remaining in \(|e\rangle\) for exactly \(\tau\) seconds,

\[A^{\rm a+ph}(j\gets e,t|\tau)=\langle j|\hat{U}^{\rm a+ph}(t|\tau)|e\rangle, \tag{15}\]

\[\hat{U}^{\rm a+ph}(t|\tau)\equiv(2\pi)^{-1}\int_{-\infty}^{\infty}\exp[i\lambda\tau-i(\hat{H}_{M}+\lambda\hat{\pi}_{e})t]d\lambda,\]

where \(\hat{U}^{\rm a+ph}(t|\tau)\) is the conditional evolution operator. This is clearly the desired result: the clock runs only while the atom remains in the excited state, and the amplitudes are added over all possible durations \(\tau\), which may lie between \(0\) and \(t\). The integral in Eq. (15) is evaluated by noting that adding \(\lambda\hat{\pi}_{e}\) to \(\hat{H}_{M}\) only shifts the energy of the discrete state, \(E_{e}\), by \(\lambda\) (see the "Methods" section (Timing the transition in the Markovian case)). The result (\(0\leq\tau\leq t\)),

\[A^{\rm a+ph}(e\gets e,t|\tau)=\exp(-iE_{e}t-\Gamma t/2)\delta(\tau-t), \tag{16}\]

\[A^{\rm a+ph}(E_{r}\gets e,t|\tau)=-i\Omega\exp[-iE_{r}(t-\tau)]\exp[-i(E_{e}-i\Gamma/2)\tau],\]

confirms what is already known from Eqs. (4) and (5). An atom still found in the excited state at \(t\) has remained in that state the whole time. An atom found in the ground state has not returned to the excited state after making a single transition at some \(\tau\) between \(0\) and \(t\).

Alice the practitioner can now prepare the atom in its excited state, couple it to a "good" clock (11), wait until time \(t\), and then measure the energy of the photon (if any), as well as count the bosons in the right well. She can find no photon and \(n\) bosons, with probability

\[P(e,n\gets e,0)=\exp(-\Gamma t)P_{\rm Bose}^{\rm clock}(n\gets 0,t),\quad\sum_{n=0}^{N}P_{\rm Bose}(n\gets 0,t)=1, \tag{17}\]

where \(P_{\rm Bose}^{\rm clock}(n\gets 0,t)=|A_{\rm Bose}^{\rm clock}(n\gets 0,t)|^{2}\) (see Eq. (11)). Alternatively, she may find \(n\) bosons and a photon with an energy \(E_{r}\), and conclude that the emission occurred around (see Eq. (11))

\[\tau_{n}=\omega^{-1}\sqrt{n/N}. \tag{18}\]

The error of this estimate is determined by the width of the Gaussian (11), which restricts the possible values of \(\tau\) in Eq. (14). Alice's relative error is, therefore, \(\Delta t/\tau_{n}\sim 1/\sqrt{n}<<1\), where \(\Delta t=\omega^{-1}N^{-1/2}\) was defined in Eq. (11). The probability of this outcome is given by the absolute square of \(A_{\rm Bose}^{\rm a+ph+clock}(E_{r},n\gets e,0)\) in Eq. (14). Extending the limits of integration in Eq. (14) to \(\pm\infty\) and evaluating the Gaussian integrals yields

\[P(E_{r},\tau_{n}\gets e,0)\approx\frac{\pi\Omega^{2}}{\omega\sqrt{nN}}\exp(-\Gamma\tau_{n})\times\frac{\Delta t}{\sqrt{2\pi}}\exp[-(E_{r}-E_{e})^{2}\Delta t^{2}/2] \tag{19}\]

for \(0<\tau_{n}<t\), and \(P(E_{r},\tau_{n}\gets e,0)=0\) otherwise.
The net probability of an outcome \(\tau_{n}\) is

\[P(\tau_{n}\gets e,0)=\int dE_{r}\rho(E_{r})P(E_{r},\tau_{n}\gets e,0)\approx\frac{\Gamma}{2\omega\sqrt{nN}}\exp[-\Gamma\tau_{n}], \tag{20}\]

and replacing \(\sum_{n}\rightarrow\int_{0}^{t}d\tau_{n}\) helps to verify that the overall decay rate is not affected by the presence of the clock, \(P_{\Delta t}^{\rm decay}(t)=\sum_{n}P(\tau_{n}\gets e,0)=1-\exp(-\Gamma t)\). Finally, the spread of the energies of the emitted photons is no longer Lorentzian, but Gaussian,

\[P(E_{r}\gets e,t\rightarrow\infty)=\sum_{n}P(E_{r},\tau_{n}\gets e,0)\approx\frac{\Delta t}{\sqrt{2\pi}}\exp[-(E_{r}-E_{e})^{2}\Delta t^{2}/2], \tag{21}\]

and becomes broader as Alice's accuracy improves, \(\Delta t\to 0\). [Note that we cannot arrive at the Lorentzian distribution (8) simply by sending \(\Delta t\rightarrow\infty\) in Eq.(21), since Eq.(20) was derived under the assumption that the number of bosons in the right well is large.]

### A clock which first waits and then runs

Alice can also consider a Markovian clock which starts running only after the transition has taken place and continues doing so until the time of observation \(t\). (It will become clear shortly why this case is of interest.) Replacing in Eq.(13) the projector \(\hat{\pi}_{e}\) by \(1-\hat{\pi}_{e}=\int_{-\infty}^{\infty}dE_{r}|E_{r}\rangle\langle E_{r}|\), \(\tau\) with \(t-\tau\), and acting as before yields (see the "Methods" section (Coupling the clock to a quantum system))

\[A_{\rm Bose}^{\rm a+ph+clock}(e,n\gets e,0)=\exp[-iE_{e}t-\Gamma t/2]\delta_{n0}, \tag{22}\]
\[A_{\rm Bose}^{\rm a+ph+clock}(E_{r},n\gets e,0)\approx\frac{(-i)^{n+1}\Omega}{[2\pi n]^{1/4}}\times\]
\[\int_{0}^{t}\exp\left[-\frac{(t_{n}-\tau)^{2}}{\Delta t^{2}}\right]\exp[-iE_{r}\tau-i(E_{e}-i\Gamma/2)(t-\tau)]d\tau,\]

where \(\delta_{n0}\) is the Kronecker delta. Now the number of bosons in the right well is determined by the time which has elapsed since the moment of emission, and we can attend to the cat which dies as a result of the atom's decay.

### Exploding powder kegs and poisoned cats

It is difficult to resist the temptation to relate the present discussion to the famous Schrödinger's Cat problem. In 1935 Einstein and Schrödinger discussed a hypothetical case in which the explosion of a powder keg was caused by a photon emitted by a decaying atom. In [6] Schrödinger dramatised the narrative further by replacing the unstable powder by a now famous live cat, which dies in the event. The perceived contradiction was due to the fact that, prior to the final observation of the cat's state, the wave function of the joint system was deemed to be a superposition of the states \(|{\rm atom}\): excited\(\rangle\otimes|{\rm cat}\): alive\(\rangle\) and \(|{\rm atom}\): decayed\(\rangle\otimes|{\rm cat}\): dead\(\rangle\). With the wave function believed to reflect the actual condition of a system, this left a big question mark over the cat's situation prior to its being found either dead or alive. The same contradiction was present in the powder keg example, where, again, macroscopically distinguishable states \(|{\rm unexploded}\rangle\) and \(|{\rm exploded}\rangle\) were forced into superposition through entanglement with the atom. It is worth revisiting the situation by replacing the cat (the keg) with the (nearly) classical clock introduced above.
So far, the cat paradox did not arise because we only required a matrix element of a unitary operator \(\hat{U}^{\rm a+ph+clock}(t)=\exp(-i\hat{H}^{\rm a+ph+clock}t)\) between the states \(|E_{r}\rangle\otimes|n\rangle\) and \(|e\rangle\otimes|0\rangle\) in the Hilbert space of the composite a+ph+clock. The question, we recall, was "is there a photon, and how many bosons are there in the right well at \(t\)?" Although there appears to be no need for it, one can create a kind of "cat" problem by looking at the ket

\[\hat{U}^{\rm a+ph+clock}(t)|e,0\rangle=\exp[-iE_{e}t-\Gamma t/2]|\Phi^{\rm clock}_{\rm Bose}(0)\rangle\otimes|e\rangle+ \tag{23}\]
\[\sum_{r}\int_{0}^{t}d\tau^{\prime}A^{\rm a+ph}(E_{r}\gets e,t|t-\tau^{\prime})|\Phi^{\rm clock}_{\rm Bose}(\tau^{\prime})\rangle\otimes|E_{r}\rangle\]

and object to the appearance of a superposition of distinguishable macroscopic states on the r.h.s. of Eq.(23). Indeed, for an accurate clock, i.e. \(\Delta t\to 0\) (\(N>>1\)), the clock's states on the r.h.s. of Eq.(23) are practically orthogonal [cf. Eq.(12)], \(\langle\Phi^{\rm clock}_{\rm Bose}(\tau^{\prime})|\Phi^{\rm clock}_{\rm Bose}(\tau)\rangle\sim\exp\left[-(\tau-\tau^{\prime})^{2}/\Delta t^{2}\right]\xrightarrow[\Delta t\to 0]{}0\). Alternatively, one can avoid the paradox of the cat being both dead and alive by considering the superposition to be a transient artefact of the calculation, needed only to establish the likelihood of finding \(n\) escaped bosons, and having no further significance. The analogy can be taken further. Neither the cat's demise nor an explosion is a purely instantaneous event. By looking at the deterioration of the cat's body (we leave aside the question of what it means to be alive) one can tell how long ago it stopped functioning. By looking at how much of the powder has been burnt, or how much of the dust thrown up in the air has settled, it is possible to deduce the moment when the explosion started. Remarkably, the waiting clock introduced above keeps a similar record, only in a more direct way (Fig.3). Alice may find no bosons in the right well (the cat is alive), or a certain number of them (a particular stage of decay of the dead cat's body). The more accurately Alice is able to deduce the "moment of death", the broader will be the energy distribution of the photon whose emission has killed the cat [cf. Eq.(21)]. A valid analogy could be a very long fuse, whose burnt length (the number of bosons in the right well) would let one deduce the moment when it was set on fire.

### Beyond the wide band approximation

Next we revisit the more general (non-Markovian) case of Section II [cf. Eq.(2)], where the product \(|\Omega(E_{r})|^{2}\rho(E_{r})\) may depend on the photon's energy, \(\int_{-\infty}^{\infty}dE_{r}|\Omega(E_{r})|^{2}\rho(E_{r})\) is finite, and the transition occurs via a single jump. Only a small proportion of all atoms will be found decayed by the time \(t\), but Alice may still want to know when this unlikely transition did occur.
A simple calculation (see the "Methods" section (Timing the first-order transition in a non-Markovian case)) shows that the probability of the clock's reading \(\tau_{n}\), for a system ending in a state \(|E_{r}\rangle\), is still given by an expression similar to Eq.(19),

\[P(E_{r},n\gets e,0,t)\approx\frac{\pi\Omega^{2}(E_{r})\Delta t^{2}}{[2\pi n]^{1/2}}\exp[-(E_{r}-E_{e})^{2}\Delta t^{2}/2], \tag{24}\]

so that measuring the moment of emission to an accuracy \(\Delta t\) broadens the range of the photon's energies, which grows as \(1/\Delta t\) owing to the Gaussian on the r.h.s. of Eq.(24). Therefore, it is the availability of the system's final states that restricts the decay rate, and is responsible for the Zeno effect already mentioned in Sect.II. Indeed, acting as before (cf. Section V), for the probability to decay by the time \(t\) we find

\[P_{\Delta t}^{\rm decay}(t)=\sum_{n}\int dE_{r}\rho(E_{r})P(E_{r},n\gets e,0,t)=\Gamma_{\Delta t}\times t, \tag{25}\]

\[\Gamma_{\Delta t}\equiv\sqrt{2\pi}\Delta t\int dE_{r}\rho(E_{r})\Omega^{2}(E_{r})\exp[-(E_{r}-E_{e})^{2}\Delta t^{2}/2]. \tag{26}\]

In the Markovian wide band limit \(\Gamma_{\Delta t}\) in Eq.(26) does reduce to Fermi's golden rule [10], \(\Gamma_{\Delta t}=\Gamma_{\rm Fermi}=2\pi\Omega^{2}\rho\). But if the integration of an ever broader Gaussian is restricted to a finite range, the factor of \(\Delta t\) in Eq.(26) is no longer cancelled, and the decay rate eventually decreases as the measurement becomes more accurate.

Figure 3: An artist's impression of a primitive cat a) alive and well, and b) sadly, dead for some time. Any resemblance to real cats, living or dead, is purely coincidental.

For example, consider the special case of an energy band of a width \(\Delta E_{r}=E_{\rm max}-E_{\rm min}\), wherein \(\rho(E_{r})\Omega^{2}(E_{r})=const\). Comparing the decay rates prescribed by Eq.(26) and by Fermi's rule, we have

\[\Gamma_{\Delta t}/\Gamma_{\rm Fermi}\xrightarrow[\Delta t\to 0]{}\frac{\Delta E_{r}\Delta t}{\sqrt{2\pi}}. \tag{27}\]

The accuracy with which the moment of emission can be determined without significantly altering the decay rate is, ultimately, limited by the width of the energy range available to the emitted photon. What happens for not too small values of \(\Delta t\) depends, however, on whether the excited atom's energy lies within the allowed range, as explained in Fig.4. If \(E_{e}<E_{\rm min}\) or \(E_{e}>E_{\rm max}\), an unobserved atom cannot decay, and the decay rate first increases as \(\Delta t\) becomes smaller, leading to a kind of "anti-Zeno" effect [16]. It eventually begins to fall off, in agreement with Eq.(27), when the exponential in Eq.(24) can be approximated by unity.

Figure 4: The rate of decay into a finite-sized band \(E_{\rm min}\leq E_{r}\leq E_{\rm max}\) as a function of the clock's accuracy \(\Delta t\) [\(\xi=2(E_{e}-E_{\rm min})/\Delta E_{r}\)]. For \(E_{\rm min}<E_{e}<E_{\rm max}\) better accuracy means a smaller decay rate (Zeno effect); for \(E_{e}<E_{\rm min}\) there is an initial increase in the value of \(\Gamma\) (anti-Zeno effect). Both possibilities are illustrated in the inset. The anti-Zeno effect also occurs in the case \(E_{e}>E_{\rm max}\), not shown here.

With all this in mind, we can revisit the analysis of [5], where the _duration_ of a jump was estimated in the following manner. Every \(\delta t\) seconds one checks whether the atom continues in its excited state.
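For the flat band of Eq. (27), the Gaussian integral in Eq. (26) can be done in closed form, and the Zeno and anti-Zeno regimes of Fig. 4 are easy to reproduce. A minimal sketch, assuming an illustrative band \([-1,1]\) and normalising by the wide-band value \(\Gamma_{\rm Fermi}=2\pi\rho\Omega^{2}\) (with the in-band constant \(\rho\Omega^{2}\)):

```python
import numpy as np
from scipy.special import erf

# Gamma_dt / Gamma_Fermi for rho*Omega^2 = const on [E_min, E_max],
# obtained by integrating the Gaussian in Eq. (26) in closed form.
def ratio(dt, E_e, E_min=-1.0, E_max=1.0):
    return 0.5 * (erf((E_max - E_e) * dt / np.sqrt(2))
                  - erf((E_min - E_e) * dt / np.sqrt(2)))

for dt in [0.1, 1.0, 10.0, 100.0]:
    print(f"dt={dt:6.1f}  inside band: {ratio(dt, E_e=0.0):.3f}"
          f"  below band: {ratio(dt, E_e=-2.0):.3f}")
# Inside the band the ratio falls monotonically to 0 as dt -> 0 (Zeno);
# for E_e below the band it first rises and then falls as dt grows
# (anti-Zeno), cf. Fig. 4.
```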
The jump time, \(\tau_{\rm J}\), is then taken to be the \(\delta t\) for which the checks begin to affect the atom's decay rate. For \(\tau_{\rm J}\), [5] finds

\[\tau_{\rm J}\approx\Gamma_{\rm Fermi}\tau_{\rm z}^{2}, \tag{28}\]

where \(\tau_{\rm z}\equiv\left[\langle e|\hat{H}_{\rm a+ph}^{2}|e\rangle-\langle e|\hat{H}_{\rm a+ph}|e\rangle^{2}\right]^{-1/2}\) is the "Zeno time". In the regime studied in [5], \(\delta t\) is short enough for the transition to occur via a single jump. One can, therefore, equally interpret \(\tau_{\rm J}\) as the _uncertainty_ in the moment at which an instantaneous transition takes place. In the case we have studied here, Alice's clock begins to affect the decay rate when its error is of the order of the inverse band width, \(\Delta t\sim 1/\Delta E_{r}\) [cf. Eq.(27)]. It is easy to check (see the "Methods" section (The "jump time" in Eq.(28))) that Eq.(28) yields a similar result, \(\tau_{\rm J}\sim 1/\Delta E_{r}\).

### Feynman's "only mystery of quantum mechanics"

All this leaves one with a question: "what can be said about the moment of emission if it has not been timed by a cat, gunpowder, or a clock?" Very little, according to Feynman [9], [14]. In a double slit experiment a particle can reach a point on the screen by passing through one of the two holes, with the probability amplitudes \(A_{1}\) and \(A_{2}\), respectively. The probability of arriving at the screen with both slits open is \(|A_{1}+A_{2}|^{2}\), while with only the first one open it is \(|A_{1}|^{2}\). With no restriction on the signs of the amplitudes, it is possible to have (e.g., near a dark fringe) \(|A_{1}+A_{2}|^{2}<|A_{1}|^{2}\), so that eliminating one of the routes increases the number of arriving particles. For this reason, it is not possible to assume that a setting of the particle's internal machinery (or any other hidden variable) predetermines the hole to be chosen by each particle on its way to the screen. The mathematics could not be simpler, and one must conclude that "... when you have no apparatus to determine through which hole the thing goes, then you cannot say that it either goes through one hole or the other". This is an illustration of the Uncertainty Principle [9], which states that one cannot determine which of the alternatives has been taken without destroying interference between them. The same principle, applied to the case of a decaying atom, states that with no apparatus to determine the moment of decay, one cannot say that the atom emits a photon with an energy between \(E_{r}\) and \(E_{r}+dE_{r}\) at one moment or the other. Indeed, if each atom were predestined to decay at a given time, the number of decayed atoms could only _increase_ or stay the same as the time span available for the atom's decay becomes longer. However, the corresponding probability is given by \(W(E_{r},E_{r}+dE_{r})=P(E_{r})dE_{r}\), and \(P(E_{r})=\rho|\int_{0}^{t}A^{\rm a+ph}(E_{r}\gets e,t|\tau)d\tau|^{2}\), shown in Fig.5, can _decrease_ with \(t\). (Note that the probability in Fig.5 is that of a single measurement made at different times. If the decayed atoms are counted twice, the number measured at a later time is, of course, always greater.) The decrease cannot be blamed on the re-absorption of the photon, impossible in the Markovian model [cf. Eq.(6)]. Neither can it be explained by a change in the emitted photon's energy [cf. Eq.(7)]. This seems even stranger than the double slit case.
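The non-monotonic behaviour of \(P(E_{r})\) shown in Fig. 5 can be checked directly from the Markovian amplitude (16). Below is a short sketch (our own code, with assumed parameter values; the overall phase \(\exp(-iE_{r}t)\) drops out of the modulus):

```python
import numpy as np

# P(E_r) = rho * |∫_0^t A(E_r <- e, t|tau) dtau|^2 with the amplitude of
# Eq. (16); the time integral has the closed form (e^{z t} - 1)/z.
Gamma, E_e, rho, Omega = 1.0, 0.0, 1.0, 1.0

def P(E_r, t):
    z = 1j * (E_r - E_e) - Gamma / 2
    integral = (np.exp(z * t) - 1.0) / z
    return rho * Omega**2 * abs(integral)**2

E_r = 4.0                            # an off-resonant photon energy
for t in [0.5, 1.0, 2.0, 4.0, 8.0]:
    print(f"t={t:4.1f}  P={P(E_r, t):.4f}")
# Off resonance P(E_r) oscillates and can decrease as t grows, even
# though no photon can be re-absorbed in this model.
```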
One could imagine the routes passing through different holes merged, like two confluent rivers, where it is impossible to say on which of the two a boat is. Merging time intervals may be even more difficult to fathom, but to conclude that an unobserved transition has occurred at a particular moment would lead to "an error in prediction" [14], as was discussed above. This is, according to Feynman [9], the only mystery of quantum mechanics, which defies "any deeper explanation".

Figure 5: Probability of finding the photon in a unit interval around energy \(E_{r}\) in a single measurement made at time \(t\).

## Conclusions

The story of Schrödinger's cat, whose death is caused by the decay of an excited atom, is one of the best known illustrations of a problem which one expects to arise when the classical world meets its quantum counterpart [6]. A classical system, believed to have an unbroken continuous history, appears to lose this property if forced to interact with a quantum object, for which no continuous description is thought to be available [17]. To bridge the gap between the classical and quantum views we design a nearly classical macroscopic clock, capable of timing the moment of decay to a good, yet finite accuracy. The complete narrative is as follows. An atom, prepared in its excited state, is found decayed at time \(t\), after having emitted a photon with energy \(E_{r}\). The instant of emission is unknown, and to determine it the experimenter needs a device which would measure it. One suitable choice is a clock consisting of a large number \(N\) of noninteracting bosonic atoms, initially trapped in the left well of a double-well potential. Finding that \(n\) bosons have made the transition to the right well, one can estimate the elapsed time \(t\) as \(t_{n}\approx\omega^{-1}\sqrt{n/N}\), with an error \(\Delta t\approx\omega^{-1}/\sqrt{N}\). With the transition amplitude \(\omega\) small and the number of bosons large, \(N>>1\), the clock is a source of an irreversible current flowing from left to right. With many bosons in the right well, \(N>>n>>1\), the clock is seen to acquire an important classical property. Its wave function becomes localised, and one is able to measure time to a good accuracy without significantly perturbing the clock's evolution (for more details see also [13]). The clock can be arranged to run until the moment of emission, which would yield a good estimate of the time of emission provided \(\Delta t/t_{n}<<1\), except in the unlikely case of the decay occurring almost immediately. The effect of the measurement on the atom's decay depends on the range of energies \(\Delta E_{r}\) available to the emitted photon. In the wide band limit, \(\Delta E_{r}\Delta t>>1\), the decay rate \(\Gamma\) remains the same, and the destruction of interference between the moments of emission leads only to a broadening of the photon's energy spectrum, whose shape is no longer Lorentzian, but Gaussian, with a width \(\sim 1/\Delta t\). Having obtained a result \(t_{n}\), and knowing that more measurements could have been added both before and after \(t=t_{n}\) (almost) without altering the clock's evolution, the experimenter has a complete history of what has happened. The atom remained in its excited state until some moment \(t^{\prime}\), \(t_{n}-\Delta t\lesssim t^{\prime}\lesssim t_{n}+\Delta t\), and then continued in the ground state until the time when the clock is read. Note that essential for recovering such a continuous description is the classical property of the macroscopic clock, reached in the limit \(N>>1\).
The Zeno effect sets in when the inverse of the clock's accuracy becomes comparable to the range of available photon's energies, \(\Delta E_{r}\Delta t\lesssim 1\). Now the notion of the moment of decay is meaningful only in the weak coupling limit, \(\Gamma_{\rm Fermi}t<<1\) [cf. Eq.(2)]. In the "narrow band" limit, \(\Delta E_{r}\Delta t<<1\), the decay rate is proportional to \(\Delta t\), and the unlikely atomic decay is further suppressed as \(\Delta t\to 0\). The clock, set up to run after the decay has occurred, helps provide an additional insight into the fate of Schrödinger's feline [6]. Now one knows that there were no bosons in the right well until \(t_{n}\) (within an error margin \(\Delta t\)), after which their number there was steadily growing. One can leave the question of what it means to be alive outside the scope of quantum theory, and concentrate instead on the deterioration of the cat's macroscopic physical body. The waiting clock is a blueprint for a very primitive "cat", said to be alive if there are no bosons in the right well, \(n=0\), and dead in some stage of decay with \(n>>1\). If the analogy holds, a real cat's physical frame should be characterised by a quantum uncertainty \(\Delta t_{\rm cat}\), which limits the ability of an experienced forensic scientist to determine the time of death by studying the cat's remains. The cat's fate depends, therefore, on the details of the atom's decay. In the wide band model, the probability of survival up to time \(t\) decreases as \(\exp(-\Gamma t)\), regardless of \(\Delta t_{\rm cat}\). However, in the finite band case, a cat whose body would allow one to determine the moment of death with greater precision should have a better chance to survive its ordeal.

## Methods

### Derivation of Eq.(11)

The normal approximation to \(A_{\rm Bose}^{\rm clock}(n\gets 0,t)=(-i)^{n}\sqrt{C_{n}^{N}p^{n}(1-p)^{N-n}}\) reads (\(p(t)=\omega^{2}t^{2}<<1\))

\[A_{\rm Bose}^{\rm clock}(n\gets 0,t)\approx(-i)^{n}(2\pi Np)^{-1/4}\exp[-(n-Np)^{2}/4Np]. \tag{29}\]

In the new variables \(t_{n}\equiv\omega^{-1}\sqrt{n/N}\) and \(\Delta t\equiv\omega^{-1}N^{-1/2}\) we have

\[A_{\rm Bose}^{\rm clock}(n\gets 0,t)\approx(-i)^{n}(2\pi N\omega^{2}t^{2})^{-1/4}\exp[-(t_{n}^{2}-t^{2})^{2}/4t^{2}\Delta t^{2}]. \tag{30}\]

As \(N\rightarrow\infty\), \(\Delta t\to 0\), the exponential is sharply peaked around \(t_{n}\sim t\), or \(n\sim N\omega^{2}t^{2}\), and the amplitude can be approximated by

\[A_{\rm Bose}^{\rm clock}(n\gets 0,t)\approx(-i)^{n}(2\pi n)^{-1/4}\exp[-(t_{n}-t)^{2}/\Delta t^{2}], \tag{31}\]

which is Eq.(11).

### A macroscopic clock

Consider \(N\) non-interacting bosons, each occupying the same state \(|\phi\rangle\), \(|\Phi\rangle=\prod_{i=1}^{N}|\phi\rangle_{i}\). Expanding the \(|\phi\rangle\)s in an orthonormal basis \(|j\rangle\), \(j=1,2\), \(|\phi\rangle=\alpha|1\rangle+\beta|2\rangle\), yields

\[|\Phi\rangle=\sum_{n_{1}=0}^{N}B_{n_{1}}|n_{1},N\rangle,\quad B_{n_{1}}=\sqrt{C_{n_{1}}^{N}}\alpha^{n_{1}}\beta^{N-n_{1}}, \tag{32}\]

where \(C_{n_{1}}^{N}\) is the binomial coefficient, and \(|n_{1},N\rangle\) describes a state with \(n_{1}\) particles populating the state \(|1\rangle\). Suppose one wants to measure the number of particles in the state \(|1\rangle\), \(\hat{N}_{1}=\sum_{i=1}^{N}|1\rangle_{i}\langle 1|_{i}\), using a Gaussian von Neumann pointer, whose initial state is \(G(f)=C\exp[-f^{2}/\Delta f^{2}]\).
After the measurement, for the entangled state of the pointer and the bosons one finds

\[\langle f|\Phi\rangle=\sum_{n_{1}=0}^{N}B_{n_{1}}G(f-n_{1})|n_{1},N\rangle. \tag{33}\]

The distribution of the pointer's readings \(f\) and the mixed state of the bosons \(\hat{\rho}\) are, therefore, given by

\[w(f)=\sum_{n_{1}=0}^{N}|B_{n_{1}}|^{2}G^{2}(f-n_{1}), \tag{34}\]

and

\[\hat{\rho}=\sum_{n_{1},n_{1}^{\prime}=0}^{N}B_{n_{1}^{\prime}}^{*}B_{n_{1}}I_{n_{1}^{\prime}n_{1}}|n_{1},N\rangle\langle n_{1}^{\prime},N|,\quad I_{n_{1}^{\prime}n_{1}}\equiv\int G(f-n_{1}^{\prime})G(f-n_{1})df. \tag{35}\]

With \(N,|\alpha|^{2}N>>1\), the readings lie near the mean value \(\overline{n}_{1}=|\alpha|^{2}N\). Using the normal approximation for the binomial distribution \(|B_{n_{1}}|^{2}\), and replacing the sum by an integral, yields

\[w(f)\approx\frac{C^{2}}{\sigma\sqrt{2\pi}}\int\exp\left[-\frac{(n_{1}-\overline{n}_{1})^{2}}{2\sigma^{2}}-\frac{2(f-n_{1})^{2}}{\Delta f^{2}}\right]dn_{1}\sim\exp\left[-\frac{2(f-\overline{n}_{1})^{2}}{\Delta f^{2}+4\sigma^{2}}\right], \tag{36}\]

where \(\sigma=\sqrt{N|\alpha|^{2}(1-|\alpha|^{2})}\). For a large \(N\) it is possible to choose \(\sigma<<\Delta f<<\overline{n}_{1}\). This yields a good measurement, \(w(f)\sim\exp{[-2(f-\overline{n}_{1})^{2}/\Delta f^{2}]}\), with a relative error \(\sim\Delta f/\overline{n}_{1}<<1\). What is more, since the non-zero \(B_{n_{1}}\)s lie within a range \(\sim\sigma\) around \(\overline{n}_{1}\), all relevant factors \(I_{n_{1}^{\prime}n_{1}}\) in Eq.(35) can be replaced by unity. Thus, the bosons' state is almost unperturbed by a good, yet weakly perturbing, measurement, and is ready for the next observation. Since the choice of the basis \(|j\rangle\) is arbitrary, one can say that, for a large system, different collective (macroscopic) variables acquire well defined "classical" values even when the corresponding one-particle projectors \(\hat{n}_{1}=|1\rangle\langle 1|\) and \(\hat{n}_{1^{\prime}}=|1^{\prime}\rangle\langle 1^{\prime}|\) do not commute. By the same token, the progress of a large system can be monitored by consecutive measurements of the same macroscopic quantity without seriously affecting its evolution. This is "classicality by numbers" [13].

### Coupling the clock to a quantum system

Consider an evolution operator for a system (s) coupled to a clock,

\[\hat{U}^{\rm s+clock}(t)=\exp[-i(\hat{H}^{\rm s}+\hat{\pi}\hat{H}^{\rm clock})t], \tag{37}\]

where \(\hat{\pi}\) projects onto a sub-space \(h\) of the system's Hilbert space. Since \(\hat{H}^{\rm clock}\) commutes with both \(\hat{H}^{\rm s}\) and \(\hat{\pi}\), we can write (\(\delta(x)\) is the Dirac delta)

\[\hat{U}^{\rm s+clock}(t)=\int_{-\infty}^{\infty}d\lambda\delta(\lambda-\hat{H}^{\rm clock})\exp[-i(\hat{H}^{\rm s}+\lambda\hat{\pi})t]. \tag{38}\]

But \(\delta(\lambda-\hat{H}^{\rm clock})=(2\pi)^{-1}\int_{-\infty}^{\infty}d\tau\exp(i\lambda\tau-i\hat{H}^{\rm clock}\tau)\), and we have

\[\hat{U}^{\rm s+clock}(t)=\int_{-\infty}^{\infty}d\tau\hat{U}^{\rm s}(t|\tau)\hat{U}^{\rm clock}(\tau), \tag{39}\]

where \(\hat{U}^{\rm s}(t|\tau)=(2\pi)^{-1}\int_{-\infty}^{\infty}d\lambda\exp(i\lambda\tau)\exp[-i(\hat{H}^{\rm s}+\lambda\hat{\pi})t]\) evolves the system under the additional condition that it must spend \(\tau\) seconds in the chosen sub-space, and \(\hat{U}^{\rm clock}(\tau)=\exp(-i\hat{H}^{\rm clock}\tau)\) evolves the clock for precisely \(\tau\) seconds.
If the clock is set to measure the duration spent by the system in the sub-space orthogonal to \(h\), \(\hat{\pi}\) is replaced by \(1-\hat{\pi}\), and Eq.(39) becomes

\[\hat{U}^{\rm s+clock}(t)=\int\hat{U}^{\rm s}(t|\tau)\hat{U}^{\rm clock}(t-\tau)d\tau=\int\hat{U}^{\rm s}(t|t-\tau)\hat{U}^{\rm clock}(\tau)d\tau, \tag{40}\]

with the clock running whenever the system is _not_ in the subspace \(h\). For a transition amplitude between states \(|\psi^{\rm s}_{i}\rangle|\phi^{\rm clock}_{i}\rangle\) and \(|\psi^{\rm s}_{f}\rangle|\phi^{\rm clock}_{f}\rangle\), \(A^{\rm s+clock}(\psi^{\rm s}_{f},\phi^{\rm clock}_{f}\leftarrow\psi^{\rm s}_{i},\phi^{\rm clock}_{i},t)=\langle\psi^{\rm s}_{f}|\langle\phi^{\rm clock}_{f}|\exp(-i\hat{H}^{\rm s+clock}t)|\psi^{\rm s}_{i}\rangle|\phi^{\rm clock}_{i}\rangle\), we have

\[A^{\rm s+clock}(\psi^{\rm s}_{f},\phi^{\rm clock}_{f}\leftarrow\psi^{\rm s}_{i},\phi^{\rm clock}_{i},t)=\int A^{\rm s}(\psi^{\rm s}_{f}\leftarrow\psi^{\rm s}_{i},t|\tau)A^{\rm clock}(\phi^{\rm clock}_{f}\leftarrow\phi^{\rm clock}_{i},\tau)d\tau, \tag{41}\]

where \(A^{\rm s}(\psi^{\rm s}_{f}\leftarrow\psi^{\rm s}_{i},t|\tau)\) is the amplitude of the system reaching its final state while spending a duration \(\tau\) in \(h\), and \(A^{\rm clock}(\phi^{\rm clock}_{f}\leftarrow\phi^{\rm clock}_{i},\tau)\) is that of the clock reaching \(|\phi^{\rm clock}_{f}\rangle\) after \(\tau\) seconds. For a clock measuring the duration spent in the part of the Hilbert space orthogonal to \(h\), \(\tau\) should be replaced by \(t-\tau\), as has been done in Eq.(40).

### Timing the transition in the Markovian case

Now the system, including the atom and a photon (if any), is described by the Hamiltonian (1), and \(\hat{\pi}_{e}\equiv|e\rangle\langle e|\) projects onto the atom's excited state (no photons). Thus introducing \(\lambda\hat{\pi}_{e}\) into the Hamiltonian simply adds \(\lambda\) to the energy of the excited state, \(E_{e}\to E_{e}+\lambda\). We may evaluate the amplitudes for the modified Hamiltonian \(\hat{H}+\lambda\hat{\pi}_{e}\) and then perform the Fourier transform. We have

\[A^{\rm a+ph}(e\gets e,t|\tau)=(2\pi)^{-1}\int_{-\infty}^{\infty}d\lambda\exp[i\lambda\tau-i(E_{e}+\lambda)t-\Gamma t/2]= \tag{42}\]
\[\exp(-iE_{e}t-\Gamma t/2)\delta(\tau-t).\]

Similarly, we find

\[A^{\rm a+ph}(E_{r}\gets e,t|\tau)=-i\Omega\exp(-iE_{r}t)\int_{0}^{t}dt^{\prime}\exp[-i(E_{e}-E_{r})t^{\prime}-\Gamma t^{\prime}/2]\delta(\tau-t^{\prime}) \tag{43}\]
\[=\begin{cases}-i\Omega\exp[-iE_{r}(t-\tau)]\exp[-iE_{e}\tau-\Gamma\tau/2]&\text{for}\quad 0\leq\tau\leq t\\ 0&\text{otherwise}\end{cases},\]

which is the second of Eqs.(16). The remaining amplitudes are

\[A^{\rm a+ph}(e\gets E_{r},t|\tau)=0 \tag{44}\]

and

\[A^{\rm a+ph}(E^{\prime}_{r}\gets E_{r},t|\tau)=\exp(-iE_{r}t)\delta(E_{r}-E^{\prime}_{r})\delta(\tau).
\tag{45}\]

### Timing the first-order transition in a non-Markovian case

In the general non-Markovian case, to calculate the required amplitude we expand, to the first order in \(\hat{V}\), a transition amplitude

\[\langle E_{r}|\exp[-i(\hat{H}+\lambda\hat{\pi}_{e})t]|e\rangle\approx-i\sum_{r^{\prime}}\Omega(E_{r^{\prime}})\int_{0}^{t}dt^{\prime}\times \tag{46}\]
\[\langle E_{r}|\exp[-i(\hat{H}_{0}+\lambda\hat{\pi}_{e})(t-t^{\prime})]|E_{r^{\prime}}\rangle\langle e|\exp[-i(\hat{H}_{0}+\lambda\hat{\pi}_{e})t^{\prime}]|e\rangle.\]

The integrand reduces to [recall that adding \(\lambda\hat{\pi}_{e}\) changes \(E_{e}\) into \(E_{e}+\lambda\) in Eq.(1)]

\[\exp[-iE_{r}(t-t^{\prime})]\delta_{r^{\prime}r}\exp[-i(E_{e}+\lambda)t^{\prime}], \tag{47}\]

and performing the Fourier transform with respect to \(\lambda\) yields

\[A^{\rm a+ph}(E_{r}\gets e,t|\tau)=\begin{cases}-i\Omega(E_{r})\exp[-iE_{r}(t-\tau)]\exp[-iE_{e}\tau]&\text{for}\quad 0\leq\tau\leq t\\ 0&\text{otherwise}\end{cases}. \tag{48}\]

Using Eqs.(11), (14) and (48) we find

\[A^{\rm a+ph+clock}_{\rm Bose}(E_{r},n\gets e,0)=const\times\int_{0}^{t}\exp[-(\tau-\tau_{n})^{2}/\Delta t^{2}+i(E_{r}-E_{e})\tau]d\tau. \tag{49}\]

For \(\Delta t\to 0\) the limits of integration can be extended to \(\pm\infty\). Evaluating the Gaussian integral, and taking the absolute square, then yields

\[P(E_{r},n\gets e,0,t)\approx\frac{\pi\Omega^{2}(E_{r})\Delta t^{2}}{[2\pi n]^{1/2}}\exp[-(E_{r}-E_{e})^{2}\Delta t^{2}/2]. \tag{50}\]

Replacing (\(n_{\rm max}=\omega^{2}t^{2}N\)) the sum \(\sum_{n=0}^{n_{\rm max}}n^{-1/2}\) by an integral \(\int_{0}^{n_{\rm max}}n^{-1/2}dn=2\sqrt{n_{\rm max}}=2t/\Delta t\), we obtain the energy distribution of the photons in the presence of a clock,

\[P(E_{r}\gets 0,t)=\sum_{n=0}^{n_{\rm max}}P(E_{r},n\gets e,0,t)\approx\sqrt{2\pi}\Omega^{2}(E_{r})\rho(E_{r})\Delta t\exp[-(E_{r}-E_{e})^{2}\Delta t^{2}/2]\times t. \tag{51}\]

### The "jump time" in Eq.(28)

Let the decay occur into a finite energy range \(\Delta E_{r}=E_{\rm max}-E_{\rm min}\) around \(E_{e}\), and assume that \(\rho(E_{r})\Omega^{2}(E_{r})=const\) inside the range, and vanishes outside it. Using the Hamiltonian (1), for the Zeno time we have

\[\tau_{\rm z}^{2}\equiv\left[\langle e|\hat{H}^{2}|e\rangle-\langle e|\hat{H}|e\rangle^{2}\right]^{-1}=[\rho\Omega^{2}\Delta E_{r}]^{-1}. \tag{52}\]

Recalling that \(\Gamma_{\rm Fermi}=2\pi\rho(E_{e})|\langle E_{r}=E_{e}|\hat{H}|e\rangle|^{2}=2\pi\rho\Omega^{2}\) shows that Eq.(28) reduces to

\[\tau_{\rm J}\approx 2\pi/\Delta E_{r}\sim 1/\Delta E_{r}. \tag{53}\]
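The reduction from Eq. (28) to Eq. (53) is a one-line computation; for completeness, a trivial symbolic check (our own sketch):

```python
import sympy as sp

# Gamma_Fermi * tau_z^2 for a flat band, cf. Eqs. (52)-(53).
rho, Omega, dE = sp.symbols('rho Omega DeltaE', positive=True)
Gamma_Fermi = 2 * sp.pi * rho * Omega**2      # Fermi's golden rule
tau_z_sq = 1 / (rho * Omega**2 * dE)          # Eq. (52)
print(sp.simplify(Gamma_Fermi * tau_z_sq))    # -> 2*pi/DeltaE, Eq. (53)
```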
2307.16546
An Overconstrained Vertical Darboux Mechanism
In this article, we will construct an overconstrained closed-loop linkage consisting of four revolute and one cylindrical joint. It is obtained by factorization of a prescribed vertical Darboux motion. We will investigate the kinematic behaviour of the obtained mechanism, which turns out to have multiple operation modes. Under certain conditions on the design parameters, two of the operation modes will correspond to vertical Darboux motions. It turns out, that for these design parameters, there also exists a second assembly mode.
Johannes Siegele, Martin Pfurner
2023-07-31T10:22:35Z
http://arxiv.org/abs/2307.16546v1
# An Overconstrained Vertical Darboux Mechanism

###### Abstract

In this article, we will construct an overconstrained closed-loop linkage consisting of four revolute and one cylindrical joint. It is obtained by factorization of a prescribed vertical Darboux motion. We will investigate the kinematic behaviour of the obtained mechanism, which turns out to have multiple operation modes. Under certain conditions on the design parameters, two of the operation modes will correspond to vertical Darboux motions. It turns out that, for these design parameters, there also exists a second assembly mode.

Keywords: vertical Darboux motion, closed-loop linkage, motion factorization, overconstrained mechanism

## 1 Introduction

In 1881, Darboux determined all possible motions with the property that every point has a planar trajectory. Vertical Darboux motions are a sub-type of these motions and are obtained by the composition of a rotation with a suitably parametrized oscillating translation in the direction of the rotation axis. All generic point trajectories for both the non-vertical and the vertical Darboux motion are ellipses. The vertical Darboux motion is, in addition, a cylindrical and line symmetric motion. For more detail we refer to [1, Chapter 9]. Vertical Darboux motions are of particular interest when using Study parameters for the representation of spatial displacements. Any line in the ambient space of the Study quadric represents a vertical Darboux motion [8]. Lines on the Study quadric correspond to rotations and translations. Therefore the vertical Darboux motion is a natural generalization of rotations and translations. By representing the motion by a curve on the Study quadric, its instantaneous behaviour corresponds to the instantaneous motion given by the curve tangent, which is a line in the ambient space. Thus, vertical Darboux motions may also be used for the description of the instantaneous behaviour of a motion. In this article, we construct an overconstrained 4RC-linkage performing an arbitrary vertical Darboux motion. The construction is based on the factorization theory for dual quaternion polynomials [2] and on the construction of a non-vertical Darboux linkage in [6]. Lines on the Study quadric can be parametrized by linear dual quaternion polynomials, thus they represent rotations or translations. Both motions can easily be realized by revolute or prismatic joints, respectively. Thus, by decomposing a dual quaternion polynomial into the product of linear factors, we are able to construct open kinematic chains. For the vertical Darboux motion, we obtain an open chain which can perform a cylindrical motion. Therefore, it can be closed using a cylindrical joint to obtain a single-loop mechanism. Overconstrained mechanisms performing a vertical Darboux motion are constructed in [3, 4]. Our approach, however, yields a new type of overconstrained mechanism. We will analyze the operation and assembly modes of the obtained linkage. In general, such linkages will have two operation modes: one of them is the desired vertical Darboux motion, the other is a cylindrical motion of degree 5. Further, we will give a condition which ensures the existence of a second assembly mode, as well as the decomposition of the second operation mode into another vertical Darboux motion and two rotations.

## 2 Preliminaries

In this manuscript, we will construct a closed-loop linkage able to perform a vertical Darboux motion.
Our construction is based on the factorization theory of dual quaternion polynomials, therefore we will give a short introduction to dual quaternions and motion polynomials in this section. For further detail we refer to [5].

### Dual Quaternions

A dual quaternion \(h\in\mathbb{DH}\) is given by

\[h=p_{0}+p_{1}\mathbf{i}+p_{2}\mathbf{j}+p_{3}\mathbf{k}+d_{0}\varepsilon+d_{1}\varepsilon\mathbf{i}+d_{2}\varepsilon\mathbf{j}+d_{3}\varepsilon\mathbf{k}\]

for real numbers \(p_{0},\ldots,p_{3}\), \(d_{0},\ldots,d_{3}\in\mathbb{R}\). The non-commutative multiplication of dual quaternions abides by the rules

\[\mathbf{i}^{2}=\mathbf{j}^{2}=\mathbf{k}^{2}=\mathbf{ij}\mathbf{k}=-1,\quad\varepsilon^{2}=0,\quad\varepsilon\mathbf{i}=\mathbf{i}\varepsilon,\quad\varepsilon\mathbf{j}=\mathbf{j}\varepsilon,\quad\varepsilon\mathbf{k}=\mathbf{k}\varepsilon.\]

The quaternions \(p=p_{0}+p_{1}\mathbf{i}+p_{2}\mathbf{j}+p_{3}\mathbf{k}\) and \(d=d_{0}+d_{1}\mathbf{i}+d_{2}\mathbf{j}+d_{3}\mathbf{k}\) are called the primal and dual part of \(h\). The dual quaternion conjugate is given by

\[h^{*}=p_{0}-p_{1}\mathbf{i}-p_{2}\mathbf{j}-p_{3}\mathbf{k}+d_{0}\varepsilon-d_{1}\varepsilon\mathbf{i}-d_{2}\varepsilon\mathbf{j}-d_{3}\varepsilon\mathbf{k},\]

and the dual quaternion norm is given by \(\|h\|=hh^{*}\). Dual quaternions can be used to represent rigid body displacements by simply using the Study parameters of a displacement as the coefficients of the dual quaternion. The action of a displacement on a point \([x_{0},x_{1},x_{2},x_{3}]\) in projective three-space can be represented by a dual quaternion product by embedding the point into the dual quaternions via \([x_{0},x_{1},x_{2},x_{3}]\mapsto x_{0}+\varepsilon x\) with \(x=x_{1}\mathbf{i}+x_{2}\mathbf{j}+x_{3}\mathbf{k}\). Acting on this point by a displacement given by \(p+\varepsilon d\) corresponds to computing the product

\[(p-\varepsilon d)(x_{0}+\varepsilon x)(p^{*}+\varepsilon d^{*}).\]

The coefficients of a dual quaternion \(h\) fulfill the Study condition if and only if \(\|h\|\) is real. Note that all scalar multiples of a dual quaternion yield the same displacement.

### Motion Polynomials

Rational motions can be represented by polynomials with dual quaternion coefficients \(Q=\sum_{\ell=0}^{n}q_{\ell}t^{\ell}\) with \(q_{0}\), \(q_{1},\ldots,q_{n}\in\mathbb{DH}\) such that \(\|Q\|=QQ^{*}\in\mathbb{R}[t]\) is a real polynomial. Here the conjugate polynomial \(Q^{*}\) is obtained by conjugating all of its coefficients. Such polynomials are called motion polynomials. The simplest examples of motion polynomials are linear, monic polynomials \(t-h\), where the scalar coefficient \(d_{0}\) of the dual part has to vanish for the Study condition to be fulfilled. Such a linear polynomial either represents a rotation, if \(\|t-h\|\) has complex roots, or a translation otherwise. In the case of a rotation, its axis has Plücker coordinates \([p_{1},p_{2},p_{3},-d_{1},-d_{2},-d_{3}]\); otherwise, the direction of translation is given by \([d_{1},d_{2},d_{3}]\in\mathbb{R}^{3}\). Both of these motions can be realized by revolute or prismatic joints, respectively. Decomposing a given motion polynomial into the product of linear factors therefore corresponds to decomposing the represented motion into a concatenation of rotations and translations, which in turn can be realized by joints. This gives rise to a kinematic chain which is able to perform the given motion.
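The algebra above is easy to experiment with numerically. The following is a minimal sketch (our own helper code, not from the paper), representing a quaternion as an array \([w,x,y,z]\) and a dual quaternion as a pair \((p,d)\), and checking the sandwich product on a simple rotation about the \(z\)-axis.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def dqmul(h1, h2):
    """(p1 + eps d1)(p2 + eps d2), using eps^2 = 0."""
    p1, d1 = h1
    p2, d2 = h2
    return qmul(p1, p2), qmul(p1, d2) + qmul(d1, p2)

def act(h, pt):
    """Point action (p - eps d)(x0 + eps x)(p* + eps d*), with x0 = 1."""
    p, d = h
    x = (np.array([1.0, 0, 0, 0]), np.array([0.0, *pt]))
    rp, rd = dqmul(dqmul((p, -d), x), (conj(p), conj(d)))
    return rd[1:] / rp[0]            # dehomogenized Cartesian point

theta = 0.7                          # rotation about the z-axis
rot = (np.array([np.cos(theta/2), 0, 0, np.sin(theta/2)]), np.zeros(4))
print(act(rot, [1.0, 0.0, 0.0]))     # ~ [cos(theta), sin(theta), 0]
```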
Such a chain can be constrained by another chain, generated by a different factorization of the same motion polynomial, which yields a closed mechanism still able to perform the given motion.

## 3 Vertical Darboux Motion

A vertical Darboux motion is the composition of a rotation and an oscillating translation along the same axis. Vertical Darboux motions around the third coordinate axis can be parameterized by the dual quaternion polynomial [5]

\[M=(t^{2}+1)(t-\mathbf{k})+\varepsilon(-b\mathbf{k}t+c\mathbf{k})(t-\mathbf{k}).\]

It does not admit a factorization into three linear factors, but multiplying \(M\) with \((t^{2}+1)\) allows us to find factorizations, each consisting of five linear polynomials [7]. Every factorization corresponds to an open kinematic chain with at most five revolute joints, which can perform the vertical Darboux motion given by \(M\). Combining several of these chains would result in a rather complicated mechanism. But since the vertical Darboux motion is a cylindrical motion, we can close the obtained open chain with a C-joint. This results in a closed-loop mechanism with at most six joints, where one of the joints has two degrees of freedom. To obtain an overconstrained mechanism, we can try to find factorizations for which two neighboring factors are equal. This yields an open 4R chain which can be closed with a C-joint, resulting in an overconstrained mechanism.

### Factorization of the vertical Darboux motion

As in [6, Section 3.3], we will try to find \(P_{4}\in\mathbb{DH}[t]\) such that \(P_{4}^{2}\) is a right factor of \((t^{2}+1)M\). Solving a system of equations for the coefficients of \(P_{4}\) shows that it has to be of the shape

\[P_{4}=t-\mathbf{k}-\varepsilon(q_{1}\mathbf{i}+q_{2}\mathbf{j})\]

for arbitrary \(q_{1}\), \(q_{2}\in\mathbb{R}\). After dividing off these two right factors, we obtain

\[Q=(t^{2}+1)(t+\mathbf{k})+\varepsilon(bt-c+2t(q_{1}t+q_{2})\mathbf{i}-2t(q_{2}t+q_{1})\mathbf{j}-t(bt-c)\mathbf{k}),\]

which represents a Darboux motion. As long as \(q_{1}\) and \(q_{2}\) do not vanish simultaneously, it is non-vertical, and thus admits infinitely many factorizations into three linear factors. Using factorization techniques, it is straightforward to compute

\[P_{3}=t+\mathbf{k}+\varepsilon(y_{1}\mathbf{i}+y_{2}\mathbf{j}),\]

which is a right factor of \(Q\). Dividing off this factor will leave us with a quadratic translation, which admits a factorization if and only if it is a circular translation [5]. To ensure factorizability, we need to choose

\[y_{1} =\frac{b^{2}q_{1}-2bcq_{2}-c^{2}q_{1}+4q_{1}^{3}+4q_{1}q_{2}^{2}}{4(q_{1}^{2}+q_{2}^{2})},\]
\[y_{2} =\frac{b^{2}q_{2}+2bcq_{1}-c^{2}q_{2}+4q_{1}^{2}q_{2}+4q_{2}^{3}}{4(q_{1}^{2}+q_{2}^{2})}.\]

The resulting motion is then a translation along a circle with axis in the direction \([4(bq_{1}-cq_{2}),4(bq_{2}+cq_{1}),-b^{2}-c^{2}+4q_{1}^{2}+4q_{2}^{2}]\). To find a factorization for this translation, we can simply take any line parallel to the circle axis and use its normalized Plücker coordinates as the coefficients of the right factor, i.e. we can define

\[P_{2}=t+\frac{4(bq_{1}-cq_{2})\mathbf{i}+4(bq_{2}+cq_{1})\mathbf{j}-(b^{2}+c^{2}-4q_{1}^{2}-4q_{2}^{2})\mathbf{k}}{b^{2}+c^{2}+4q_{1}^{2}+4q_{2}^{2}}-\varepsilon(z_{1}\mathbf{i}+z_{2}\mathbf{j}+z_{3}\mathbf{k})\]

for \(z_{1}\), \(z_{2}\), \(z_{3}\in\mathbb{R}\) such that the Study (Plücker) condition is fulfilled.
After dividing off \(P_{2}\) we are left with the last factor, which is given by

\[P_{1}=t -\frac{4(bq_{1}-cq_{2})\mathbf{i}+4(bq_{2}+cq_{1})\mathbf{j}-(b^{2}+c^{2}-4q_{1}^{2}-4q_{2}^{2})\mathbf{k}}{b^{2}+c^{2}+4q_{1}^{2}+4q_{2}^{2}}\]
\[+\varepsilon\frac{2q_{2}\left(bc+2q_{1}q_{2}+2z_{1}q_{2}\right)-q_{1}\left(b^{2}-c^{2}-4q_{1}^{2}-4q_{1}z_{1}\right)}{4q_{1}^{2}+4q_{2}^{2}}\mathbf{i}\]
\[-\varepsilon\frac{2q_{1}\left(bc-2q_{1}q_{2}-2q_{1}z_{2}\right)+q_{2}\left(b^{2}-c^{2}-4q_{2}^{2}-4q_{2}z_{2}\right)}{4q_{1}^{2}+4q_{2}^{2}}\mathbf{j}\]
\[+\varepsilon\frac{4(bq_{1}-cq_{2})z_{1}+4(bq_{2}+cq_{1})z_{2}-b(b^{2}+c^{2}-4q_{1}^{2}-4q_{2}^{2})}{b^{2}+c^{2}-4q_{1}^{2}-4q_{2}^{2}}\mathbf{k},\]

assuming \(b^{2}+c^{2}-4q_{1}^{2}-4q_{2}^{2}\neq 0\). Note that \(z_{3}\) is chosen such that the Study condition for \(P_{2}\) is fulfilled. If \(b^{2}+c^{2}-4q_{1}^{2}-4q_{2}^{2}=0\), \(z_{3}\) can be chosen arbitrarily, with the restriction that \(z_{1}\) and \(z_{2}\) need to fulfill \([z_{1},z_{2}]=z[bq_{2}+cq_{1},-bq_{1}+cq_{2}]\) for arbitrary \(z\in\mathbb{R}\). With this condition, the last factor is given by

\[P_{1}=t -2\frac{(bq_{1}-cq_{2})\mathbf{i}+(bq_{2}+cq_{1})\mathbf{j}}{b^{2}+c^{2}}\]
\[-\varepsilon\frac{(b^{2}+c^{2})z-2c}{b^{2}+c^{2}}((bq_{2}+cq_{1})\mathbf{i}-(bq_{1}-cq_{2})\mathbf{j})\]
\[+\varepsilon(z_{3}-b)\mathbf{k}.\]

This factorization,

\[(t^{2}+1)M=P_{1}P_{2}P_{3}P_{4}^{2},\]

now gives rise to a 4R chain, which we can close with a cylindrical joint to obtain a closed-loop linkage, see Fig. 1. It admits, by construction, the initial vertical Darboux motion given by \(M\) as one operation mode.

## 4 Kinematic Analysis of the Vertical Darboux Mechanism

To analyze other possible operation modes, let us investigate the kinematic chain obtained by these factors, where each joint can move independently of the others, i.e. \(C=P_{1}(v_{1})P_{2}(v_{2})P_{3}(v_{3})P_{4}(v_{4})(\tau-\mathbf{k})(1-\varepsilon s\mathbf{k})\). Here the last two factors simply describe a cylindrical joint with the third coordinate axis as joint axis. A kinematic chain can be closed if the third coordinate axes of the base and the moving frame coincide. This yields two closure conditions, the first one being that the axes point in the same direction, the second one that they point in opposite directions.

Figure 1: An example of a 4RC vertical Darboux mechanism (C-joint is blue).

### First Assembly Mode

The first closure condition means that the coefficients of the dual quaternion units \(\mathbf{i}\), \(\mathbf{j}\), \(\mathbf{k}\), \(\varepsilon\), \(\varepsilon\mathbf{i}\), \(\varepsilon\mathbf{j}\) and \(\varepsilon\mathbf{k}\) of \(C\) vanish, i.e. \(C\) describes the identity transformation. This gives us seven polynomial equations, where the first and the second have a common factor \(v_{1}-v_{2}\), while the other factors do not have common real solutions. After substituting this into our set of equations, the third equation has the factor \(\tau v_{3}-\tau v_{4}+v_{3}v_{4}+1\), while the last equation has the factor \(sv_{1}^{2}+bv_{1}-c+s\). They admit the real solution \(s=-(bv_{2}-c)/(v_{1}^{2}+1)\), \(\tau=-(v_{3}v_{4}+1)/(v_{3}-v_{4})\). After substituting these solutions into the remaining equations, we are left with two polynomial equations which are quadratic in each of the variables \(v_{1}\), \(v_{3}\) and \(v_{4}\).
Computing a resultant to eliminate \(v_{4}\) and dividing off unnecessary factors yields an equation with two factors, one of which is \(v_{1}-v_{3}\), the other being

\[F= 8bcq_{1}^{2}v_{1}^{3}v_{3}+8bcq_{2}^{2}v_{1}^{3}v_{3}+b^{4}v_{1}^{3}-b^{4}v_{1}^{2}v_{3}+2b^{2}c^{2}v_{1}^{3}-2b^{2}c^{2}v_{1}^{2}v_{3}+4b^{2}q_{1}^{2}v_{1}^{3}\]
\[+12b^{2}q_{1}^{2}v_{1}^{2}v_{3}+4b^{2}q_{2}^{2}v_{1}^{3}+12b^{2}q_{2}^{2}v_{1}^{2}v_{3}+c^{4}v_{1}^{3}-c^{4}v_{1}^{2}v_{3}-4c^{2}q_{1}^{2}v_{1}^{3}\]
\[-12c^{2}q_{1}^{2}v_{1}^{2}v_{3}-4c^{2}q_{2}^{2}v_{1}^{3}-12c^{2}q_{2}^{2}v_{1}^{2}v_{3}-24bcq_{1}^{2}v_{1}^{2}-24bcq_{1}^{2}v_{1}v_{3}\]
\[-24bcq_{2}^{2}v_{1}^{2}-24bcq_{2}^{2}v_{1}v_{3}+v_{1}b^{4}-b^{4}v_{3}+2v_{1}c^{2}b^{2}-2b^{2}c^{2}v_{3}-12b^{2}q_{1}^{2}v_{1}\]
\[-4b^{2}q_{1}^{2}v_{3}-12b^{2}q_{2}^{2}v_{1}-4b^{2}q_{2}^{2}v_{3}+v_{1}c^{4}-c^{4}v_{3}+12c^{2}q_{1}^{2}v_{1}+4c^{2}q_{1}^{2}v_{3}\]
\[+12c^{2}q_{2}^{2}v_{1}+4c^{2}q_{2}^{2}v_{3}+8bcq_{1}^{2}+8bcq_{2}^{2}. \tag{1}\]

This, in general, gives rise to two sets of solutions, the first one being \(v_{3}=v_{1}\), which in turn also yields \(v_{4}=(v_{1}^{2}-1)/2v_{1}\). This solution corresponds to the initial vertical Darboux motion. The second solution is obtained by solving \(F\) for \(v_{3}\), as it is linear in this variable, and resubstituting the obtained solution into the system of equations. This yields two equations with a common factor linear in \(v_{4}\), and each of them has one other factor, respectively, which do not have a common solution provided \(b^{2}+c^{2}-4q_{1}^{2}-4q_{2}^{2}\neq 0\) (this case will be investigated below). This common factor yields the solutions

\[v_{3} =\frac{v_{1}(v_{1}^{2}+1)(b^{2}+c^{2})^{2}+\left((b^{2}-c^{2})(v_{1}^{3}-3v_{1})-6bcv_{1}^{2}+2bc\right)(4q_{1}^{2}+4q_{2}^{2})}{(v_{1}^{2}+1)(b^{2}+c^{2})^{2}-((b^{2}-c^{2})(3v_{1}^{2}-1)+2bcv_{1}^{3}-6bcv_{1})(4q_{1}^{2}+4q_{2}^{2})}\]
\[v_{4} =-\frac{(bv_{1}+cv_{1}+b-c)(bv_{1}-cv_{1}-b-c)}{2(cv_{1}+b)(bv_{1}-c)}.\]

This solution corresponds to a motion with trajectories of degree six. It is the composition of the vertical Darboux motion given by

\[(t^{2}+1)(bt-c-(ct+b)\mathbf{k})+\varepsilon(bt-c)(ct+b+(bt-c)\mathbf{k})\]

and a quadratically parametrized rotation around the third coordinate axis

\[-(bt^{2}-2ct-b)(b^{2}+c^{2}+4q_{1}^{2}+4q_{2}^{2})+(ct^{2}+2bt-c)(b^{2}+c^{2}-4q_{1}^{2}-4q_{2}^{2})\mathbf{k}. \tag{2}\]

Figure 2 (left) shows a trajectory of the vertical Darboux motion (first solution) and of this other motion (second solution) for the values \(q_{1}=1\), \(q_{2}=0\), \(z_{1}=0\), \(z_{2}=0\). Let us now investigate the case where \(b^{2}+c^{2}-4q_{1}^{2}-4q_{2}^{2}=0\). With this condition, the factor \(F\) in Eq. (1) of the resultant simplifies to

\[F=(bv_{1}^{2}-2cv_{1}-b)(cv_{1}v_{3}+bv_{1}+bv_{3}-c).\]

The second factor yields the same solutions as in the case above. The first factor, however, gives rise to two additional sets of solutions

\[v_{1}=\frac{c\pm\sqrt{b^{2}+c^{2}}}{b},\qquad v_{4}=\frac{c}{b}.\]

Both of these solutions correspond to rotations around the third coordinate axis given by

\[2(c\sqrt{b^{2}+c^{2}}\pm(b^{2}+c^{2}))(ct+b-(bt-c)\mathbf{k})-\varepsilon b^{2}\sqrt{b^{2}+c^{2}}(bt-c+(ct+b)\mathbf{k}).\]

Further, the polynomial in Eq. (2) simplifies to a real polynomial, which implies that the second solution in this special case also corresponds to a vertical Darboux motion. The trajectories of all of these motions are depicted in Figure 2 (right).

Figure 2: Trajectories of the coupler motion for \(b=1\), \(c=2\) and \(b=c=\sqrt{2}\).
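As a numerical cross-check of the vertical Darboux operation mode, one can sample \(M(t)\) from Section 3 directly and verify that a generic point trajectory is planar, as it must be for a Darboux motion. The following sketch is our own code (the values of \(b\), \(c\) and the test point are illustrative); if the parametrization is transcribed faithfully, the third singular value of the centred point cloud should vanish to machine precision.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

conj = lambda q: q * np.array([1.0, -1.0, -1.0, -1.0])
b, c = 1.0, 2.0                           # illustrative design parameters
one = np.array([1.0, 0, 0, 0])
k = np.array([0.0, 0, 0, 1])

def M(t):                                  # primal and dual part of M(t)
    p = (t**2 + 1) * (t * one - k)
    d = qmul(-b * t * k + c * k, t * one - k)
    return p, d

pt = np.array([0.0, 1.2, -0.4, 0.7])       # a generic point (x0 = 1)
traj = []
for t in np.linspace(-3, 3, 200):
    p, d = M(t)
    prim = qmul(p, conj(p))                # x0 * p p*
    dual = qmul(qmul(p, pt), conj(p)) + qmul(p, conj(d)) - qmul(d, conj(p))
    traj.append(dual[1:] / prim[0])        # dehomogenized trajectory point
traj = np.array(traj)
_, s, _ = np.linalg.svd(traj - traj.mean(axis=0))
print(s)    # third singular value ~ 0  =>  planar (elliptic) trajectory
```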
### Second Assembly Mode

For the second closure condition, the dual quaternion coefficients of \(1\), \(\mathbf{k}\), \(\varepsilon\), \(\varepsilon\mathbf{i}\), \(\varepsilon\mathbf{j}\), \(\varepsilon\mathbf{k}\) need to vanish, while the coefficients of \(\mathbf{i}\) and \(\mathbf{j}\) must fulfill a linear equation. This corresponds to assembling the open kinematic chain such that the third coordinate axes of the base and moving frame coincide, but point in opposite directions. For the linear condition on the coefficients \(c_{i}\) and \(c_{j}\) of \(\mathbf{i}\) and \(\mathbf{j}\) we will use the equation \((bq_{1}-cq_{2})c_{i}+(bq_{2}+cq_{1})c_{j}=0\), which will be the second equation in our closure condition. Solving the first equation for \(v_{2}\) and substituting the result into the third equation yields an equation which simplifies, after dividing off unnecessary factors, to \(b^{2}+c^{2}-4q_{1}^{2}-4q_{2}^{2}=0\). Thus, this second assembly mode only exists if this condition is fulfilled. Note that this condition is the same as the one in the section above which ensures the existence of four operation modes in the first assembly mode. In this case, the first, second and third equations only have one common solution for \(v_{2}\) and \(\tau\), namely

\[v_{2}=-1/v_{1},\qquad\tau=(v_{3}-v_{4})/(v_{3}v_{4}+1).\]

After resubstituting these solutions, equations five and six have one common factor which is linear in \(s\), while their other factors do not admit common solutions. The solution for \(s\) is

\[s=\frac{z(b^{2}+c^{2})(v_{1}^{2}+1)+2bv_{1}-2c}{2(v_{1}^{2}+1)}.\]

After resubstituting this solution, the last equation, after dividing off unnecessary factors, reads

\[(bv_{4}-c)(v_{3}^{2}-2v_{3}v_{4}-1)=0. \tag{3}\]

The first factor in Eq. (3) yields the two solutions

\[v_{1}=\frac{-c\pm\sqrt{-3b^{2}+8bz_{3}+c^{2}-4z_{3}^{2}}}{b-2z_{3}},\qquad v_{4}=\frac{c}{b}.\]

For the second factor in Eq. (3) we get \(v_{4}=(v_{3}^{2}-1)/2v_{3}\). Resubstituting this solution yields the equation

\[-v_{1}^{2}v_{3}^{2}z_{3}+cv_{1}^{2}v_{3}+cv_{1}v_{3}^{2}+bv_{1}^{2}+bv_{3}^{2}-v_{1}^{2}z_{3}-v_{3}^{2}z_{3}+cv_{1}+cv_{3}+2b-z_{3}=0.\]

This equation is quadratic in \(v_{1}\) (and \(v_{3}\)), thus solving it for \(v_{1}\) will yield two solutions. In contrast to the first assembly mode, all solutions depend on \(z\) and \(z_{3}\), but not on \(q_{1}\) and \(q_{2}\). Further, they contain square roots, thus the solutions can be complex. On the left hand side of Fig. 3 the trajectories of a point under these motions are shown for \(b=c=\sqrt{2}\), \(z=z_{3}=0\). For these values, only the second operation mode admits real trajectories. On the right hand side of Fig. 3, the trajectories for \(b=1\), \(c=2\), \(z=z_{3}=0\) are shown. Here, also the first two solutions are real, and the corresponding motions are rotations around the third coordinate axis.

## 5 Conclusion

We have generated an overconstrained 4RC closed-loop linkage, which is able to perform a prescribed vertical Darboux motion. Its kinematic analysis revealed the existence of, in general, two operation modes, one of them corresponding to the initial vertical Darboux motion. We gave a condition on the design parameters of the mechanism for which the second operation mode decomposes into two rotations and an additional vertical Darboux motion. The same condition also ensures the existence of a second assembly mode, which in turn has up to three real operation modes.
## Acknowledgement

Johannes Siegele was supported by the Austrian Science Fund (FWF): P 33397 (Rotor Polynomials: Algebra and Geometry of Conformal Motions).
2309.11785
Cheeger type inequalities for high dimensional simplicial complexes
Cheeger inequality is a classical result emerging from the isoperimetric problem in the field of geometry. In the graph theory, a discrete version of Cheeger inequality was also studied deeply and the notion was further extended for higher dimensional simplicial complexes in various directions. In this paper, we consider an analogue of discrete Cheeger inequality for high dimensional simplicial complexes from a combinatorial viewpoint.
Satoshi Kamei
2023-09-21T05:17:58Z
http://arxiv.org/abs/2309.11785v1
# Cheeger type inequalities for high dimensional simplicial complexes

###### Abstract.

Cheeger inequality is a classical result emerging from the isoperimetric problem in the field of geometry. In graph theory, a discrete version of the Cheeger inequality was also studied deeply, and the notion was further extended to higher dimensional simplicial complexes in various directions. In this paper, we consider an analogue of the discrete Cheeger inequality for high dimensional simplicial complexes from a combinatorial viewpoint.

Key words and phrases: simplicial complex; Cheeger inequality

2020 Mathematics Subject Classification: 52A38, 52B60

## 1. Introduction

The isoperimetric problem has long been studied in geometry, and among the results that have been generated within it, the Cheeger inequality ([3]) is one of the most important outcomes. The same type of inequality on graphs, which relates the expansion properties of graphs to the spectra of their Laplacians, was also deeply explored, for example, in [1], [2], [4], [14]. In this paper, we consider a higher-dimensional analogue of the Cheeger inequality on graphs. There have been several studies on higher dimensional versions. One approach considers _coboundary expansion_, originating in [7], [10], and researched in, for example, [5], [13]. For a combinatorial approach, prominent examples include [12] (see also [11]). In [12], a generalization of the Cheeger constant was defined as

\[h(X)=\min_{V=\sqcup_{i=0}^{d}A_{i}}\frac{|V|\cdot|F(A_{0},A_{1},\cdots,A_{d})|}{|A_{0}|\cdot|A_{1}|\cdot\cdots\cdot|A_{d}|}\]

for a finite \(d\)-dimensional simplicial complex \(X\) with the set of vertices \(V\). Here, the minimum is taken over all partitions of \(V\) into nonempty sets \(A_{0},\cdots,A_{d}\), and \(F(A_{0},\cdots,A_{d})\) denotes the set of \(d\)-dimensional faces with one vertex in each \(A_{i}\). The value of \(h(X)\) is estimated by the spectral gap of the Laplace operator when the complex has a complete skeleton in [12], and this result was extended to the case where complexes do not have complete skeletons in [6]. In this work, we consider a modification of the \(h(X)\) defined above. The reasons for adopting a combinatorial approach such as that of [12] are as follows. The first is that, as in [12], the size of a region should be measured by the number of vertices, since vertices are often used to represent data in computer science applications. The second is that, also as in [12], the size of the partition of the vertices should be measured by the number of \(n\)-dimensional faces. For example, in [5], discrete Hodge theory was used for ranking among data, where \(2\)-faces play a significant role. Therefore, it is important to consider the relationship between vertices and \(n\)-dimensional faces for future applications. Furthermore, in this work we divide the vertices into two subsets even when the dimension of the simplicial complex is greater than or equal to \(2\), because of the ease of use in applications. With these in mind, we define a Cheeger type constant for high dimensional simplicial complexes and estimate its value. Let \(X\) be a simplicial complex. If all inclusion-maximal faces of \(X\) have the same dimension, then \(X\) is called _pure_.
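The constant \(h(X)\) above can be computed by brute force for small complexes. Here is a small sketch (our own illustration, not from the paper) for the \(2\)-dimensional case \(d=2\), evaluated on the complete \(2\)-skeleton of the simplex on five vertices:

```python
from itertools import combinations

# Brute-force h(X) for a 2-dimensional complex: minimize
# |V| * |F(A0, A1, A2)| / (|A0| * |A1| * |A2|) over partitions of V.
def h_of_X(n_vertices, faces):
    elems = list(range(n_vertices))
    best = float("inf")
    for r0 in range(1, n_vertices - 1):
        for A0 in combinations(elems, r0):
            rest = [v for v in elems if v not in A0]
            for r1 in range(1, len(rest)):
                for A1 in combinations(rest, r1):
                    A2 = tuple(v for v in rest if v not in A1)
                    S0, S1, S2 = set(A0), set(A1), set(A2)
                    F = sum(1 for f in faces
                            if set(f) & S0 and set(f) & S1 and set(f) & S2)
                    best = min(best,
                               n_vertices * F / (len(S0) * len(S1) * len(S2)))
    return best

faces = list(combinations(range(5), 3))   # complete 2-skeleton on 5 vertices
print(h_of_X(5, faces))
```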
Definition 1.1.: For a finite connected pure \(n\)-dimensional simplicial complex \(X\), we define

\[H(X)=\min_{0<|A|<|V|}\frac{|V|\cdot|F(A,V\setminus A)|}{|A|\cdot|V\setminus A|},\]

where \(V\) is the set of all vertices of \(X\), \(A\) is a nonempty proper subset of \(V\), and \(F(A,V\setminus A)\) is the set of \(n\)-faces which contain vertices of both \(A\) and \(V\setminus A\). In the following, \(H(X)\) may occasionally be abbreviated as \(H\). For estimating the value of \(H(X)\), we construct a graph from \(X\) as follows. We regard each \((n-1)\)-face of \(X\) as a vertex and connect two such vertices with an edge if the corresponding \((n-1)\)-faces are contained within a common \(n\)-face. We call this graph the _embedded graph_ of \(X\), and the set of the vertices of the embedded graph is denoted as \(W\). \(\lambda\) denotes the second largest eigenvalue of the adjacency matrix of the embedded graph. We consider only the case where the number of \(n\)-faces containing an \((n-1)\)-face is a fixed constant, denoted by \(D\). The minimal number of \((n-1)\)-faces containing a vertex is denoted by \(\delta_{min}\). For the constant \(H\), we prove the following inequalities. First, we consider the case where the dimension of a simplicial complex is \(2\).

**Theorem 1.2**.: _Let \(X\) be a finite connected pure \(2\)-dimensional simplicial complex and let \(V\), \(W\), \(\delta_{min}\), \(D\), \(\lambda\) be as defined above. Then_

\[\frac{|V|\cdot\delta_{min}\cdot(2D-\lambda)}{4\cdot|W|}\leq H(X).\]

Next, we consider the case where the dimension of a simplicial complex is larger than \(2\). Set \(k=\lfloor(n+1)/2\rfloor\).

**Theorem 1.3**.: _Let \(X\) be a finite connected pure \(n\)-dimensional simplicial complex whose dimension \(n\) is larger than \(2\) and let \(V\), \(W\), \(\delta_{min}\), \(k\), \(D\), \(\lambda\) be as defined above. Then_

\[\frac{2\delta_{min}\cdot(nD-\lambda)}{|W|\cdot n\cdot k\cdot(n+1-k)}\leq H(X).\]

To prove Theorems 1.2 and 1.3, we apply the Cheeger inequality on graphs to the embedded graphs of simplicial complexes. Therefore, in Section 2 we recall the inequality on graphs and prepare some notions needed for the higher dimensional cases. We prove Theorem 1.2 in Section 3 and Theorem 1.3 in Section 4. In Section 5, we see some examples for the case where the dimension of the simplicial complex is \(2\).

## 2. Preliminaries

Let us start by recalling the Cheeger inequality on graphs. Consider a finite connected \(d\)-regular graph \(G=(V,E)\), and let \(A\) be a nonempty proper subset of \(V\). We define a quantity called the Cheeger constant as follows.

Definition 2.1.: \(h(G)=\min\limits_{0<|A|<|V|}\dfrac{|V|\cdot|E\left(A,V\setminus A\right)|}{|A|\cdot|V\setminus A|}.\)

This constant satisfies the following inequality, where \(\lambda\) denotes the second largest eigenvalue of the adjacency matrix of \(G\).

**Theorem 2.2** ([1],[2],[4]).: \(d-\lambda\leq h(G)\leq 2\sqrt{2d(d-\lambda)}.\)

This theorem was proved by Dodziuk in [4], and independently by Alon-Milman in [2] and Alon in [1]. The proof of this theorem can be found in, for example, [8]. Note that in [8], a slightly different constant \(\phi(G)\) is defined instead of \(h(G)\), as follows:

\[\phi(G)=\min\limits_{0<|A|<|V|/2}\dfrac{|E\left(A,V\setminus A\right)|}{|A|}.\]

It is then shown that the inequality \(\phi(G)\leq h(G)\leq 2\phi(G)\) holds, and that \(\phi(G)\) satisfies \((d-\lambda)/2\leq\phi(G)\leq\sqrt{2d(d-\lambda)}\). We introduce some notions related to simplicial complexes.
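As a quick numerical illustration of Definition 2.1 and Theorem 2.2 (our own sketch, not part of the paper), one can verify the inequalities by brute force on a small regular graph, say the cycle \(C_{8}\) with \(d=2\):

```python
import numpy as np
from itertools import combinations

n, d = 8, 2
A_mat = np.zeros((n, n))
for v in range(n):                        # adjacency matrix of the cycle C_8
    A_mat[v, (v + 1) % n] = A_mat[(v + 1) % n, v] = 1

lam = sorted(np.linalg.eigvalsh(A_mat))[-2]   # second largest eigenvalue

# h(G) of Definition 2.1 by enumerating all nonempty proper subsets A.
h = min(n * sum(1 for u in A for w in range(n)
                if w not in A and A_mat[u, w])
        / (len(A) * (n - len(A)))
        for r in range(1, n)
        for A in map(set, combinations(range(n), r)))

print(f"d - lambda = {d - lam:.4f} <= h(G) = {h:.4f}"
      f" <= 2*sqrt(2d(d - lambda)) = {2 * np.sqrt(2 * d * (d - lam)):.4f}")
```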
We introduce some notions related to simplicial complexes. An _(abstract) simplicial complex_ \(X\) with vertex set \(V\) is a collection of subsets of \(V\), called _faces_ or _simplices_, which is closed under taking subsets. That is, if \(\sigma\in X\) and \(\tau\subset\sigma\), then \(\tau\in X\). The _dimension_ of a simplex \(\sigma\) is \(|\sigma|-1\). The subcomplex of \(X\) formed by all \(j\)-dimensional simplices and their faces is called the \(j\)_-skeleton_ of \(X\). The \(0\)-dimensional simplices in \(X\) are _vertices_ and the \(1\)-dimensional simplices are _edges_. The _degree_ of a \(j\)-face is the number of \((j+1)\)-faces that contain it. For a \(j\)-face \(\sigma\), we denote the degree of \(\sigma\) by \(\deg(\sigma)\). Furthermore, in this paper the number of \((n-1)\)-faces that contain a vertex \(v\) is denoted by \(\delta(v)\). Note that when the dimension of a simplicial complex is \(2\) and \(v\) is a vertex of the complex, \(\deg(v)\) and \(\delta(v)\) coincide. As defined in section 1, \(\delta_{min}=\min_{v\in V}\delta(v)\), where \(V\) is the set of all vertices of a simplicial complex \(X\). Also, as mentioned in section 1, we only consider the case where the degrees of the \((n-1)\)-faces of a simplicial complex are equal to a constant \(D\). Thus the degree of each vertex of the embedded graph is \(nD\). For each edge of the embedded graph, which connects two \((n-1)\)-faces of \(X\), there is an \(n\)-face which contains both of these \((n-1)\)-faces. In this situation, we say that the edge of the embedded graph is _contained_ in that \(n\)-face. The embedded graph captures the relationship between \((n-1)\)-faces and \(n\)-faces in a simplicial complex, and we will use it to estimate the constant \(H\) of a simplicial complex. ## 3. An inequality for \(2\)-dimensional simplicial complexes In this section, we estimate \(H\) for \(2\)-dimensional simplicial complexes. Let \(G=(V,E)\) be a graph and let \(V^{\prime}\) be a subset of \(V\). The subgraph of \(G\) _induced_ by \(V^{\prime}\) is the subgraph \(G^{\prime}=(V^{\prime},E^{\prime})\) such that \(G^{\prime}\) has vertex set \(V^{\prime}\) and, for all \(u,v\in V^{\prime}\), \(e=uv\in E^{\prime}\) if and only if \(e\in E\). We call \(G^{\prime}\) the _induced subgraph_ of \(V^{\prime}\). **Lemma 3.1**.: _Let \(X\) be a finite connected pure \(2\)-dimensional simplicial complex, and let \(V\) be the set of all vertices of \(X\). Consider the \(1\)-skeleton of \(X\) as a graph, and assume that a nonempty proper subset \(A\) of \(V\) realizes \(H(X)\). Then the subgraph of the \(1\)-skeleton induced by \(A\) is connected._ Proof.: Suppose not. Then \(A\) can be divided into two nonempty sets \(A_{1}\) and \(A_{2}\) such that no edge of \(X\) connects a vertex of \(A_{1}\) with a vertex of \(A_{2}\); consequently no \(2\)-face meets both \(A_{1}\) and \(A_{2}\), and \(|F(A,V\setminus A)|=|F(A_{1},V\setminus A_{1})|+|F(A_{2},V\setminus A_{2})|\). Write \(v=|V|\), \(a_{i}=|A_{i}|\) and \(f_{i}=|F(A_{i},V\setminus A_{i})|\), and note that \(f_{1},f_{2}\geq 1\) since \(X\) is connected and pure. Relabelling \(A_{1}\) and \(A_{2}\) if necessary, we may assume \(f_{2}\,a_{1}(v-a_{1})\geq f_{1}\,a_{2}(v-a_{2})\). Then \[\frac{|F(A,V\setminus A)|}{|A|\cdot|V\setminus A|}-\frac{|F(A_{1},V\setminus A_{1})|}{|A_{1}|\cdot|V\setminus A_{1}|}=\frac{f_{1}+f_{2}}{(a_{1}+a_{2})(v-a_{1}-a_{2})}-\frac{f_{1}}{a_{1}(v-a_{1})}\] \[=\frac{f_{2}va_{1}-f_{2}a_{1}^{2}+2f_{1}a_{1}a_{2}-f_{1}va_{2}+f_{1}a_{2}^{2}}{(a_{1}+a_{2})(v-a_{1}-a_{2})a_{1}(v-a_{1})}\] \[\geq\frac{f_{1}va_{2}-f_{1}a_{2}^{2}+2f_{1}a_{1}a_{2}-f_{1}va_{2}+f_{1}a_{2}^{2}}{(a_{1}+a_{2})(v-a_{1}-a_{2})a_{1}(v-a_{1})}\] \[=\frac{2f_{1}a_{1}a_{2}}{(a_{1}+a_{2})(v-a_{1}-a_{2})a_{1}(v-a_{1})}>0.\] This means that \[\frac{|V|\cdot|F(A,V\setminus A)|}{|A|\cdot|V\setminus A|}>\frac{|V|\cdot|F(A_{1},V\setminus A_{1})|}{|A_{1}|\cdot|V\setminus A_{1}|},\] which contradicts the choice of \(A\). \(\Box\) The next lemma establishes a correspondence between \(2\)-dimensional simplicial complexes and their embedded graphs. **Lemma 3.2**.: _Let \(X\) be a finite connected pure \(2\)-dimensional simplicial complex and let \(V\) be the set of all vertices of \(X\). 
Assume that a nonempty proper subset \(A\) of \(V\) realizes_ \[H=\min_{0<|A|<|V|}\frac{|V|\cdot|F(A,V\setminus A)|}{|A|\cdot|V\setminus A|}.\] _Then there exists a subset \(B\) of \(W\) satisfying \(0<|B|<|W|\) and the following conditions:_ \[\frac{|E(B,W\setminus B)|}{2}\leq|F(A,V\setminus A)|,\quad|A|\leq|B|,\quad|V\setminus A|\leq\frac{2|W\setminus B|}{\delta_{min}}.\] Proof.: For each edge of \(X\), we treat it as a vertex of the embedded graph and allocate it to \(B\) or \(W\setminus B\) as follows. If both end vertices of an edge of \(X\) are contained in \(A\), we place the edge in \(B\). Similarly, if both end vertices of an edge of \(X\) are contained in \(V\setminus A\), we place the edge in \(W\setminus B\). Among the remaining edges, which connect \(A\) and \(V\setminus A\), we choose one at random and place it in \(B\), and place all the others in \(W\setminus B\). By Lemma 3.1, the subgraph of the \(1\)-skeleton induced by \(A\) is connected, so the number of edges of \(X\) with both end vertices in \(A\) is at least \(|A|-1\). Thus \(|A|\leq|B|\), with equality when the induced subgraph is a tree. To deduce that \(|V\setminus A|\leq\frac{2|W\setminus B|}{\delta_{min}}\), we compare \(2|W\setminus B|\) with \(\sum_{v\in V\setminus A}\delta(v)\). Consider a pair \((v,e)\) where \(v\in V\) is incident to an edge \(e\) of \(X\); we refer to this pair as an _incident pair_. Note that the number of incident pairs \((v,e)\) with \(v\in V\setminus A\) equals \(\sum_{v\in V\setminus A}\delta(v)\). All the edges of \(X\) with both end vertices in \(V\setminus A\) are contained in \(W\setminus B\). If all the edges connecting a vertex of \(V\setminus A\) with a vertex of \(A\) were also contained in \(W\setminus B\), every incident pair \((v,e)\) with \(v\in V\setminus A\) would contribute to \(2|W\setminus B|\), and we could immediately conclude that \(2|W\setminus B|\geq\sum_{v\in V\setminus A}\delta(v)\). However, exactly one edge connecting a vertex in \(V\setminus A\) and a vertex in \(A\) is contained in \(B\). Consequently, one incident pair \((v,e)\) with \(v\in V\setminus A\) does not contribute to \(2|W\setminus B|\). Consider a \(2\)-face of \(X\) that contains this particular edge. This \(2\)-face also contains at least one other edge in \(W\setminus B\) that connects a vertex in \(V\setminus A\) and a vertex in \(A\); see Figure 1. The incident pair consisting of the vertex in \(A\) and this edge contributes to \(2|W\setminus B|\) and compensates for the incident pair involving the vertex of \(V\setminus A\) from the previous edge. As a result, we have \(2|W\setminus B|\geq\sum_{v\in V\setminus A}\delta(v)\geq\delta_{\min}\cdot|V\setminus A|\), which implies \(|V\setminus A|\leq\frac{2|W\setminus B|}{\delta_{min}}\). To prove that \(\frac{|E(B,W\setminus B)|}{2}\leq|F(A,V\setminus A)|\), let \(F^{\prime}\) be the set of \(2\)-faces of \(X\) that contain edges of \(E(B,W\setminus B)\); then each \(2\)-face of \(F^{\prime}\) contains both a vertex of \(V\setminus A\) and a vertex of \(A\), and exactly two edges of \(E(B,W\setminus B)\). Thus \(F^{\prime}\subset F(A,V\setminus A)\). This implies \(|F(A,V\setminus A)|\geq|F^{\prime}|=\frac{|E(B,W\setminus B)|}{2}\). \(\Box\) Proof of Theorem 1.2.: Let \(A\subset V\) realize \(\min_{0<|A|<|V|}\frac{|V|\cdot|F(A,V\setminus A)|}{|A|\cdot|V\setminus A|}\). There exists \(B\subset W\) which satisfies the conditions of Lemma 3.2. 
Thus from Lemma 3.2 and Theorem 2.2, we can establish the inequality \[H\geq\frac{|V|\cdot\frac{|E(B,W\setminus B)|}{2}}{|B|\cdot\frac{2|W\setminus B|}{\delta_{min}}}=\frac{|V|\cdot\delta_{min}}{4|W|}\cdot\frac{|W|\cdot|E(B,W\setminus B)|}{|B|\cdot|W\setminus B|}\geq\frac{|V|\cdot\delta_{min}\cdot(2D-\lambda)}{4\cdot|W|}.\ \Box\] Figure 1. Edges connecting vertices in \(A\) and in \(V\setminus A\) ## 4. An inequality for high dimensional simplicial complexes In this section, we discuss the case where the dimensions of simplicial complexes are greater than \(2\). **Claim 4.1**.: _If we divide all vertices of an \(n\)-simplex into two nonempty sets \(P\) and \(Q\), then the number of \((n-1)\)-faces of the \(n\)-simplex containing vertices of both \(P\) and \(Q\) is either \(n\) or \(n+1\)._ Proof.: If \(P\) (or \(Q\)) consists of \(n\) vertices, then there exists exactly one \((n-1)\)-face whose vertices are all contained in \(P\) (or \(Q\)); hence the number of \((n-1)\)-faces containing vertices belonging to both \(P\) and \(Q\) is \(n\). In the other cases, all of the \((n-1)\)-faces contain vertices belonging to both \(P\) and \(Q\). \(\Box\) We prepare a key lemma for the proof of Theorem 1.3. Recall that we defined \(k=\lfloor(n+1)/2\rfloor\) in section 1. **Lemma 4.2**.: Let \(X\) be a finite connected pure \(n\)-dimensional simplicial complex and let \(V\) be the set of all vertices of \(X\). Suppose that \(A\) is a subset of \(V\) satisfying \(0<|A|<|V|\). Let \(W\) be the set of all vertices of the embedded graph of \(X\). Then there exists a subset \(B\) of \(W\) that satisfies the following conditions: \[|E(B,W\setminus B)|\leq k(n+1-k)|F(A,V\setminus A)|,\quad 1\leq|B|,\quad|V\setminus A|\leq\frac{n|W\setminus B|}{\delta_{min}}.\] Proof.: For each \((n-1)\)-face of \(X\), we treat it as a vertex of the embedded graph and allocate it to \(B\) or \(W\setminus B\) as follows. If all the vertices of an \((n-1)\)-face of \(X\) are contained in \(A\), we place the \((n-1)\)-face in \(B\). Similarly, if all the vertices of an \((n-1)\)-face of \(X\) are contained in \(V\setminus A\), we place the \((n-1)\)-face in \(W\setminus B\). Among the remaining \((n-1)\)-faces, we choose one at random and place it in \(B\), and place all the others in \(W\setminus B\). Let \(F^{\prime}\) be the set of \(n\)-faces such that each element of \(F^{\prime}\) contains at least one edge of \(E(B,W\setminus B)\). Then each \(n\)-face of \(F^{\prime}\) contains a vertex of \(A\) and a vertex of \(V\setminus A\), so \(F^{\prime}\subset F(A,V\setminus A)\). If an \(n\)-face of \(X\) contains \((n-1)\)-faces belonging to both \(B\) and \(W\setminus B\), the number of edges of \(E(B,W\setminus B)\) contained in that \(n\)-face is at least \(n\) and at most \(k(n+1-k)\). Thus, we have \(|E(B,W\setminus B)|\leq k(n+1-k)|F^{\prime}|\leq k(n+1-k)|F(A,V\setminus A)|\). To deduce that \(|V\setminus A|\leq\frac{n|W\setminus B|}{\delta_{min}}\), we compare \(n|W\setminus B|\) with \(\sum_{v\in V\setminus A}\delta(v)\). Consider a pair \((v,f)\) where \(v\in V\) is contained in an \((n-1)\)-face \(f\) of \(X\); we refer to this pair as a _contained pair_. Note that the number of contained pairs \((v,f)\) with \(v\in V\setminus A\) equals \(\sum_{v\in V\setminus A}\delta(v)\). All the \((n-1)\)-faces of \(X\) whose vertices all lie in \(V\setminus A\) are contained in \(W\setminus B\). 
If all the \((n-1)\)-faces containing vertices of both \(V\setminus A\) and \(A\) were also contained in \(W\setminus B\), every contained pair \((v,f)\) with \(v\in V\setminus A\) would contribute to \(n|W\setminus B|\), and we could immediately conclude that \(n|W\setminus B|\geq\sum_{v\in V\setminus A}\delta(v)\). However, among the \((n-1)\)-faces in \(B\), exactly one contains vertices of both \(V\setminus A\) and \(A\). Consequently, at most \(n-1\) contained pairs involving vertices in \(V\setminus A\) do not contribute to \(n|W\setminus B|\). Consider an \(n\)-face which contains this particular \((n-1)\)-face. By Claim 4.1, the number of \((n-1)\)-faces of this \(n\)-face containing vertices in both \(A\) and \(V\setminus A\) is at least \(n\). Therefore, at least \(n-1\) of these \((n-1)\)-faces are in \(W\setminus B\). The contained pairs consisting of the vertices in \(A\) and these \((n-1)\)-faces contribute to \(n|W\setminus B|\) and compensate for the \(n-1\) contained pairs involving the vertices of \(V\setminus A\) from the previous \((n-1)\)-face. As a result, we have \(n|W\setminus B|\geq\sum_{v\in V\setminus A}\delta(v)\geq\delta_{min}\cdot|V\setminus A|\), which implies \(|V\setminus A|\leq\frac{n|W\setminus B|}{\delta_{min}}\). The inequality \(|B|\geq 1\) is obvious. \(\Box\) Proof of Theorem 1.3.: We may assume that \(|A|\leq|V\setminus A|\), otherwise we simply swap \(A\) and \(V\setminus A\); in particular \(|A|\leq|V|/2\). Then \[H(X)=\min_{0<|A|<|V|}\frac{|V|\cdot|F(A,V\setminus A)|}{|A|\cdot|V\setminus A|}\geq\min_{0<|A|<|V|}\frac{2\,|F(A,V\setminus A)|}{|V\setminus A|}.\] From Lemma 4.2 and Theorem 2.2, \[\min_{0<|A|<|V|}\frac{2|F(A,V\setminus A)|}{|V\setminus A|}\geq\frac{2\cdot\frac{|E(B,W\setminus B)|}{k(n+1-k)}}{\frac{n|W\setminus B|}{\delta_{min}}}\] \[\geq\frac{2\delta_{min}}{|W|\cdot n\cdot k(n+1-k)}\cdot\frac{|W|\cdot|E(B,W\setminus B)|}{|B|\cdot|W\setminus B|}\] \[\geq\frac{2\delta_{min}\cdot(nD-\lambda)}{|W|\cdot n\cdot k\cdot(n+1-k)}.\ \Box\] ## 5. Examples In this section, we look at \(2\)-dimensional simplicial complexes. On the left-hand side of each figure is a simplicial complex, and on the right-hand side its embedded graph. The black circles represent the vertices of the simplicial complex, while the black rectangles represent the vertices of the embedded graph. **Example 4.1**.: Let us take a look at a simple yet crucial example. Consider a simplicial complex consisting of only one \(2\)-simplex, as shown in Figure 2. It is easy to see that \(H=\frac{3\cdot 1}{1\cdot 2}=\frac{3}{2}\). On the other hand, the adjacency matrix of the embedded graph is \(\left(\begin{array}{ccc}0&1&1\\ 1&0&1\\ 1&1&0\end{array}\right)\), and the second-largest eigenvalue is \(-1\). Therefore \(\frac{|V|\cdot\delta_{min}}{4|W|}(2D-\lambda)=\frac{3\cdot 2}{4\cdot 3}(2\cdot 1-(-1))=\frac{3}{2}\). This example achieves the equality of Theorem 1.2. Figure 2. The simplicial complex in Example 4.1 and its embedded graph **Example 4.2**.: In the second example, shown in Figure 3, the degree of each edge is \(3\); here \(H=\frac{5\cdot 6}{2\cdot 3}=5\). Figure 3. The simplicial complex in Example 4.2 and its embedded graph 
On the other hand, the adjacency matrix of the embedded graph is \[\left(\begin{array}{cccccccccc}0&1&1&1&0&1&1&0&1&0\\ 1&0&1&1&1&0&1&1&0&0\\ 1&1&0&0&1&1&0&1&1&0\\ 1&1&0&0&1&1&1&0&0&1\\ 0&1&1&1&0&1&0&1&0&1\\ 1&0&1&1&1&0&0&0&1&1\\ 1&1&0&1&0&0&0&1&1&1\\ 0&1&1&0&1&0&1&0&1&1\\ 1&0&1&0&0&1&1&1&0&1\\ 0&0&0&1&1&1&1&1&1&0\end{array}\right)\] and the second-largest eigenvalue is \(1\). Therefore \(\frac{|V|\cdot\delta_{min}}{4|W|}(2D-\lambda)=\frac{4\cdot 4}{4\cdot 10}(2\cdot 3-1)=2\). **Example 4.3**.: The third example is shown in Figure 4; here \(H=\frac{8\cdot 6}{4\cdot 4}=3\). The adjacency matrix of the embedded graph is \[\left(\begin{array}{cccccccccccccccc}0&1&1&1&1&0&0&0&0&0&0&0&0&0&0&0&0&0\\ 1&0&1&0&1&1&0&0&0&0&0&0&0&0&0&0&0&0\\ 1&1&0&1&0&1&0&0&0&0&0&0&0&0&0&0&0\\ 1&0&1&0&0&0&1&1&0&0&0&0&0&0&0&0&0\\ 1&1&0&0&0&0&0&1&1&0&0&0&0&0&0&0&0\\ 0&1&1&0&0&0&0&0&0&1&1&0&0&0&0&0&0\\ 0&0&0&1&0&0&1&0&0&0&1&1&0&0&0&0&0\\ 0&0&0&1&0&0&1&0&1&0&0&0&1&0&0&0&0\\ 0&0&0&0&1&0&0&0&1&0&1&0&0&1&0&0&0\\ 0&0&0&0&0&1&0&0&0&1&0&1&0&0&1&0&0&0\\ 0&0&0&0&0&1&1&0&0&0&1&0&1&0&0&0&0\\ 0&0&0&0&0&1&0&0&0&1&0&1&0&0&1&0&0\\ 0&0&0&0&0&1&1&0&0&0&1&0&0&0&0&1&1\\ 0&0&0&0&0&0&0&0&0&0&0&1&1&0&1&1\\ 0&0&0&0&0&0&0&0&0&0&0&1&1&1&1&0&0\\ \end{array}\right)\] and the second-largest eigenvalue is \(1+\sqrt{5}\). Therefore \(\frac{|V|\cdot\delta_{min}}{4|W|}(2D-\lambda)=\frac{8\cdot 3}{4\cdot 18}(2\cdot 2-(1+\sqrt{5}))=\frac{3-\sqrt{5}}{3}\).
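As a numerical cross-check of Example 4.2 (our own addition, assuming the \(10\times 10\) matrix reproduced above), the second-largest adjacency eigenvalue can be computed in a few lines of NumPy:

```python
import numpy as np

# Adjacency matrix of the embedded graph in Example 4.2 (as printed above).
A = np.array([
    [0,1,1,1,0,1,1,0,1,0],
    [1,0,1,1,1,0,1,1,0,0],
    [1,1,0,0,1,1,0,1,1,0],
    [1,1,0,0,1,1,1,0,0,1],
    [0,1,1,1,0,1,0,1,0,1],
    [1,0,1,1,1,0,0,0,1,1],
    [1,1,0,1,0,0,0,1,1,1],
    [0,1,1,0,1,0,1,0,1,1],
    [1,0,1,0,0,1,1,1,0,1],
    [0,0,0,1,1,1,1,1,1,0],
])
eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
print(eigs[:2])   # [6., 1.]: largest is 2D = 6, second-largest is 1 as stated
```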
2309.11949
Quantum State Reconstruction in a Noisy Environment via Deep Learning
Quantum noise is currently limiting efficient quantum information processing and computation. In this work, we consider the tasks of reconstructing and classifying quantum states corrupted by the action of an unknown noisy channel using classical feedforward neural networks. By framing reconstruction as a regression problem, we show how such an approach can be used to recover with fidelities exceeding 99% the noiseless density matrices of quantum states of up to three qubits undergoing noisy evolution, and we test its performance with both single-qubit (bit-flip, phase-flip, depolarising, and amplitude damping) and two-qubit quantum channels (correlated amplitude damping). Moreover, we also consider the task of distinguishing between different quantum noisy channels, and show how a neural network-based classifier is able to solve such a classification problem with perfect accuracy.
Angela Rosy Morgillo, Stefano Mangini, Marco Piastra, Chiara Macchiavello
2023-09-21T10:03:30Z
http://arxiv.org/abs/2309.11949v1
# Quantum State Reconstruction in a Noisy Environment via Deep Learning ###### Abstract Quantum noise is currently limiting efficient quantum information processing and computation. In this work, we consider the tasks of reconstructing and classifying quantum states corrupted by the action of an unknown noisy channel using classical feedforward neural networks. By framing reconstruction as a regression problem, we show how such an approach can be used to recover with fidelities exceeding 99% the noiseless density matrices of quantum states of up to three qubits undergoing noisy evolution, and we test its performance with both single-qubit (bit-flip, phase-flip, depolarising, and amplitude damping) and two-qubit quantum channels (correlated amplitude damping). Moreover, we also consider the task of distinguishing between different quantum noisy channels, and show how a neural network-based classifier is able to solve such a classification problem with perfect accuracy. ## I Introduction One of the main problems in quantum information processing and computation is that quantum systems can be corrupted by unwanted interactions with the environment. Therefore, the incorporation of robust quantum error correction and mitigation strategies is of paramount importance to realize the full potential of quantum information processing. Despite the effectiveness of quantum error correction protocols in preserving information, they often require significant overhead and resources. Quantum error mitigation techniques, on the other hand, focus on reducing the impact of noise without fully correcting it, making them more feasible for near-term quantum devices [1; 2]. Examples are readout mitigation techniques to correct measurement errors [3; 4; 5], noise deconvolution methods to retrieve ideal expectation values of generic observables evaluated on a system subject to a known noise before measurement [6; 7], probabilistic error cancellation [8], and data-driven approaches such as zero-noise extrapolation [9] and Clifford data regression [10; 11; 12] to mitigate noise occurring during a quantum computation. Another area of great interest is deep learning, which has achieved impressive successes over the past years, with generative pre-trained large language models now leading the way [13; 14; 15]. Deep learning models have excelled in diverse areas, from image and speech recognition [16; 17] to playing games [18], reaching and often surpassing human-level performance. These advancements highlight the vast potential of deep learning in revolutionizing numerous fields, including quantum computation and information. Indeed, deep learning techniques have shown great promise also for quantum information processing applications, as they were leveraged successfully in, e.g., experimental phase estimation tasks [19], automating the development of QCVV (quantum characterization, validation and verification) protocols [20], learning quantum hardware-specific noise models [21], increasing the measurement precision of quantum observables with neural networks [22], quantum error mitigation [23; 24; 25], identifying quantum protocols such as teleportation or entanglement purification [26], classification and reconstruction of optical quantum states [27], and quantum state estimation [28]. In this work, we leverage machine learning techniques based on feed-forward neural networks to tackle the task of recovering noise-free quantum states after they undergo an undesired noisy evolution. 
In fact, while it is well known that quantum noisy channels cannot, in general, be physically inverted, such an inversion may be achieved by means of classical post-processing methods [6; 7; 8]. In particular, since neural networks are universal approximators [29], they can be used to learn a mapping that effectively inverts the effect of noise, and hence they can be used to reconstruct noiseless quantum states. Specifically, let \(\mathbf{\tilde{r}}\) indicate the (generalised) Bloch components of a noisy quantum state; our goal is to train a neural network \(h_{\mathbf{w}}(\cdot)\) to output the Bloch vector of the ideal noiseless state, \(\mathbf{\tilde{r}}\to h_{\mathbf{w}}(\mathbf{\tilde{r}})=\mathbf{r}\), where \(\mathbf{r}\) is the ideal Bloch vector of the state before it undergoes the noise process. We explore several combinations of single- and two-qubit noisy channels acting on systems of up to three qubits, study the effect of using different loss functions for training, and show that our neural network-based method can reach quantum state reconstruction fidelities higher than 99.9%. The main idea of the proposed method is summarised in Figure 1. In addition to regression tasks, feed-forward neural networks can be used for classifying different quantum channels based on the effect they have on quantum states. In particular, using as inputs Bloch vectors \([\mathbf{\tilde{r}},\mathbf{r}]\) obtained with different channels, the network will output a label corresponding to the quantum channel that has been applied to \(\mathbf{r}\) in order to produce \(\mathbf{\tilde{r}}\). Also in this case, we achieve almost perfect channel classification accuracy. The rest of the manuscript is organised as follows. In Section II we formally introduce the problem and the neural network used to obtain the quantum state reconstruction. In Section III we present the results obtained for the reconstruction of pure and mixed states, and we introduce the noise classification problems that can be solved similarly with neural networks. In Section IV we summarise all our results and discuss possible improvements of our method. ## II Methods In this section, we formalise the quantum communication problem we want to tackle, that is, the recovery of noiseless quantum states undergoing an undesired noisy evolution. We first introduce the notation for describing an \(n\)-qubit quantum state in terms of its Bloch components, and then move on to discussing the neural network approach used in this work, including details on the optimisation procedure and the construction of the training and test datasets. ### Reconstruction of noisy Bloch vectors The state of an \(n\)-qubit quantum system is described by its density matrix \(\rho\in\mathbb{C}^{2^{n}\times 2^{n}}\), which can be expressed in the Pauli basis as follows [30; 31] \[\rho=\frac{1}{2^{n}}(\mathbb{I}_{2^{n}}+\mathbf{r}\cdot\mathbf{P}) \tag{1}\] where \(\mathbf{r}\in\mathbb{R}^{4^{n}-1}\) is the _generalised Bloch vector_, and \(\mathbf{P}=(P_{1},\,\dots,\,P_{4^{n}-1})\) is a vector containing the multi-qubit Pauli basis, obtained by considering tensor products of single-qubit Pauli matrices, that is \(P_{i}=\sigma_{1}^{(i)}\otimes\cdots\otimes\sigma_{n}^{(i)}\), with \(\sigma_{k}^{(i)}\in\{\mathbb{I},X,Y,Z\}\). 
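As an illustration of the parametrization in Eq. (1), the following short Python sketch (our own, not code from the paper; it uses NumPy and a naive enumeration of Pauli strings) extracts the generalized Bloch vector of an \(n\)-qubit density matrix via \(r_{i}=\text{Tr}[\rho P_{i}]\):

```python
from itertools import product
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_basis(n):
    """All 4^n - 1 non-identity n-qubit Pauli strings P_i."""
    strings = []
    for combo in product([I2, X, Y, Z], repeat=n):
        P = combo[0]
        for M in combo[1:]:
            P = np.kron(P, M)
        strings.append(P)
    return strings[1:]                 # drop the identity string

def bloch_vector(rho):
    """Generalized Bloch vector r_i = Tr[rho P_i] of Eq. (1)."""
    n = int(np.log2(rho.shape[0]))
    return np.array([np.trace(rho @ P).real for P in pauli_basis(n)])

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
print(bloch_vector(rho))                          # [0. 0. 1.]
```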
Quantum channels are completely positive trace preserving (CPTP) maps whose action on a state \(\rho\) can be expressed in Kraus form as [30] \[\rho\longrightarrow\tilde{\rho}=\mathcal{N}(\rho)=\sum_{i}E_{i}\,\rho\,E_{i}^{\dagger}\,, \tag{2}\] where \(\{E_{i}\}\) are the Kraus operators of the channel \(\mathcal{N}(\cdot)\), satisfying the trace-preserving condition \(\sum_{i}E_{i}^{\dagger}E_{i}=\mathbb{I}\). In our experiments, we consider various single-qubit noisy channels (bit-flip \(\mathcal{X}_{p}\), phase-flip \(\mathcal{Z}_{p}\), bit-phase-flip \(\mathcal{Y}_{p}\), general Pauli \(\mathcal{P}_{\mathbf{p}}\), depolarizing \(\mathcal{D}_{p}\) and amplitude damping \(\mathcal{A}_{p\gamma}\)), as well as a correlated two-qubit amplitude damping channel. We refer to Appendix A for an extended discussion of the quantum noise models used in this work. Given a noisy channel \(\mathcal{N}(\cdot)\), our goal is to obtain through a learning procedure an optimised neural network that receives noisy Bloch vectors \(\{\mathbf{\tilde{r}}_{k}\}\) and outputs the corresponding noiseless vectors \(\{\mathbf{r}_{k}\}\). In other words, we are looking for the function \(h(\cdot)\) which inverts the action of the noise on the Bloch components of the quantum states, namely \[\begin{split}& h(\mathbf{\tilde{r}})=\mathbf{r}\,,\quad\text{with}\\ &\rho=\frac{1}{2^{n}}(\mathbb{I}_{2^{n}}+\mathbf{r}\cdot\mathbf{P})\\ &\tilde{\rho}=\mathcal{N}(\rho)=\frac{1}{2^{n}}(\mathbb{I}_{2^{n}}+\mathbf{\tilde{r}}\cdot\mathbf{P})\,.\end{split} \tag{3}\] ### Reconstruction with neural networks We provide a concise overview of the fundamental aspects of neural networks, discussing their relevance to the task of quantum state reconstruction. #### ii.2.1 Generation of the training set The initial phase of addressing a regression problem involves constructing a valid dataset. In our specific case, the training (and validation) set consists of pairs of noisy and noiseless Bloch vectors \[\mathcal{T}=\{(\mathbf{\tilde{r}}_{m},\,\mathbf{r}_{m})\}_{m=1}^{M}\,, \tag{4}\] which are obtained by evolving some input quantum states \(\rho_{m}\) through the noisy channel under investigation, thus obtaining the noisy states \(\tilde{\rho}_{m}\). The Bloch components of these density matrices are then computed as \(r_{i}=\text{Tr}[\rho\,P_{i}]\) for \(i=1,\dots,4^{n}-1\) (cf. Eq. (1)). The choice of the (generalized) Bloch vector as the dataset representation is motivated by two considerations: first, each quantum state is characterized by its own vector, which grants a unique representation of the state; second, a vector fits naturally into the processing structure of a feed-forward neural network. The input quantum states we consider are uniformly distributed in the space of quantum states. For the case of pure states \(\rho_{m}=|\psi_{m}\rangle\!\langle\psi_{m}|\), these are obtained by sampling states \(|\psi_{m}\rangle\) from the Haar distribution [32; 33], while uniformly distributed mixed states can be generated either starting from uniformly distributed pure states by means of an appropriate rescaling [34; 35] or by using the Ginibre ensemble [36; 37; 38]. The cardinality \(|\mathcal{T}|=M\) of the dataset is contingent upon the specific problem under consideration and, as demonstrated in Sec. III, has a direct impact on the performance of the network. 
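A minimal sketch of this dataset-generation step is given below. It is our own illustration, not the paper's code: we sample Haar-random pure single-qubit states, apply the phase-flip channel of Appendix A through its Kraus operators (Eq. (2)), and store the pairs of Eq. (4) as Bloch vectors.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch(rho):                                   # single-qubit Bloch vector
    return np.array([np.trace(rho @ P).real for P in (X, Y, Z)])

def haar_random_pure_state():
    """Haar-random single-qubit pure state (normalized complex Gaussian)."""
    psi = np.random.randn(2) + 1j * np.random.randn(2)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def apply_channel(rho, kraus_ops):                # Kraus action of Eq. (2)
    return sum(E @ rho @ E.conj().T for E in kraus_ops)

p = 0.2
phase_flip = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * Z]

dataset = []                                      # T = {(r_noisy, r_ideal)}
for _ in range(30):                               # |T| = 30, as in Table 1
    rho = haar_random_pure_state()
    dataset.append((bloch(apply_channel(rho, phase_flip)), bloch(rho)))
```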
As the quantum computational resources needed to generate the training set \(\mathcal{T}\) may be demanding experimentally, one generally has to find a compromise between achieving high reconstruction accuracies and the number of samples (i.e. quantum states) included in the dataset. #### ii.2.2 Feed-forward neural networks In this work, we analyze data using deep feed-forward neural networks, which are parametric models that process information in a layer-wise fashion through the repeated application of similar operations, as shown in Fig. 1(a). These neural networks consist of an _input layer_ responsible for data loading, followed by multiple _hidden layers_ that process the information, and finally an _output layer_ yielding the result of the computation. Each layer consists of a set of individual nodes known as _neurons_; while the input and output layers have a number of neurons matching the dimensions of the input and the output respectively, the number of neurons in the hidden layers is an architectural hyperparameter to be chosen in advance by the designer. For example, the action of a feed-forward neural network with two hidden layers and trainable parameters \(\mathbf{\theta}\) can be expressed as \[\begin{split}\mathbf{\hat{y}}&=\text{NN}_{\mathbf{\theta}}(\mathbf{x})\\ &=\mathbf{w}\cdot g\Big{(}\mathbf{W}^{[2]}\ g\Big{(}\mathbf{W}^{[1]}\mathbf{x}+\mathbf{b}^{[1]}\Big{)}+\mathbf{b}^{[2]}\Big{)}+b\end{split} \tag{5}\] where \(\mathbf{x}\in\mathbb{R}^{d}\) and \(\hat{\mathbf{y}}\in\mathbb{R}^{p}\) are the input and output vectors, \(\mathbf{W}^{[1]}\in\mathbb{R}^{h_{1}\times d}\) and \(\mathbf{b}^{[1]}\in\mathbb{R}^{h_{1}}\) are the trainable parameters of the first hidden layer, \(\mathbf{W}^{[2]}\in\mathbb{R}^{h_{2}\times h_{1}}\) and \(\mathbf{b}^{[2]}\in\mathbb{R}^{h_{2}}\) are the trainable parameters of the second hidden layer, \(\mathbf{w}\in\mathbb{R}^{p}\) and \(b\in\mathbb{R}\) are the trainable parameters of the output layer, and \(g(\cdot)\) is a non-linear activation function which is applied element-wise to the entries of the vectors. As previously mentioned, \(h_{1}\) and \(h_{2}\) are hyperparameters that represent the number of hidden neurons in each respective layer. In our simulations, we explore different architectures using networks with 2 or 3 hidden layers and \(h_{i}\in\{64,128\}\) hidden neurons per layer, while the input and output layers have dimension \(d=p=4^{n}-1\), as they are employed to represent the components of the Bloch vectors. For the activation function, as customary in machine learning, we adopt the Rectified Linear Unit (ReLU), defined as \(g(x)=\text{ReLU}(x)\coloneqq\max(0,x)\).
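A possible TensorFlow/Keras realisation of the architecture of Eq. (5) is sketched below for \(n=1\) (so \(d=p=3\)). This is our own minimal example: the two hidden layers of 64 ReLU neurons are one choice within the ranges quoted above, not necessarily the exact models used in the paper.

```python
import tensorflow as tf

n = 1
dim = 4 ** n - 1                       # input/output dimension 4^n - 1
model = tf.keras.Sequential([
    tf.keras.Input(shape=(dim,)),      # noisy Bloch vector
    tf.keras.layers.Dense(64, activation="relu"),   # first hidden layer
    tf.keras.layers.Dense(64, activation="relu"),   # second hidden layer
    tf.keras.layers.Dense(dim),        # predicted noiseless Bloch vector
])
model.summary()
```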
#### ii.2.3 Performance metrics Given the dataset and the trainable model, we discuss the figures of merit employed for training and for evaluating the neural network's performance in the quantum state reconstruction and noise classification tasks. In the context of quantum state reconstruction, we have tested two possible alternatives coming from the classical and quantum information domains respectively, namely the Mean Squared Error (MSE) between the reconstructed and ideal Bloch vectors, and the quantum infidelity between the reconstructed and original quantum states. Figure 1: Outline of the neural network-based noise reconstruction and classification protocols. **(a)** Noisy Bloch vectors, representing quantum states affected by noise, serve as input to a feed-forward neural network. The quantum state reconstruction protocol aims to recover the original noiseless quantum states from the observed noisy Bloch vectors, utilizing the neural network model. **(b)** Both noisy and noiseless Bloch vectors are fed into the neural network as input. The network is specifically designed for a classification task, where the output provides a label representing the type of noise acting on the noiseless quantum state. In the noise classification problems, we used both the categorical cross-entropy (15) and the accuracy metric (16) in order to assess how effectively the neural network can distinguish between different types of noisy channels. MSE.--The mean squared error is the most common measure of performance for regression problems in machine learning and consists of the squared Euclidean distance between vectors, which in our case becomes \[\ell(\boldsymbol{\theta},\boldsymbol{r}_{i})=\|\boldsymbol{r}_{i}-\hat{\boldsymbol{r}}_{i}(\boldsymbol{\theta})\|^{2}=\|\boldsymbol{r}_{i}-\text{NN}_{\boldsymbol{\theta}}(\boldsymbol{\tilde{r}}_{i})\|^{2}, \tag{6}\] where \(\boldsymbol{r}_{i}\) is the noiseless Bloch vector (see Eq. (4)), and \(\boldsymbol{\hat{r}}_{i}(\boldsymbol{\theta})=\text{NN}_{\boldsymbol{\theta}}(\boldsymbol{\tilde{r}}_{i})\) is the one predicted by the neural network, with trainable parameters \(\boldsymbol{\theta}\), when receiving as input the noisy Bloch vector \(\boldsymbol{\tilde{r}}_{i}\). Then, the mean squared error over the entire dataset \(\mathcal{T}\) of size \(|\mathcal{T}|=M\) is \[\mathcal{L}_{\text{MSE}}(\boldsymbol{\theta};\,\mathcal{T})=\frac{1}{M}\sum_{i=1}^{M}\ell(\boldsymbol{\theta},\boldsymbol{r}_{i})=\frac{1}{M}\sum_{i=1}^{M}\|\boldsymbol{r}_{i}-\hat{\boldsymbol{r}}_{i}\|^{2}\quad(\boldsymbol{\tilde{r}}_{i},\boldsymbol{r_{i}})\in\mathcal{T}\,. \tag{7}\] Infidelity.--The quantum fidelity is a measure of distance between quantum states and, given the quantum nature of the data under investigation, it is particularly suited to assess the reconstruction performance of the neural network. Given two density matrices \(\rho\) and \(\sigma\), their fidelity is defined as [30] \[F(\rho,\sigma)\coloneqq\text{Tr}\left[\sqrt{\sqrt{\rho}\,\sigma\,\sqrt{\rho}}\right]^{2}, \tag{8}\] with \(0\leq F(\rho,\sigma)\leq 1\), where the upper bound is attained if and only if the states are equal, \(\rho=\sigma\). The _infidelity_ between two quantum states is then defined as \(I(\rho,\sigma)\coloneqq 1-F(\rho,\sigma)\). Despite being well suited to measure the distance between quantum states, the complex functional dependence of the infidelity on the Bloch vectors of the density matrices --and hence on the parameters of the neural network-- often leads to numerical instabilities when it is used as the loss function driving the training of the neural network, eventually impairing the optimisation process. For this reason, when \(\rho\) and \(\sigma\) are pure states we instead use directly the simplified but equivalent expression for the fidelity \[F(\rho,\sigma)=\text{Tr}[\rho\,\sigma]\,,\quad\text{for }\rho,\sigma\text{ pure}\,, \tag{9}\] while if the states are mixed, we use an alternative measure of distance proposed in [39] \[F(\rho,\sigma)=\frac{|\text{Tr}[\rho\,\sigma]|}{\sqrt{\text{Tr}[\rho^{2}]\,\text{Tr}[\sigma^{2}]}}\,,\quad\text{for }\rho,\sigma\text{ mixed}\,. \tag{10}\] The fidelity in Eq. (10) reaches the maximum \(1\) if and only if \(\rho=\sigma\). However, it differs from the standard fidelity reported in Eq. 
(8) in several respects: it is not monotonic under quantum operations, it is neither concave nor convex, and it includes a normalization factor, resulting in a scaled fidelity measure when one of the two states is pure, i.e. if \(\sigma=|\psi\rangle\!\langle\psi|\), then \(F(\rho,|\psi\rangle\!\langle\psi|)=\langle\psi|\rho|\psi\rangle/\sqrt{\text{Tr}\,\rho^{2}}\). From the fidelity expressions in Eqs. (9) and (10), we calculate the corresponding infidelity and perform an average over the entire dataset, yielding \[\mathcal{L}_{\text{INF}}(\boldsymbol{\theta};\,\mathcal{T})=\frac{1}{M}\sum_{i=1}^{M}1-F(\rho_{i},\sigma_{i}), \tag{11}\] where \(\rho_{i}\) and \(\sigma_{i}\) are the density matrices computed respectively from the Bloch vectors \(\boldsymbol{r}_{i}\) and \(\hat{\boldsymbol{r}}_{i}=\hat{\boldsymbol{r}}_{i}(\theta)=\text{NN}_{\boldsymbol{\theta}}(\boldsymbol{\tilde{r}}_{i})\), with \((\boldsymbol{\tilde{r}}_{i},\boldsymbol{r}_{i})\in\mathcal{T}\). Moreover, it is worth noticing that for single-qubit density matrices the fidelity (8) can be further simplified and expressed in terms of the Bloch vectors as [40] \[F(\boldsymbol{r},\boldsymbol{s})=\frac{1}{2}\Big{(}1+\boldsymbol{r}\cdot\boldsymbol{s}+\sqrt{(1-\|\boldsymbol{r}\|^{2})(1-\|\boldsymbol{s}\|^{2})}\Big{)}, \tag{12}\] where \(\boldsymbol{r},\boldsymbol{s}\in\mathbb{R}^{3}\) are the Bloch vectors of \(\rho\) and \(\sigma\) respectively. In the particular case when both states are pure, \(\|\boldsymbol{r}\|^{2}=\|\boldsymbol{s}\|^{2}=1\), the expression for the infidelity corresponds to the mean squared error up to a prefactor, \[I(\boldsymbol{r},\boldsymbol{s})=1-\frac{1}{2}(1+\boldsymbol{r}\cdot\boldsymbol{s})=\frac{\|\boldsymbol{s}-\boldsymbol{r}\|^{2}}{4}\,. \tag{13}\] We refer to Appendix B for more details on the derivation of Eqs. (12) and (13). Notably, to the best of our knowledge, apart from single-qubit states there is no straightforward connection between the fidelity of quantum states and the Euclidean distance of their generalised Bloch vectors. Finally, in order to assess the performance of the optimised neural network we introduce the Average Test Fidelity (ATF), defined as the mean fidelity between the predicted quantum states and their corresponding ideal counterparts, averaged over a test dataset \(\tilde{\mathcal{T}}\) which was not used during training. The ATF is calculated as \[\text{ATF}(\boldsymbol{\theta};\tilde{\mathcal{T}})=\frac{1}{N}\sum_{i=1}^{N}F(\rho_{i},\sigma_{i}), \tag{14}\] where \(N\) is the cardinality of the test set, \(F(\cdot,\,\cdot)\) is the fidelity in Eq. (8), and \(\rho_{i}\) and \(\sigma_{i}\) are the density matrices computed respectively from the Bloch vectors \(\boldsymbol{r}_{i}\) and \(\boldsymbol{\hat{r}}_{i}=\text{NN}_{\boldsymbol{\theta}}(\boldsymbol{\tilde{r}}_{i})\), with \((\boldsymbol{\tilde{r}}_{i},\boldsymbol{r}_{i})\in\tilde{\mathcal{T}}\). A high average test fidelity, typically exceeding \(99.9\%\), indicates that our neural network is capable of accurately reconstructing the corrupted quantum states. 
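For reference, the fidelity measures above translate into a few lines of NumPy. The following is our own sketch (the function names are ours) of Eqs. (9), (10) and the single-qubit Bloch form of Eq. (12):

```python
import numpy as np

def fidelity_pure(rho, sigma):                    # Eq. (9)
    return np.trace(rho @ sigma).real

def fidelity_mixed(rho, sigma):                   # Eq. (10)
    num = abs(np.trace(rho @ sigma))
    den = np.sqrt(np.trace(rho @ rho).real * np.trace(sigma @ sigma).real)
    return num / den

def fidelity_bloch(r, s):                         # Eq. (12), single qubit
    r, s = np.asarray(r, float), np.asarray(s, float)
    return 0.5 * (1 + r @ s + np.sqrt((1 - r @ r) * (1 - s @ s)))

# Orthogonal pure states have fidelity 0, identical ones fidelity 1.
print(fidelity_bloch([0, 0, 1], [0, 0, -1]),      # 0.0
      fidelity_bloch([0, 0, 1], [0, 0, 1]))       # 1.0
```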
Categorical Cross-Entropy.--The categorical cross-entropy is one of the most common measures used to evaluate the performance of classification models [41]. Given a classification task with \(C\) different classes, the categorical cross-entropy quantifies the disparity between the predicted probability distribution over the classes and the true class labels, and it is mathematically defined as \[\text{CCE}\coloneqq-\sum_{i=1}^{N}\sum_{j=1}^{C}y_{ij}\log(p_{ij}), \tag{15}\] where \(N\) is the number of samples to be classified, \(y_{i}=(0,0,\ldots,1_{c(i)},\ldots,0)\in\{0,1\}^{C}\) is the true probability distribution for the \(i\)-th sample, indicating that it belongs to class \(c(i)\), and \(p_{ij}\in[0,\,1]\) is the model's predicted probability that the \(i\)-th sample belongs to the \(j\)-th class. This metric penalizes the deviation between the predicted and actual distributions, with lower values indicating better alignment between the model's predictions and the true labels. Accuracy.--For classification tasks the evaluation of performance commonly revolves around the accuracy, which quantifies the effectiveness of a model in correctly categorizing the samples in a dataset. Mathematically, the accuracy is defined as the ratio of the number of correctly classified samples to the total number of samples, namely \[\text{ACC}\coloneqq\frac{\text{number of correct predictions}}{\text{total number of predictions}}, \tag{16}\] where a value of \(1\) indicates a perfect classification. #### ii.2.4 Optimization Training the neural network means solving the minimisation problem \[\mathbf{\theta}_{\text{opt}}=\operatorname*{arg\,min}_{\mathbf{\theta}}\mathcal{L}(\mathbf{\theta};\,\mathcal{T}), \tag{17}\] where \(\mathbf{\theta}\) are the trainable parameters of the neural network, \(\mathcal{T}\) is the training dataset as defined in Eq. (4), and \(\mathcal{L}\) is the loss function driving the learning process, which, as discussed in the previous section, in our case is either the mean squared error (7) or the average infidelity (11). We optimize the neural networks using Adam, a variant of stochastic gradient descent with adaptive estimation of first- and second-order moments [42]. For the results shown in Sec. III, we report the combination of training hyperparameters (number of training epochs, batch size, and learning rate) that attains the best performance. 
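Schematically, the minimisation of Eq. (17) with the MSE loss of Eq. (7) reads as follows in Keras. This is a hedged sketch: the random arrays merely stand in for the Bloch-vector dataset \(\mathcal{T}\), and the epoch count, batch size and learning rate are placeholder values, since the paper only states that the best-performing hyperparameters were selected.

```python
import numpy as np
import tensorflow as tf

dim = 3                                           # single-qubit case
model = tf.keras.Sequential([
    tf.keras.Input(shape=(dim,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(dim),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="mse")                         # Eq. (7) as the loss

r_noisy = np.random.rand(30, dim)                 # stand-ins for the pairs in T
r_ideal = np.random.rand(30, dim)
model.fit(r_noisy, r_ideal, epochs=100, batch_size=8, verbose=0)
```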
## III Results We now present the results obtained for quantum state reconstruction and quantum noise classification using the proposed neural network methods. We start by discussing the performance of the neural network in reconstructing noise-free quantum states corrupted by various single- and two-qubit noisy channels, and then proceed to showcase the network's capability to accurately classify different quantum noisy channels. Our results demonstrate high-fidelity state reconstruction and robust channel classification, thus revealing the potential of machine learning techniques for quantum information processing and computation. All simulations performed in this work are run with Qiskit [43] and TensorFlow [44]. ### Quantum State Reconstruction In order to explore the reconstruction capabilities of the neural network approach, we first study its performance in the simpler case of learning noisy single-qubit states, and then move on to the more complex case of multi-qubit systems. In both scenarios, we see a clear dependence of the performance on the amount of data available to train the model (the size of the training set) and, provided that the data is sufficient, the network is always able to restore noiseless quantum states from their noisy counterparts. Whenever we consider the task of reconstructing initially pure states, an auxiliary _normalization layer_ is added at the end of the neural network (5) so that the outputs always consist of (Bloch) vectors with unit norm (for single-qubit states), as required for pure states. Such a normalisation constraint enforces the generation of physically consistent output states, which in turn effectively constrains the value of the infidelity loss function to remain in the physical regime1. It is worth noting that even when the MSE is employed as the loss function, the same normalization layer is used to ensure that the predicted states maintain their physical integrity and adhere to the desired constraints. The effect of the normalization layer is simply to rescale each output as follows Footnote 1: The fidelity (8) can be larger than \(1\) if the Bloch vectors have norms exceeding one, which is not possible for density matrices of quantum systems. If no constraint on the Bloch vectors is present, then the training process will simply push the neural network to output states of larger and larger norm to maximise the overlap. \[\text{NN}_{\mathbf{\theta}}(\mathbf{\tilde{r}})=\hat{\mathbf{r}}\longrightarrow\hat{\mathbf{r}}/\|\hat{\mathbf{r}}\|_{2}\,. \tag{18}\] In addition, whenever we consider the case of reconstructing already mixed states \(\rho\) of given purity \(\text{Tr}\big{[}\rho^{2}\big{]}\), the output of the neural network is additionally rescaled with the purity of the initial mixed state, \(\hat{\mathbf{r}}\rightarrow\sqrt{\text{Tr}[\rho^{2}]}\hat{\mathbf{r}}\). 
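One way to realise the rescaling of Eq. (18) in Keras --an assumption on our part, not necessarily the paper's implementation-- is to append a Lambda layer that normalises the output Bloch vector and multiplies it by the desired target norm (1 for single-qubit pure states, or the purity-dependent values discussed above and below):

```python
import tensorflow as tf

def with_normalization(core_model, target_norm=1.0):
    """Append the rescaling of Eq. (18): r -> target_norm * r / ||r||."""
    return tf.keras.Sequential([
        core_model,
        tf.keras.layers.Lambda(
            lambda r: target_norm * r / tf.norm(r, axis=-1, keepdims=True)),
    ])

core = tf.keras.Sequential([tf.keras.Input(shape=(3,)),
                            tf.keras.layers.Dense(3)])
model = with_normalization(core, target_norm=1.0)   # single-qubit pure states
```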
Single-qubit states.--In Table 1 we summarize the results of the reconstruction process for various noisy channels, using different loss functions to drive the training process and using pure or mixed states as inputs to the neural network. In all cases we observe a very good average test fidelity (ATF) (14) at the end of training, exceeding \(99.6\%\), thus showing the effectiveness of the proposed approach for reconstructing noisy states. As we now show, equally good performances are also obtained in the more complex task of inverting noise in multi-qubit systems. Multi-qubit states.--We tested the reconstruction procedure also on systems of \(n=2\) and \(n=3\) qubits undergoing several noisy evolutions with both uncorrelated and correlated noisy channels, and we summarize the results in Table 2. For two-qubit systems, we considered the following noise maps: (_i_) a phase-flip channel with \(p=0.2\) applied to the first qubit and the identity on the second, indicated as \(\mathcal{Z}_{p}\otimes\mathcal{I}\); (_ii_) a phase-flip channel applied to both qubits with \(p=0.2\), denoted as \(\mathcal{Z}_{p}\otimes\mathcal{Z}_{p}\); (_iii_) a phase-flip channel on the first qubit and a bit-flip channel on the second qubit, both with \(p=0.2\), denoted as \(\mathcal{Z}_{p}\otimes\mathcal{X}_{p}\); (_iv_) and finally a scenario where a correlated two-qubit amplitude damping channel \(\mathcal{C}_{AD}\) (\(\eta=0.1\), \(\mu=0.2\)) was applied to the system (see Eq. (A14) in Appendix A for a definition of the channel). For \(n=3\) qubit systems instead, we tested the reconstruction performance with states subject to the composite channel \(\mathcal{X}_{p}\otimes\mathcal{Z}_{p}\otimes\mathcal{Y}_{p}\). In this configuration, each qubit experiences a distinct quantum channel, namely a bit-flip, phase-flip, and bit-phase-flip channel respectively, with a common noise parameter of \(p=0.2\). The results reported in Table 2 again reveal a successful reconstruction of the ideal density matrices through the use of a relatively simple feed-forward neural network, and thus confirm the effectiveness of the proposed method. As already mentioned, we stress again that to ensure the production of pure quantum states as output, a normalization layer has been incorporated. Specifically, by appropriately rescaling the output of the normalisation layer in (18), the norm of the output Bloch vectors is constrained to \(\sqrt{3}\) for two-qubit states, while it is set to \(\sqrt{7}\) [45] for three-qubit states. Comparing the cardinality of the training set \(|\mathcal{T}|\) used for the simulations in Tab. 1 and Tab. 2, we see that more samples in the training dataset are generally needed to ensure a good reconstruction for larger system sizes, as one would expect. A discussion of the impact of the available information on the reconstruction performance is the topic of the next section. In Fig. 2 we report the evolution of the MSE (7) and infidelity (11) loss functions during training for the case of \(\mathcal{Z}_{p}\otimes\mathcal{X}_{p}\) applied to a two-qubit system. As is clear from the picture, the optimisation of both cost functions is straightforward and, interestingly, they follow a similar minimisation behaviour, even though there is no trivial relation between the two, unlike what happens for single-qubit states, see Eq. (13). We conclude by noticing that, taking into account that an increase in the number of qubits leads to a proportional increase in the dataset cardinality and the neural network complexity, the reconstruction approach proposed in this work can be straightforwardly applied to any \(n\)-qubit quantum state. \begin{table} \begin{tabular}{c c c} Channel & ATF (MSE) & ATF (INF) \\ \hline \multicolumn{3}{c}{\(|\mathcal{T}|=30\) Two-qubit States} \\ \hline \(\mathcal{Z}_{p}\otimes\mathcal{I}\) (\(p=0.2\)) & 0.994 & 0.993 \\ \(\mathcal{Z}_{p}\otimes\mathcal{Z}_{p}\) (\(p=0.2\)) & 0.994 & 0.995 \\ \(\mathcal{Z}_{p}\otimes\mathcal{X}_{p}\) (\(p=0.2\)) & 0.994 & 0.995 \\ \(\mathcal{C}_{\text{AD}}\) (\(\eta=0.1\), \(\mu=0.2\)) & 0.995 & 0.999 \\ \hline \multicolumn{3}{c}{\(|\mathcal{T}|=900\) Three-qubit States} \\ \hline \(\mathcal{X}_{p}\otimes\mathcal{Z}_{p}\otimes\mathcal{Y}_{p}\) (\(p=0.2\)) & 0.999 & 0.999 \\ \end{tabular} \end{table} Table 2: Results of pure multi-qubit state reconstruction using the MSE and the infidelity as loss functions. For two-qubit states, we studied scenarios involving: a phase-flip channel on the first qubit \(\mathcal{Z}_{p}\otimes\mathcal{I}\), a phase-flip channel on both qubits \(\mathcal{Z}_{p}\otimes\mathcal{Z}_{p}\), phase-flip and bit-flip channels respectively on the first and second qubit \(\mathcal{Z}_{p}\otimes\mathcal{X}_{p}\), and correlated amplitude damping \(\mathcal{C}_{\text{AD}}(\eta,\,\mu)\). For three-qubit states, we considered the scenario characterized by bit-, phase-, and bit-phase-flip channels applied distinctly to the three qubits, \(\mathcal{X}_{p}\otimes\mathcal{Z}_{p}\otimes\mathcal{Y}_{p}\). 
\begin{table} \begin{tabular}{c c c} Channel & ATF (MSE) & ATF (INF) \\ \hline \multicolumn{3}{c}{\(|\mathcal{T}|=30\) Pure States} \\ \hline \(\mathcal{X}_{p}\) (\(p=0.2\)) & 0.998 & 0.997 \\ \(\mathcal{Z}_{p}\) (\(p=0.2\)) & 0.997 & 0.998 \\ \(\mathcal{Y}_{p}\) (\(p=0.2\)) & 0.998 & 0.998 \\ \(\mathcal{P}_{p}\) (\(p_{0}=0.7,p_{1,2,3}=0.1\)) & 0.998 & 0.998 \\ \(\mathcal{D}_{p}\) (\(p=0.3\)) & 0.996 & 0.997 \\ \(\mathcal{A}_{p\gamma}\) (\(p=0.5,\gamma=0.3\)) & 0.998 & 0.997 \\ \hline \multicolumn{3}{c}{\(|\mathcal{T}|=30\) Mixed States} \\ \hline \(\mathcal{Z}_{p}\) (\(p=0.2\)) & 0.999 & 0.999 \\ \end{tabular} \end{table} Table 1: Reconstruction of single-qubit quantum states. Training with both the MSE (7) and the infidelity (11) as loss functions yields good average test fidelities (ATF) at the end of the optimisation process. Results are reported for different noisy channels (bit-flip \(\mathcal{X}_{p}\), phase-flip \(\mathcal{Z}_{p}\), bit-phase-flip \(\mathcal{Y}_{p}\), general Pauli \(\mathcal{P}_{p}\), depolarizing \(\mathcal{D}_{p}\), and general amplitude damping channels \(\mathcal{A}_{p\gamma}\)), and for both pure and mixed initial states. Figure 2: Optimisation process of the neural network to reconstruct a two-qubit state under application of the noisy channel \(\mathcal{Z}_{p}\otimes\mathcal{X}_{p}\), using the MSE (7) and the infidelity (11) as loss functions. Both metrics display a similar minimisation behaviour. #### Impact of the dataset size on the reconstruction performance So far we have focused on assessing the reconstruction performance of the neural network assuming that enough data is available; here we instead analyse how the performance depends on the size of the training set. In Fig. 3 we report the average test fidelity obtained at the end of training for neural networks optimised using training sets of different cardinality. In panel 3a we show the data for the reconstruction of single-qubit states undergoing a phase-flip channel \(\mathcal{Z}_{0.2}\), and in panel 3b the reconstruction of two-qubit states undergoing an uncorrelated two-qubit phase-flip channel \(\mathcal{Z}_{0.2}\otimes\mathcal{Z}_{0.2}\). In both cases, and for both the considered loss functions, we observe that the use of a larger training set yields better reconstruction performance until a plateau is reached; importantly, satisfactory results can be achieved even with a limited number of samples. ### Classification of noisy channels The application of neural networks to quantum information processing can be extended beyond quantum state reconstruction to the classification of quantum channels. In particular, in this section we show how a neural network can be trained to discriminate between noisy channels based on the effect they have on input states, a scenario which is graphically depicted in Fig. 1. Our exploration encompasses a series of classification scenarios, including binary and multi-class classification. In such classification problems, each data item in the training set is constructed by taking as input the noiseless Bloch vector \(\mathbf{r}_{m}\) appended to the noisy one \(\mathbf{\tilde{r}}_{m}\), and as output an integer label \(y_{m}\) encoding which type of error was applied to \(\mathbf{r}_{m}\) to obtain \(\mathbf{\tilde{r}}_{m}\). 
Formally, given a classification task with \(C\) possible classes, the training dataset is then defined as \[\mathcal{T}_{IN}=\left\{\left([\mathbf{\tilde{r}}_{m},\mathbf{r}_{m}],y_{m}\right)\right\}_{m=1}^{M}, \tag{19}\] where \([\mathbf{\tilde{r}}_{m},\mathbf{r}_{m}]\in\mathbb{R}^{2\times 3}\) is the input to the neural network, and \(y_{m}\in\{1,\ldots,C\}\) is the desired output. We refer to this training dataset as \(\mathcal{T}_{IN}\), where the subscript indicates that we use as input the extended vector obtained by merging the _ideal_ and _noisy_ Bloch vectors. Furthermore, we consider the more complex scenario where every training data point comprises only the noisy vector \(\mathbf{\tilde{r}}_{m}\), accompanied by its associated noise label \(y_{m}\). As the neural network is now provided with less information, this scenario presents a higher level of complexity and, as will become clear from the following results, it generally necessitates a larger corpus of data points. We refer to this type of training dataset as \(\mathcal{T}_{N}\), where the subscript indicates that we use as input exclusively the noisy Bloch vector. As is standard for classification tasks in machine learning, in both scenarios we use a _one-hot encoding_ of the labels and train the neural networks using the categorical cross-entropy (15) as the loss function [41], and then measure the final performance with the accuracy metric (16); a construction of such datasets is sketched at the end of this subsection. Binary classification.--We consider the binary classification problem (\(C=2\)) of discriminating single-qubit states subject either to a phase-flip channel \(\mathcal{Z}_{p}\) or to an amplitude damping channel \(\mathcal{A}_{p\gamma}\). The training set is generated by sampling a number \(|\mathcal{T}|\) of uniformly random single-qubit states and evolving half of them with the phase-flip channel and the remaining half with the amplitude damping one. The classification accuracies at the end of the training procedure are reported in Table 3. Remarkably, with a training set containing \(|\mathcal{T}_{IN}|=300\) samples, our model exhibits very good classification performance, reaching perfect accuracy \(\text{ACC}=1\). In this case, we noticed that even with a reduced dataset comprising just a few dozen samples the model is able to achieve almost perfect accuracy, even though the learning process becomes perceptibly less stable. On the other hand, with the noisy training set \(\mathcal{T}_{N}\) with \(|\mathcal{T}_{N}|=300\) samples, the best accuracy obtained was \(\text{ACC}=0.92\). As expected, since the noisy dataset \(\mathcal{T}_{N}\) contains only information about the noisy Bloch vectors (and a label for the noisy channel that created them), more data is needed to reach good classification performance. Indeed, higher accuracies can then be obtained by using larger training datasets: for example, an accuracy of \(\text{ACC}=0.98\) can be achieved with a training set containing \(800\) samples. Note that when using dataset \(\mathcal{T}_{N}\), learning the relation between noisy Bloch vectors belonging to the same channel implicitly translates into reconstructing the shapes of the deformed Bloch spheres generated by the two channels, which are reported in Fig. 4. 
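As anticipated above, the following Python sketch (our own construction, not the paper's code) assembles a \(\mathcal{T}_{IN}\)-type dataset for the binary task, with one-hot labels suitable for the categorical cross-entropy of Eq. (15). For brevity we use the standard two-operator amplitude-damping Kraus pair; the paper's generalized channel \(\mathcal{A}_{p\gamma}\) of Appendix A has four operators.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch(rho):
    return np.array([np.trace(rho @ P).real for P in (X, Y, Z)])

def random_pure():
    psi = np.random.randn(2) + 1j * np.random.randn(2)
    psi /= np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def apply_channel(rho, kraus):
    return sum(E @ rho @ E.conj().T for E in kraus)

p, gamma = 0.2, 0.3
phase_flip = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * Z]
amp_damp = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
            np.array([[0, np.sqrt(gamma)], [0, 0]])]

inputs, labels = [], []
for label, channel in enumerate([phase_flip, amp_damp]):
    for _ in range(150):                          # 300 samples in total
        rho = random_pure()
        noisy = apply_channel(rho, channel)
        inputs.append(np.concatenate([bloch(noisy), bloch(rho)]))  # T_IN input
        labels.append(np.eye(2)[label])           # one-hot label
inputs, labels = np.array(inputs), np.array(labels)
```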
\begin{table} \begin{tabular}{c c c} \hline \hline Dataset cardinality & \multicolumn{2}{c}{Accuracy (on test set \(\widehat{\mathcal{T}}\))} \\ \hline \multicolumn{3}{c}{Binary Classification} \\ & \(\mathcal{Z}_{p}\) vs \(\mathcal{A}_{p\gamma}\) & \\ \hline \(|\mathcal{T}_{IN}|=300\) & & 1 (\(|\widehat{\mathcal{T}}|=100\)) \\ \(|\mathcal{T}_{N}|=300\) & & 0.92 (\(|\widehat{\mathcal{T}}|=100\)) \\ \hline \multicolumn{3}{c}{Ternary Classification} \\ & \(\mathcal{Z}_{p}\) vs \(\mathcal{A}_{p\gamma}\) vs \(\mathcal{D}_{p}\) \\ \hline \(|\mathcal{T}_{IN}|=960\) & & 1 (\(|\widehat{\mathcal{T}}|=120\)) \\ \hline \hline \end{tabular} \end{table} Table 3: Results for the quantum channel classification tasks, with the corresponding accuracies (16) obtained at the end of training and evaluated on test sets \(\widehat{\mathcal{T}}\). We studied the binary classification task of distinguishing the channels \(\mathcal{Z}_{p}\) (\(p=0.2\)) vs. \(\mathcal{A}_{p\gamma}\) (\(p=0.5,\gamma=0.3\)), in both variations “IN” and “N” of the training dataset (19), and the three-class classification problem for the channels \(\mathcal{Z}_{p}\) (\(p=0.2\)) vs. \(\mathcal{A}_{p\gamma}\) (\(p=0.5,\gamma=0.3\)) vs. \(\mathcal{D}_{p}\) (\(p=0.3\)) in the variant “IN”. However, as is clear from the graphical representation, there is a nontrivial intersection between the two Bloch spheres, which implies that certain samples could reasonably be assigned to either class. The imperfect accuracy, and the general decrease in classification accuracy for the noise-only training dataset \(\mathcal{T}_{N}\) shown in Table 3, can then be explained by this inherent ambiguity in the dataset, which makes the classification task more difficult --if not impossible-- to solve exactly. Multi-class classification.--As a straightforward extension of the previous analysis, we also report results for the case of a multi-class classification task, where the network is asked to classify states generated by three different channels (phase-flip, amplitude damping and depolarizing), using a dataset of type \(\mathcal{T}_{IN}\), that is, containing both ideal and noisy Bloch vectors. We find again that the network is able to perfectly classify all the states, reaching a perfect final accuracy \(\text{ACC}=1\) on a test set, as reported in Table 3. ## IV Conclusion In conclusion, our research underscores the remarkable effectiveness of deep neural networks in quantum information processing tasks, specifically in reconstructing and classifying quantum states undergoing unknown noisy evolutions. This study, as exemplified in our results, showcases the successful recovery of ideal (generalized) Bloch vectors with fidelities exceeding \(0.99\), even for quantum states involving up to three qubits, under different correlated and uncorrelated noisy channels, and using both classical (mean squared error) and quantum-inspired (infidelity) loss functions for training. Furthermore, our investigation demonstrates the versatility of our neural network approach in classification problems, adeptly handling a wide range of noise patterns and consistently achieving remarkable classification accuracy across all test samples. Notably, in the context of discriminating between phase-flip and amplitude damping channels, our model achieves an outstanding classification accuracy of \(98\%\), highlighting its remarkable capacity to discern the relationships between states affected by similar noise sources, even when presented with the noisy vectors alone. 
As we look ahead, an intriguing avenue for further exploration lies in examining the intricate connections between various fidelity measures [46] used as training loss functions and their impact on the resulting test fidelities. This pursuit aims to identify the fidelity metric best suited to the specific characteristics of the problem at hand. Such investigations promise to advance our understanding of quantum information processing and open new horizons for practical applications in quantum technology.

Figure 4: Deformed Bloch spheres obtained by applying a phase-flip channel \(\mathcal{Z}_{p}(p=0.2)\) (light blue), and a generalised amplitude damping channel \(\mathcal{A}_{p\gamma}(p=0.5,\,\gamma=0.3)\), to a set of uniformly distributed pure states. Note the non-trivial intersection between the two ellipsoids.

Figure 3: Average Test Fidelity (ATF) obtained at the end of training when optimising the neural networks with training sets of different cardinality. For each cardinality, we repeat the training process 5 times using different initialisations of the parameters and different training data, and report the mean value and the standard deviation of the resulting ATFs. (a) Reconstruction of single-qubit states undergoing a phase-flip channel \(\mathcal{Z}_{p}(p=0.2)\). (b) Reconstruction of two-qubit states undergoing the uncorrelated phase-flip channel \(\mathcal{Z}_{p}\otimes\mathcal{Z}_{p}(p=0.2)\).

## V Acknowledgements

A.R.M. acknowledges support from the PNRR MUR Project PE0000023-NQSTI. C.M. acknowledges support from the National Research Centre for HPC, Big Data and Quantum Computing, PNRR MUR Project CN0000013-ICSC, and from the EU H2020 QuantERA ERA-NET Cofund in Quantum Technologies project QuICHE.

## Appendix A Noise Channels

In this appendix we present the quantum noise channels used in the simulations to generate the datasets containing noisy Bloch vectors. Bit-flip Channel.--The bit-flip channel flips the qubit state from \(\ket{0}\) to \(\ket{1}\) and vice versa with probability \(p\). Given the operator-sum representation in Eq. (2), its Kraus operators are \[E_{0}=\sqrt{1-p}\,\mathbb{I},\quad E_{1}=\sqrt{p}\,X,\quad X=\begin{pmatrix}0 &1\\ 1&0\end{pmatrix}. \tag{30}\] The deformation that this noise induces on the Bloch sphere is simple to describe: the states on the \(\hat{x}\) axis are left unchanged, while the \(\hat{y}\)-\(\hat{z}\) plane is uniformly contracted by a factor \(1-2p\). Phase-flip channel.--This channel changes the sign of the component associated with the element \(\ket{1}\) of the computational basis of the qubit state. The channel is represented by the operation elements \[E_{0}=\sqrt{1-p}\,\mathbb{I},\quad E_{1}=\sqrt{p}\,Z,\quad Z=\begin{pmatrix} 1&0\\ 0&-1\end{pmatrix}. \tag{31}\] On the Bloch sphere, the \(\hat{x}\)-\(\hat{y}\) plane is contracted by a factor \(1-2p\), while the states on the \(\hat{z}\) axis are left untouched. Bit-phase-flip channel.--This channel is a combination of a bit-flip and a phase-flip channel. Recalling that \(Y=iXZ\), the Kraus operators of this channel are \[E_{0}=\sqrt{1-p}\,\mathbb{I},\quad E_{1}=\sqrt{p}\,Y,\quad Y=\begin{pmatrix} 0&-i\\ i&0\end{pmatrix}. \tag{32}\] This noise acts on the Bloch sphere by leaving the states on the \(\hat{y}\) axis unchanged and contracting the \(\hat{x}\)-\(\hat{z}\) plane by a factor \(1-2p\). General Pauli channel.--The general Pauli channel is a combination of the bit-, phase- and bit-phase-flip channels, each with its own intensity along the corresponding axis [6].
In this case, the operators are \[E_{0}=\sqrt{p_{0}}\,\mathbb{I},\qquad E_{1}=\sqrt{p_{1}}\,X, \tag{33}\] \[E_{2}=\sqrt{p_{2}}\,Y,\qquad E_{3}=\sqrt{p_{3}}\,Z, \tag{34}\] with \(p_{1}\), \(p_{2}\) and \(p_{3}\) the probabilities of each error, such that \(p_{0}+p_{1}+p_{2}+p_{3}=1\). Depolarizing Channel.--The depolarizing channel acts on a quantum state by leaving it untouched with probability \(1-p\), or replacing it with the completely mixed state \(\mathbb{I}/2\) with probability \(p\). Its Kraus operators are \[E_{0}=\sqrt{1-\frac{3p}{4}}\,\mathbb{I},\qquad E_{1}=\frac{\sqrt{p}}{2}X, \tag{35}\] \[E_{2}=\frac{\sqrt{p}}{2}Y,\qquad E_{3}=\frac{\sqrt{p}}{2}Z\,. \tag{36}\] In this case the entire Bloch sphere is uniformly contracted by the factor \(1-p\). The channel can be generalized for \(d\)-dimensional (\(d=2^{n}\) for \(n\) qubits) quantum systems as \[\mathcal{E}(\rho)=(1-p)\rho+p\frac{\mathbb{I}_{d}}{d}\,. \tag{37}\] Generalized amplitude damping channel.--This channel can be used to describe the energy dissipation of a quantum system into an environment at finite temperature [30], and is described by the Kraus operators \[E_{0}=\sqrt{p}\begin{pmatrix}1&0\\ 0&\sqrt{1-\gamma}\end{pmatrix}, \tag{38}\] \[E_{1}=\sqrt{p}\begin{pmatrix}0&\sqrt{\gamma}\\ 0&0\end{pmatrix}, \tag{39}\] \[E_{2}=\sqrt{1-p}\begin{pmatrix}\sqrt{1-\gamma}&0\\ 0&1\end{pmatrix}, \tag{40}\] \[E_{3}=\sqrt{1-p}\begin{pmatrix}0&0\\ \sqrt{\gamma}&0\end{pmatrix}, \tag{41}\] and admits the stationary state \[\rho_{\infty}=\begin{pmatrix}p&0\\ 0&1-p\end{pmatrix}. \tag{42}\] This channel deforms the Bloch sphere, with \(\gamma\) regulating the shrinking of each component of the Bloch vector and \(p\) determining the fixed point. Correlated amplitude damping channel.--This is a two-qubit noise channel defined as the convex combination of two channels \(\mathcal{N}_{0}\) and \(\mathcal{N}_{1}\) [47] \[\mathcal{N}(\rho)=(1-\mu)\mathcal{N}_{0}(\rho)+\mu\mathcal{N}_{1}(\rho), \tag{43}\] where \(\mu\in[0,1]\) is a correlation parameter between the qubits, and \(\mathcal{N}_{0}(\rho)=\sum_{j=0}^{3}A_{j}\rho A_{j}^{\dagger}\) and \(\mathcal{N}_{1}(\rho)=\sum_{j=0}^{1}B_{j}\rho B_{j}^{\dagger}\) are noisy channels defined by the Kraus operators \(A_{0}=E_{0}\otimes E_{0}\), \(A_{1}=E_{0}\otimes E_{1}\), \(A_{2}=E_{1}\otimes E_{0}\) and \(A_{3}=E_{1}\otimes E_{1}\), with \(E_{0}\) and \(E_{1}\) the amplitude damping operators of Eqs. (38) and (39) with \(p=1\), and \[B_{0}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&\sqrt{\gamma}\end{pmatrix}, \tag{44}\] \[B_{1}=\begin{pmatrix}0&0&0&\sqrt{1-\gamma}\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{pmatrix}. \tag{45}\]

## Appendix B Fidelity of single-qubit states

As proved in [40], the fidelity formula (8) can be considerably simplified for one-qubit states and expressed in terms of Bloch vectors. For a Hermitian \(2\times 2\) matrix \(M\) with positive eigenvalues, it holds that \[\left(\operatorname{Tr}\sqrt{M}\right)^{2}=\operatorname{Tr}[M]+2(\det M)^{1 /2}, \tag{46}\] and with \(M=\sqrt{\rho}\,\sigma\,\sqrt{\rho}\) as in the definition of the fidelity (8), one obtains \[F(\rho,\sigma)=\operatorname{Tr}[\rho\sigma]+2(\det\rho\det\sigma)^{1/2}. \tag{47}\]
Finally, expressing the density matrices in terms of their Bloch vectors, namely \(\rho=(\mathbb{I}+\mathbf{r}\cdot\mathbf{P})/2\) and \(\sigma=(\mathbb{I}+\mathbf{s}\cdot\mathbf{P})/2\) with \(\mathbf{P}=(X,Y,Z)\), an explicit calculation yields \[F(\mathbf{r},\mathbf{s})=\frac{1}{2}\Big{(}1+\mathbf{r}\cdot\mathbf{s}+\sqrt{(1-\|\mathbf{r}\|^{2 })(1-\|\mathbf{s}\|^{2})}\Big{)}\,. \tag{48}\] With this expression at hand, it is then possible to prove that for pure single-qubit states minimizing the mean squared error loss is equivalent to minimizing the infidelity. In fact, consider the Euclidean distance between the Bloch vectors \[\ell(\mathbf{r},\mathbf{s})=\left\|\mathbf{r}-\mathbf{s}\right\|^{2}=\left\|\mathbf{r}\right\|^{2 }+\left\|\mathbf{s}\right\|^{2}-2\,\mathbf{r}\cdot\mathbf{s}\,, \tag{49}\] and the infidelity \[I(\mathbf{r},\mathbf{s})=1-\frac{1}{2}\Big{(}1+\mathbf{r}\cdot\mathbf{s}+\sqrt{(1-\|\mathbf{r}\|^ {2})(1-\|\mathbf{s}\|^{2})}\Big{)}\,. \tag{50}\] Then, since the loss functions are computed with Bloch vectors \(\mathbf{r}\) and \(\mathbf{s}\) of pure states (the ideal noise-free Bloch vector and the one predicted by the neural network, see (4)), it holds that \(\|\mathbf{r}\|=\|\mathbf{s}\|=1\), thus obtaining \[I(\mathbf{r},\mathbf{s})=1-\frac{1}{2}(1+\mathbf{r}\cdot\mathbf{s})\quad\Longrightarrow\quad \mathbf{r}\cdot\mathbf{s}=-2\,I(\mathbf{r},\mathbf{s})+1\,, \tag{51}\] and substituting into Eq. (49) one finally arrives at \[\ell(\mathbf{r},\mathbf{s})=4\,I(\mathbf{r},\mathbf{s}). \tag{52}\]
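As a quick numerical sanity check of this identity, the following sketch (our own illustration, again in Python/NumPy) verifies \(\ell=4I\) on random pure-state Bloch vectors using Eq. (48):

```python
import numpy as np

def fidelity(r, s):
    """Single-qubit fidelity in Bloch-vector form, Eq. (48)."""
    return 0.5 * (1 + r @ s
                  + np.sqrt(max((1 - r @ r) * (1 - s @ s), 0.0)))

rng = np.random.default_rng(0)
for _ in range(5):
    r = rng.normal(size=3); r /= np.linalg.norm(r)   # pure state: |r| = 1
    s = rng.normal(size=3); s /= np.linalg.norm(s)
    mse = np.sum((r - s) ** 2)       # Euclidean loss l(r, s), Eq. (49)
    inf = 1 - fidelity(r, s)         # infidelity I(r, s), Eq. (50)
    assert np.isclose(mse, 4 * inf)  # Eq. (52)
```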
2309.03395
The Quiet Eye Phenomenon in Minimally Invasive Surgery
In this paper, we report our discovery of a gaze behavior called Quiet Eye (QE) in minimally invasive surgery. The QE behavior has been extensively studied in sports training and has been associated with a higher level of expertise in multiple sports. We investigated the QE behavior in two independently collected data sets of surgeons performing tasks in a sinus surgery setting and a robotic surgery setting, respectively. Our results show that the QE behavior is more likely to occur in successful task executions and in the performances of surgeons with a high level of expertise. These results open the door to using the QE behavior in both training and skill assessment in minimally invasive surgery.
Alaa Eldin Abdelaal, Rachelle Van Rumpt, Sayem Nazmuz Zaman, Irene Tong, Anthony Jarc, Gary L. Gallia, Masaru Ishii, Gregory D. Hager, Septimiu E. Salcudean
2023-09-06T23:07:58Z
http://arxiv.org/abs/2309.03395v1
# The Quiet Eye Phenomenon in Minimally Invasive Surgery

###### Abstract

In this paper, we report our discovery of a gaze behavior called Quiet Eye (QE) in minimally invasive surgery. The QE behavior has been extensively studied in sports training and has been associated with a higher level of expertise in multiple sports. We investigated the QE behavior in two independently collected data sets of surgeons performing tasks in a sinus surgery setting and a robotic surgery setting, respectively. Our results show that the QE behavior is more likely to occur in successful task executions and in the performances of surgeons with a high level of expertise. These results open the door to using the QE behavior in both training and skill assessment in minimally invasive surgery. Eye Gaze Tracking, Minimally Invasive Surgery, Robot-Assisted Surgery, Surgical Skill Assessment, Surgical Training

## I Introduction

Studying how experts see their workspace when performing a task is one of the main research topics in motor skill learning. In particular, researchers in this area are interested in investigating where experts look in their workspace, when they look at critical cues, for how long, and how this behavior is coordinated with the body/hand motions of these experts. Answering these questions can lead to interesting insights that can then be used to train novices. Indeed, gaze-based training methods have shown great success in many fields including sports [1], law enforcement [2] and the military [3]. The general idea of these methods is to train novices to adopt the gaze behaviors of experts. The essence of these methods is to teach novices where and when to look at locations of interest, and for how long. The underlying argument is that the motor skills of novices can be improved if they adopt the "visual behavior" of experts. One promising gaze-based training method is called quiet eye (QE) training. The term "quiet eye" refers to the final fixation right before a performer carries out a critical movement to execute a task [4]. The period of this final fixation is called the QE period. Research in sports shows that a longer QE period is associated with a higher level of expertise [5]. This QE behavior has been discovered in experts' performances of targeting tasks in sports such as basketball free throws [6] and golf putting [7]. One theory explaining such behavior states that this long QE period is the period needed to change the neural connections in the performer's brain to absorb and process the important visual information and plan the critical movement of the task before actually carrying it out [8]. As many surgical tasks can be considered targeting tasks, QE training has the potential to be applied in surgical training, inspired by the strong evidence of its effectiveness in other domains. Studies in this area have been limited and focused solely on open surgery settings. In this work, we investigate the QE phenomenon in the context of minimally invasive surgery (MIS). In particular, the contributions of this work are:

* To investigate the existence of QE behavior in surgeons' performances in two minimally invasive surgery (MIS) settings. Our investigation is conducted on two independently collected data sets, one in a sinus surgery setting and the other in a robotic surgery setting using the da Vinci Surgical robot.
* To observe and report the changes in the QE behavior in successful and unsuccessful tasks.
* To observe and report the changes in the QE behavior between highly experienced and less experienced surgeons.

These contributions provide the foundation to understand the QE behavior in MIS and open the door for further research on the effect of applying QE training in this setting.

## II Related Work

Researchers in sports training have extensively studied the eye gaze of experts as a proxy for how they allocate their attention. To achieve that, they follow what is called the vision-in-action paradigm for data collection [6], where they collect the gaze data of experts as they actually perform the task. In addition, they also use cameras to capture synchronized views of the body of the experts as well as views of what they see in their workspace. The analysis of the collected data has been used to find interesting gaze patterns in experts and to study how such patterns differ from those of novices. Such an approach has been successfully applied in many sports, such as basketball [9], pistol shooting [10], badminton [11] and golf [7]. A phenomenon called Quiet Eye (QE) was discovered by Joan Vickers when analyzing the gaze behavior of experts in basketball [6], and it was later identified and associated with experts' performances in many other sports, as in [5, 12, 13]. QE is defined as the final fixation (or tracking gaze) right before a performer carries out a critical movement to execute a task [4]. The period of this behavior is called the QE period. Thus far, the QE phenomenon has been confirmed in three main task categories: targeting tasks [14] (such as basketball free throws and billiards), interceptive timing tasks [13] (such as responding to the serve in volleyball and table tennis) and tactical tasks [15] (such as soccer defence and offence). Adopting the QE behavior of experts has been shown to be effective in training novices in these three task categories. QE behavior can be identified by looking for the following measurable characteristics in the collected data [16]:

* It is a fixation (or tracking gaze) on a specific target in the workspace, which strictly means that the performer's gaze remains within 3 degrees of visual angle (or less) of the same target for at least 100 ms.
* The QE period begins (the QE onset) right before the critical movement of the task. For instance, in a targeting task, the QE period begins right before its aiming part.
* The QE period ends (the QE offset) when the fixation (or tracking gaze) deviates from the location/object of interest by more than 3 degrees of visual angle for longer than 100 ms.

The successful use of the QE in sports encouraged its exploration in surgical training. For example, Causer et al. [17] conducted a user study to assess the efficiency of QE training in an open surgery setting, on a knot tying task. In their study, the trainees were divided into two groups, a QE training group and a conventional training group. Their results show that the QE group performs better than the conventional one in terms of completion time and efficiency of the hand movements during the task. The results of another study [18] also show that the QE training group's performance is more robust to high levels of pressure and anxiety compared with the other group. Through analyzing the gaze behavior, the authors argue that QE training directs the trainees' gaze towards a single point of focus, which increases the effectiveness of their attention allocation to the most important information for the task at hand.
This frees up some mental processing resources that enable QE trainees to overcome high anxiety levels. To the best of our knowledge, the QE phenomenon has neither been identified in surgeons' gaze behavior nor applied to surgical training in MIS settings. We argue that many surgical tasks fall under the targeting task category. Findings from sports research suggest that there is a very high chance of finding the QE phenomenon in surgeons' performances of tasks in this category. In addition, according to Newell's Constraints-Led Model [19], a change in the environment/setting is a significant factor to consider before studying (and later on applying) a motor skill learning phenomenon in other environments, which justifies the need to study the QE phenomenon in MIS settings (in contrast to the open surgery settings of previous work). In this work, we take the first step towards filling this gap by investigating the existence of the QE behavior in surgeons' performances in two different MIS settings. In addition, we report our observations regarding how the QE behavior changes with successful/unsuccessful task executions, and with the surgeons' experience level. We believe that by doing so, this work provides a deeper understanding of the QE behavior in MIS, which, in turn, can be beneficial in applications such as training and skill assessment in MIS.

## III Methods and Hypotheses

### _Hypotheses_

In this work we have two hypotheses:

* Hypothesis 1: QE behavior occurs more frequently in successful tasks than in unsuccessful ones. In addition, QE duration is longer in successful tasks than in unsuccessful ones.
* Hypothesis 2: QE behavior occurs more frequently in performances by highly experienced surgeons than by less experienced ones. In addition, QE duration is longer in the former case than in the latter.

We verify the first hypothesis using a data set in a sinus surgery setting. The second hypothesis is verified using another data set in a robotic surgery setting using the da Vinci surgical robot.

### _Sinus Surgery Data Set_

The sinus surgery data set is a data set of a series of targeting tasks on a cadaver model [20]. The data set was collected from 20 surgeons at the annual resident endoscopic skull-base surgery course at Johns Hopkins University. The subjects were asked to visualize and touch nine endonasal structures using an endoscope and pointer. The experimental setup used for the data collection is shown in Fig. 1. An electromagnetic tracker was used to track the positions of the pointer and endoscope, and an eye gaze tracker was used to collect the subject's eye gaze position on the video monitor every 0.02 seconds. Videos of what subjects saw were recorded directly using a video capture card situated in a data collection computer. Temporal synchronization between all the collected data units was achieved using the time stamps assigned to each of these units. Each video in the data set represents a targeting task by one surgeon, aiming for a predefined target inside the sinus anatomy. Each video was assessed by three independent assessors. Each assessor gave each video a task score of either zero, if the task was not successful, or one, if it was successful. This makes the data set a suitable candidate for investigating the QE behavior and how it changes with the successful execution of these targeting tasks. The video annotations were carried out manually based on the three measurable QE characteristics mentioned in Section II.
In particular, we annotate the target location in the first frame of each video and track it (along with its surrounding region) in the subsequent frames using OpenCV. This gives us the 2D pixel location of the target in each frame. The raw data contain the 2D pixel location of the surgeon's eye gaze in each frame and its corresponding time stamp. Using this information, we compute all the fixations on the target location, within three degrees of visual angle, that last for at least 100 ms. The three degrees of visual angle are represented on each frame as a circle whose center is the target pixel location. The radius of this circle is computed by projecting the three degrees of visual angle from the surgeon's eye onto the screen. To compute this radius in pixels, we use the distance between the surgeon's eye and the screen, the size of the screen, and its resolution. All these values are known from the data collection process. A fixation is detected as long as the surgeon's eye gaze location remains within this circle for at least 100 ms. Using time stamps, we also compute the duration of each detected fixation. To find out whether QE occurs or not, we first manually identify the frame number right before the start of the aiming part of the targeting task, and its associated time stamp. Using this information, we conclude that QE occurs in a video if and only if a fixation starts at or before this time stamp and lasts beyond the start of the aiming part of the task. The QE duration is the same as the duration of this fixation. The data set was annotated by two independent annotators, and the intraclass correlation between them is 1 for QE detection and 0.9903 for QE duration. The data set has a total of 302 videos, and we excluded 28 of them because they were difficult to annotate due to the unstable visual feedback caused by frequent camera movement. For the remaining 274 videos, we annotated each video so that we can learn two things:

* Whether QE exists in the video. In this case, the video was assigned a binary value (e.g., one if QE occurs in the video and zero otherwise).
* The QE duration in milliseconds, only if QE exists in the video.

A sample annotated frame from one of the videos where QE occurred is shown in Fig. 2. To verify the first hypothesis, we only considered the videos with unanimous task scores (either zero or one) from the three assessors. This represents a total of 187 videos from the entire data set.

### _Robotic Surgery Data Set_

The robotic surgery data set was collected from seven RAS surgeons operating on the da Vinci Si Surgical System [21]. The surgeons performed porcine and cadaver exercises. During each exercise, the estimated eye gaze for each surgeon's left and right eye was collected using an EyeTech VT2 mini eye tracker (EyeTech Digital Systems, Mesa, AZ) placed within the da Vinci Si system. The surgeon's point of gaze was then calculated as the mean of the left and right eye locations. In addition, the instrument kinematics, system events, and endoscope videos were collected directly from the da Vinci platform, and were synchronized with the eye gaze data using the associated time stamps. Since we are interested in targeting tasks, we annotated the videos of all the suturing exercises in the data set. This is because the suturing task has a clear targeting component whenever the surgeon inserts the needle into the tissue. The annotation was performed manually, similar to the annotation of the sinus surgery data set.
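For reference, the fixation- and QE-detection logic described above for the sinus data set can be summarized in a short sketch (Python with NumPy; the array layout, argument names, and the scanning loop are our own illustrative reconstruction, not the authors' code):

```python
import numpy as np

def radius_pixels(eye_to_screen_mm, screen_width_mm, screen_width_px,
                  angle_deg=3.0):
    """Project a visual angle from the surgeon's eye onto the screen,
    returning the corresponding radius in pixels."""
    radius_mm = eye_to_screen_mm * np.tan(np.radians(angle_deg))
    return radius_mm * screen_width_px / screen_width_mm

def detect_qe(gaze_xy, target_xy, t, aim_onset, radius_px, min_dur=0.1):
    """Return (qe_found, qe_duration). A QE fixation keeps the gaze
    within `radius_px` of the tracked target for at least `min_dur`
    seconds, starting at or before `aim_onset` and lasting beyond it."""
    on_target = np.linalg.norm(gaze_xy - target_xy, axis=1) <= radius_px
    start = None
    for i, hit in enumerate(np.append(on_target, False)):
        if hit and start is None:
            start = i                      # fixation begins
        elif not hit and start is not None:
            dur = t[i - 1] - t[start]      # fixation ends at frame i - 1
            if dur >= min_dur and t[start] <= aim_onset < t[i - 1]:
                return True, dur
            start = None
    return False, 0.0
```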
A sample annotated frame from one of the videos where QE occurs is shown in Fig. 3. In total, we had 33 videos of suturing exercises performed by two surgeons. The first surgeon has approximately 20 years of surgical experience and the second surgeon has less than 5 years of surgical experience. The remaining surgeons in the data set did not perform any suturing exercises. The results from these videos are used to verify our second hypothesis.

### _Performance Metrics_

We considered two metrics in our investigation: (i) the percentage of the videos that have QE for each considered task score/experience level, and (ii) the QE duration in these videos.

Fig. 1: The experimental setup used for the sinus surgery data collection [20]. This figure is used with permission.

Fig. 2: A sample annotated frame from one of the videos where QE occurred in the sinus surgery data set. The blue point represents the target location, the red point represents the surgeon’s eye gaze, and the green circle represents the area within 3 degrees of visual angle around the target (according to the definition of QE). A QE green flag appears on the top right of the view when the QE occurred in the video.

## IV Results

### _Sinus Surgery Data Set Results_

The results of the sinus surgery data set, as shown in Fig. 4 and Fig. 5, are used to verify our first hypothesis. The results show that the percentage of videos with QE is on average 22% higher among videos with a unanimous task score of one than among videos with a task score of zero, as shown in Fig. 4. Furthermore, the QE duration in videos with a task score of one is on average 2.3 times longer than the QE duration in videos with a task score of zero, as shown in Fig. 5. We performed hypothesis testing using a Mann-Whitney U test on the above results. Both results were statistically significant, with \(p<0.05\) in the two cases. These results verify our first hypothesis. They show that the QE behavior occurs more frequently in successful tasks than in unsuccessful ones. In addition, the results show that QE duration is longer in successful tasks than in unsuccessful ones.

### _Robotic Surgery Data Set Results_

The results of the robotic surgery data set are used to verify our second hypothesis. The results show that 40% of the more experienced surgeon's videos have QE, compared with 0% of the less experienced surgeon's videos. The average QE duration in the more experienced surgeon's videos is 867 \(\pm\) 480 ms, compared with zero ms in the less experienced surgeon's videos, since QE was not found in any of them. We conducted two hypothesis tests on the above results. A Chi-square test (with \(\alpha\) = 0.05) was used to assess whether the QE phenomenon occurs more often with the more experienced surgeon than with the less experienced one. The test was statistically significant with \(\chi^{2}(1)\) = 6.864 and \(p<\) 0.05. Moreover, a Mann-Whitney U test was used to assess whether QE duration is longer in the more experienced surgeon's case than in the less experienced one. This test was also statistically significant with \(p<\) 0.05. These results verify our second hypothesis.

## V Discussion

In this work, we report the discovery of the QE behavior in two independently collected data sets in two MIS settings. Our results show that the QE behavior is more likely to be observed in successful targeting tasks (compared with unsuccessful ones) and in targeting tasks performed by more experienced surgeons (compared with less experienced ones).
In addition, the results show that QE duration is significantly longer in these two cases. Moreover, our results show that the QE duration is potentially a stronger distinguishing factor between different performances (based on the success of the performance or the experience level of the surgeon) than the existence of the QE behavior on its own. These results are consistent with previous findings in the sports training literature [16]. One limitation of this study, though, is the small number of surgeons whose data were used in our investigation. We used data from 20 surgeons in the first data set and from two surgeons in the second data set. More data sets, with more surgeons and more targeting tasks, are needed to reach more conclusive results and to investigate the QE behavior in a variety of other targeting tasks in MIS. Nevertheless, the potential impact of this study spans the areas of training, skill assessment and human-robot interaction in MIS. For training, this work opens the door to testing QE training [22] in MIS settings, with the evidence it provides that QE behavior does exist in this setting. Since the characteristics of the QE behavior are well defined, we think that it can be relatively easy to integrate QE training into existing surgical training methods. Furthermore, QE training has been shown to be robust against high levels of anxiety and pressure [14], which can make it even more valuable for surgical training in MIS.

Fig. 3: A sample annotated frame from one of the videos where QE occurred in the robotic surgery data set. The blue point represents the estimated gaze for the surgeon’s right eye and the green point represents the estimated gaze for the surgeon’s left eye. The blue circle represents the area within 3 degrees of visual angle around the target (according to the definition of QE). A QE green flag appears on the top middle of the view when the QE occurred in the video.

Fig. 4: The percentage of the videos that have QE for each considered task score.

Fig. 5: The QE duration for each considered task score.

The results of our study can be useful for surgical skill assessment in MIS. Learning how the QE behavior changes as the surgeon's skill/experience level increases can open the door to monitoring these changes and using them as a tool to classify the skill levels of practicing surgeons based on their QE behavior. Furthermore, it would be interesting to investigate the effect of adding QE-related features to existing machine learning based skill assessment systems [23] on their classification performance. Moreover, the tasks we considered in this study provide a clue to the family of surgical tasks where using the QE behavior can be useful for skill assessment. Our study can lead to some important implications for the design of human-robot interfaces in RAS. In particular, our study highlights the importance of collecting and investigating eye gaze data in this setting. This provides additional evidence of the potential value of integrating eye gaze tracking in new and existing RAS platforms and simulation environments [24][25].
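For completeness, the two significance tests reported above can be reproduced along these lines (Python with SciPy; the arrays and counts below are placeholders with the rough shape of the data, not the study's actual values):

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(1)
# Placeholder QE durations (ms) per video; substitute the annotations.
dur_success = rng.normal(900, 300, 60).clip(min=0)
dur_fail = rng.normal(400, 300, 40).clip(min=0)
u_stat, p_dur = mannwhitneyu(dur_success, dur_fail,
                             alternative='greater')

# 2x2 contingency table: rows = surgeon, columns = (QE, no QE) counts.
table = np.array([[8, 12],    # more experienced surgeon (placeholder)
                  [0, 13]])   # less experienced surgeon (placeholder)
chi2, p_qe, dof, _ = chi2_contingency(table, correction=False)
```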
In two targeting tasks, we observed that the QE behavior occurs more frequently and for longer durations in successful task performances (compared with unsuccessful ones) and in performances by experienced surgeons (compared with less experienced ones). These observations are consistent with similar findings in sports training, where the QE behavior has been successfully used to improve the motor skills of novice trainees. This work opens up several new lines of research in surgical training and skill assessment in MIS. For instance, more investigations are needed to identify more tasks where the QE behavior would occur. These tasks can then be used to test the effect of integrating QE training into the traditional training approaches of these tasks. The integration of the QE training has the potential to improve the motor skill learning aspects of these tasks and can potentially make the traditional training methods more robust to situations of high anxiety and pressure. Furthermore, it would be interesting to include the well-defined characteristics of the QE behavior as novel features in skill assessment machine learning models and evaluate how they can improve their performance. ## Acknowledgment We would like to thank Maram Sakr and Tim Powers for their assistance with the statistical analysis of the results of this work.
2309.13186
Deep Learning with Photonic Neural Cellular Automata
Rapid advancements in deep learning over the past decade have fueled an insatiable demand for efficient and scalable hardware. Photonics offers a promising solution by leveraging the unique properties of light. However, conventional neural network architectures, which typically require dense programmable connections, pose several practical challenges for photonic realizations. To overcome these limitations, we propose and experimentally demonstrate Photonic Neural Cellular Automata (PNCA) for photonic deep learning with sparse connectivity. PNCA harnesses the speed and interconnectivity of photonics, as well as the self-organizing nature of cellular automata through local interactions to achieve robust, reliable, and efficient processing. We utilize linear light interference and parametric nonlinear optics for all-optical computations in a time-multiplexed photonic network to experimentally perform self-organized image classification. We demonstrate binary classification of images in the fashion-MNIST dataset using as few as 3 programmable photonic parameters, achieving an experimental accuracy of 98.0% with the ability to also recognize out-of-distribution data. The proposed PNCA approach can be adapted to a wide range of existing photonic hardware and provides a compelling alternative to conventional photonic neural networks by maximizing the advantages of light-based computing whilst mitigating their practical challenges. Our results showcase the potential of PNCA in advancing photonic deep learning and highlight a path for next-generation photonic computers.
Gordon H. Y. Li, Christian R. Leefmans, James Williams, Robert M. Gray, Midya Parto, Alireza Marandi
2023-09-22T21:24:50Z
http://arxiv.org/abs/2309.13186v1
# Deep Learning with Photonic Neural Cellular Automata

###### Abstract

Rapid advancements in deep learning over the past decade have fueled an insatiable demand for efficient and scalable hardware. Photonics offers a promising solution by leveraging the unique properties of light. However, conventional neural network architectures, which typically require dense programmable connections, pose several practical challenges for photonic realizations. To overcome these limitations, we propose and experimentally demonstrate Photonic Neural Cellular Automata (PNCA) for photonic deep learning with sparse connectivity. PNCA harnesses the speed and interconnectivity of photonics, as well as the self-organizing nature of cellular automata through local interactions to achieve robust, reliable, and efficient processing. We utilize linear light interference and parametric nonlinear optics for all-optical computations in a time-multiplexed photonic network to experimentally perform self-organized image classification. We demonstrate binary classification of images in the fashion-MNIST dataset using as few as 3 programmable photonic parameters, achieving an experimental accuracy of 98.0% with the ability to also recognize out-of-distribution data. The proposed PNCA approach can be adapted to a wide range of existing photonic hardware and provides a compelling alternative to conventional photonic neural networks by maximizing the advantages of light-based computing whilst mitigating their practical challenges. Our results showcase the potential of PNCA in advancing photonic deep learning and highlight a path for next-generation photonic computers.

Deep learning models have demonstrated remarkable capabilities in numerous domains, ranging from computer vision to natural language processing, scientific discovery, and generative art [1, 2, 3, 4]. However, as the complexity and scale of these models continue to surge, a critical challenge emerges: the need for efficient and scalable hardware solutions to handle the ever-increasing computational demands. For example, recent trends show that the compute requirements for deep learning models are doubling approximately every 5-6 months [5]. This is far outpacing improvements in conventional digital electronic computers, which has spurred the use of application-specific hardware accelerators such as Graphics Processing Units and Tensor Processing Units [6]. In this context, the convergence of deep learning with photonics has emerged as a promising frontier, poised to redefine the landscape of neural network computation. By leveraging the distinct characteristics of light, photonic hardware can unlock unprecedented processing speeds, parallelism, and energy efficiencies that surpass the capabilities of traditional electronic architectures [7; 8]. To enable this new paradigm of photonic deep learning, much of the focus so far has been on developing the fundamental devices needed for crucial neural network operations. Indeed, there have been impressive demonstrations of photonics for linear operations such as matrix multiplication and convolutions [9; 10; 11], as well as nonlinear activation functions such as the rectified linear unit [12; 13; 14]. These photonic building blocks are now comparable to or surpass their electronic counterparts in certain important computing metrics. However, there has been comparatively less attention devoted towards studying system-level architectures for photonic neural networks (PNNs).
This is crucial since photonics and electronics operate in entirely different regimes [15]. The computational advantages of photonic building blocks can quickly diminish when used to implement conventional neural network architectures that were optimized for digital electronics [16]. Advancing photonic deep learning towards end-to-end and scalable photonic systems requires properly considering neural network architectures that can benefit from implementation with specific photonic hardware. For example, one important hurdle is that photonic devices are typically analog and noisy, requiring low effective bit-resolution to operate efficiently [17]. This is detrimental for conventional deep learning architectures such as Multi-layer Perceptrons (MLPs) and Convolutional Neural Networks (CNNs), which have so far been mainstays for PNNs, because they are inherently susceptible to noise and small perturbations [18; 19]. We expect that this problem will become increasingly pronounced as PNNs grow beyond small-scale demonstrations. Moreover, MLPs and CNNs require densely-connected layers with large numbers of parameters, which are challenging to realize in typical photonic platforms, and current demonstrations of PNNs contain relatively small numbers of programmable parameters. Finally, PNNs are usually operated with fixed weights that cannot be rapidly updated in real-time. This constraint makes it difficult for PNNs to efficiently implement the complex structures of modern deep learning models, and also poses reliability concerns when generalizing to out-of-distribution data. To overcome these apparent disparities between photonics capabilities and conventional neural network architectures, we propose and experimentally demonstrate a novel type of PNN based on Neural Cellular Automata (NCA) [20]. Cellular automata (CA) are computational models composed of a lattice of cells with states that follow an update rule, which defines how the state of a cell evolves over time based on the states of its neighboring cells [21; 22]. Inspired by biological systems, the local interactions between cells governed by the update rule give rise to complex phenomena and emergent patterns at the global scale [23]. Unlike conventional human-designed update rules, NCA harness the complex dynamics of cellular automata by using modern deep learning techniques to learn the local update rules needed to perform specific tasks such as regenerating patterns [20], self-classifying images [24], and texture generation [25]. Our Photonic Neural Cellular Automata (PNCA) combines the advantages of photonic hardware with NCA to achieve self-organized image classification. The PNCA leverages a completely different methodology for computer vision tasks compared to previous PNNs based on MLPs or CNNs. Crucially, this enables noise-robust and fault-tolerant processing, as well as convenient measures of uncertainty for identifying anomalies and out-of-distribution data. Furthermore, PNCA achieves parameter-efficient solutions since the photonic hardware can operate with fixed weights and only needs to encode the parameters for local update rules instead of global network weights. The proposed PNCA approach can be generalized to suit a wide variety of existing photonic hardware, which can potentially greatly increase the functionality of PNNs and addresses several important challenges facing photonic deep learning.

## Results

### PNCA architecture

The key concepts of the general PNCA architecture are shown in Fig. 1, which can be adapted to suit a wide range of different photonic hardware platforms.
For computer vision tasks, each pixel in the input image corresponds to a cell in the PNCA. Cells are designated as either _alive_ or _dead_ through an alive masking procedure. This can be done by setting a threshold for the initial pixel value, below which the cell is considered dead. Only alive cells are actively updated by the PNCA, whereas dead cells can influence the updates of alive cells but are otherwise quiescent. The cell state updates according to a rule that depends on the cells in a local \(m\)-cell neighborhood. For example, Fig. 1a shows the prototypical Moore neighborhood composed of the cell and the 8 cells that surround it. Other types of local cell neighborhoods are also possible. In the PNCA, the optical field corresponding to each cell is split into \(m\) optical paths to define the desired \(m\)-cell neighborhood for the local update rule. The local update rule for the PNCA is encoded by the photonic hardware, which accepts the \(m\) inputs given by the \(m\)-cell neighborhood and outputs the next cell state. Although Fig. 1a only shows each cell state having a single channel, this can also be extended to multiple channels (e.g. RGB color image channels) by increasing the inputs and outputs accordingly. In general, the programmable photonic hardware contains feed-forward layers with linear operations, which can be implemented through meshes of Mach-Zehnder interferometers [9], photonic cross-bar arrays [10], micro-ring resonator weight banks [26], or other linear photonic devices [11; 14]. In addition, there must also be layers performing nonlinear activations, such as photonic devices based on optoelectronic measurement-feedback [27; 14] or nonlinear-optical crystals [12; 13]. This kind of feed-forward programmable photonic hardware specifying a single input-output function has been used in previous PNNs. However, for PNCA, the key difference is that the photonic hardware only needs sparse connections and enough parameters to encode the local update rule as shown in Fig. 1b, which is usually orders of magnitude fewer than the number of parameters needed to encode global network weights in fully-connected layers for MLPs or CNNs.

Figure 1: **Photonic Neural Cellular Automata.** (a) A single iteration of PNCA consists of alive cells that are encoded into an optical signal, \(m\) optical paths encoding a local \(m\)-cell neighborhood and perception vector for each cell, updating the state of each cell according to a local update rule represented by a neural network, and alive cell masking. (b) Photonic hardware encodes the local update rule, which includes linear operations implemented physically via light interference, and nonlinear operations implemented physically via nonlinear optics. (c) Backpropagation-through-time algorithm for training PNCA to learn a local update rule, which upon repeated iteration causes self-organization of cells for an image classification task. A cell-wise \(L_{2}\) loss is used for optimizing the photonic neural network parameters.

In other words, the parameter-efficient PNCA architecture can enable existing PNN hardware with relatively few parameters to perform larger and more complicated tasks than otherwise possible in conventional neural network architectures. Furthermore, this local update rule can more easily tolerate the use of fixed weights after training since every cell follows the same update rule.
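To make the parameter-count comparison concrete, consider a flattened \(28\times 28\) image (a minimal sketch in Python; the layer sizes are our own illustrative choice):

```python
# A dense (fully-connected) layer mapping 784 cells to 784 outputs,
# versus one shared local rule mapping a 9-cell Moore neighborhood
# to the next cell state.
n_cells = 28 * 28
dense_params = n_cells * n_cells + n_cells  # weights + biases = 615,440
local_rule_params = 9 + 1                   # shared weights + bias = 10
print(dense_params / local_rule_params)     # ~6e4: orders of magnitude fewer
```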
Finally, the output is recurrently fed back to update the cell state for the next iteration. This can be accomplished by photodetection and electro-optic feedback or by using all-optical feedback lines. Unlike conventional CA with discrete cell states [21], NCA use cell states that are continuous-valued [20], which allows the model to be end-to-end differentiable and compatible with gradient-descent based learning algorithms. In this work, we consider the task of self-organized image classification. The target output after the final iteration is to have every alive cell in the state that corresponds to the class label for the input image. The alive cells must form this collective agreement through only the local interactions defined by repeated iteration of the update rule. This can be interpreted as a kind of recurrent neural network, which can be trained using the standard backpropagation-through-time algorithm [28] as shown in Fig. 1c. Using a cell-wise \(L_{2}\) loss was found to give better performance compared to the cross-entropy loss of labels, which is more commonly used for image classification tasks [20]. The training can either be done _in situ_ by performing the forward pass in PNCA to more accurately capture the physics, or completely digitally by simulating the photonic hardware with noise [29; 30].

### Experimental realization of PNCA

We used a time-multiplexed scheme and commercially-available optical-fiber components to experimentally demonstrate proof-of-concept for a simple version of PNCA as shown in Fig. 2. Each cell state is given by the amplitude of a laser light pulse generated by a mode-locked laser with a fixed repetition rate, such that the cells are inputted one at a time in a flattened 1D lattice by raster scanning across the 2D image. In this way, each cell occupies a time-bin site in a synthetic temporal dimension [31]. Therefore, distances in a real-space lattice correspond to time-differences in the temporal dimension, and cells at different lattice sites can be made to interact by using temporal delay lines. The pulse amplitude/phase representing the cell state is set using an electro-optic modulator (EOM), and the pulse is then split between 3 temporal optical delay lines with relative delays \(T_{1}\) and \(T_{2}\) chosen to enforce the desired 3-cell local neighborhood shown in Fig. 2b. In this simple example, the local update rule is encoded by a single perceptron neuron shown in Fig. 2c, which consists of a linear dot product followed by a nonlinear activation function. The dot product is achieved by coherent interference of the optical delay lines, each equipped with a variable optical attenuator (VOA) to program the desired weights. The nonlinear activation is performed using depleted second harmonic generation in a reverse-proton exchange periodically-poled lithium niobate (PPLN) waveguide [32].
This produces a sigmoid-like function as shown in Fig. 2d. Thus, the computations in the local update rule are achieved all-optically. Overall, the local update rule contains only 3 programmable parameters, but can still perform complex tasks. Finally, the cell state is measured using a photodetector, stored on a field-programmable gate array (FPGA), and electro-optically re-injected for the next iteration after alive-cell masking.

Figure 2: **Experimental setup for PNCA.** (a) Schematic of the experimental setup. Pulses of light produced by a mode-locked laser pass through an electro-optic modulator (EOM) and are split into optical fiber delay lines (blue lines) with relative delays \(T_{1}\) and \(T_{2}\). Linear dot product weights are programmed by tuning the variable optical attenuator (VOA) in each delay line. Nonlinear activation using a periodically-poled lithium niobate (PPLN) waveguide is performed following the coherent interference of light pulses, with the resultant amplitudes stored on a field-programmable gate array (FPGA) and reinjected (black lines) to drive the input EOM for the next iteration. (b) Local 3-cell neighborhood enforced by relative delays \(T_{1}\) and \(T_{2}\). (c) The local update rule is encoded by a single perceptron with 3 programmable parameters. (d) PPLN nonlinear activation function. (e) Cells representing pixels of an image are encoded by the amplitude of light pulses with repetition period \(T_{R}\) in a synthetic temporal dimension. For example, pulses can be coupled using optical delay lines with \(T_{1}=+1T_{R}\) and \(T_{2}=+28T_{R}\) to implement the local 3-cell neighborhood shown in (b) for fashion-MNIST images.

A crucial aspect of photonic hardware is that it is analog and noisy. A key advantage of the PNCA architecture is that it is fault-tolerant and robust to noise due to the self-organizing nature of the cell states. We rigorously characterized the noise and errors in our PNCA implementation, which arise from three main operations: (1) the input cell state, due to thermal and electronic noise in the EOM, (2) the linear dot product, due to phase noise and imperfect pulse temporal overlap in the coherent interference, and (3) the nonlinear activation, due to thermal noise and photorefractive effects in the PPLN. We characterized these errors using 200 test images. The expected vs. measured amplitudes of alive cells in these images are shown in Fig. 3. The mean and standard deviation of the errors (expected amplitude \(-\) measured amplitude) achieved in our system are typical of photonic hardware, and we show that this is tolerable for the PNCA architecture due to its noise-robustness.

Figure 3: **Measurements of noise and errors in PNCA operations.** Expected vs. measured light amplitude for (a) input cell state by EOM, (b) linear dot product by coherent interference and (c) nonlinear activation by PPLN. Each scatter point represents an alive cell from the 200 images tested. The top right insets show the histograms for the error (expected amplitude \(-\) measured amplitude) in each case and the bottom right shows the mean and standard deviation, respectively.

### Self-organized image classification

We trained the experimental PNCA to perform binary image classification using the fashion-MNIST dataset consisting of \(28\times 28\) pixel gray-scale images of clothing items [33]. For example, Fig. 4a shows how the PNCA can classify images of sneakers and trousers. The alive cell masking is performed by designating any pixel with initial value \(\alpha>0.1\) as an alive cell, and all other pixels as dead cells with a constant value of zero. Each input image was iterated for \(t=21\) time steps in the PNCA, which was sufficient for the cells to reach an approximate global agreement. The alive cells self-organize to have state values close to zero (unity) for images of sneakers (trousers). Finally, the predicted image label is obtained in postprocessing by performing global average pooling of the final alive cell states followed by softmax classification.
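To make the update-and-readout loop concrete, here is a minimal end-to-end simulation sketch (Python/NumPy; the logistic function stands in for the measured sigmoid-like PPLN response of Fig. 2d, and the sign conventions of the delays are our own assumption):

```python
import numpy as np

def classify(img, w, n_iter=21, thresh=0.1):
    """Simulate the time-multiplexed PNCA of Fig. 2 on a 28x28 image.
    `w` holds the 3 programmable weights set by the VOAs."""
    cells = img.flatten().astype(float)   # raster scan: 1D pulse train
    alive = cells > thresh                # alive-cell masking
    cells = np.where(alive, cells, 0.0)
    for _ in range(n_iter):
        # 3-cell neighborhood from delays T1 = +1 and T2 = +28 pulse slots.
        neigh = np.stack([cells, np.roll(cells, -1), np.roll(cells, -28)])
        cells = 1.0 / (1.0 + np.exp(-w @ neigh))  # perceptron update rule
        cells = np.where(alive, cells, 0.0)       # dead cells stay at zero
    avg = cells[alive].mean()  # global average pooling over alive cells
    return ('trouser' if avg > 0.5 else 'sneaker'), avg
```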
In this case, a global average closer to zero (unity) indicates that the predicted image label is sneaker (trouser). The training procedure was performed digitally using an idealized simulation model of the PNCA that had no noise. The confusion matrix for the idealized model is shown in Fig. 4b, which yielded a final test accuracy of 99.4%. Next, the trained model parameters were frozen, and the model was tested again but with additional simulated Gaussian noise for each operation, matching the noise characteristics shown in Fig. 3. The confusion matrix for the noisy model is shown in Fig. 4c, which has a slightly lower final test accuracy of 97.7%. The trained model parameters were implemented in the experimental PNCA by appropriately tuning the VOAs. The confusion matrix for the experimental result is shown in Fig. 4d and has a final test accuracy of 98.0%. This experimental test accuracy is in close agreement with the simulated noisy model, which shows that the PNCA operates as desired and can successfully tolerate the use of noisy photonic hardware. No special training or noise regularization techniques were used for the PNCA. We emphasize that the robustness emerges through the local interactions between cells forming a global agreement. Therefore, even if one cell fails, the collective state can still persist. This is in contrast to conventional neural network architectures such as MLPs and CNNs, which are highly susceptible to noise and adversarial attacks [18].

Figure 4: **Experimental results for fashion-MNIST binary image classification.** (a) Information flow for the PNCA trained to classify images of sneakers and trousers, beginning with alive cell masking, followed by \(t=21\) iterations of the trained PNCA. The predicted image label is obtained by global average pooling and softmax classification of the final self-organized alive cells. Confusion matrices for (b) idealized simulation model, (c) noisy simulation model, and (d) experiment.

Figure 5: **Recognizing out-of-distribution data.** Histograms of alive cell averages for (a) initial condition and (b) final iteration of test images of sneakers (blue), trousers (red), and out-of-distribution bags (yellow).

### Out-of-distribution data

Furthermore, conventional neural networks are prone to making overconfident predictions and failing to generalize to out-of-distribution data [34]. This lack of reliability is especially problematic for photonic deep learning, in which the weights are fixed and online learning is not practical. The NCA approach addresses this shortcoming by using the average state value of all alive cells as a built-in measure of uncertainty. We experimentally demonstrated this for PNCA by using the same network as before that was trained on images of sneakers and trousers. Now, we test the PNCA on images of bags, which is an out-of-distribution class that the PNCA was not exposed to during training. The distributions for the alive cell averages of the sneaker, trouser, and bag classes are shown for the initial test images in Fig. 5a. It clearly shows that the initial distributions for alive cell averages closely overlap between all classes. Upon iteration of the local update rule that was learned during training, the PNCA is able to successfully separate the distributions for sneaker and trouser, with final alive cell averages of 0.1743 and 0.8742, respectively, as shown in Fig. 5b. In this case, the difference between the final alive cell average and zero/one indicates the uncertainty in the prediction.
However, the final alive cell average for out-of-distribution test images of bags is 0.5682, which is close to 0.5 and means that the cells did not reach a global agreement. This shows that the PNCA can use the alive cell average as a proxy for uncertainty and to detect out-of-distribution data. Unlike for conventional neural network architectures, neither special training/inference techniques nor additional training data is required. ## Discussion In summary, we have proposed and experimentally demonstrated a novel approach to photonic deep learning based on PNCA. It addresses several system-level challenges in previous photonic neural networks, and can serve as a general architecture for a wide variety of photonic hardware platforms. In particular, we showed that PNCA enables noise-robust fault-tolerant image classification through local interactions between cells with an inherent measure of uncertainty based on alive cell averages. Moreover, the efficient PNCA model encoding requires orders of magnitude fewer parameters compared to MLPs or CNNs. Our single perceptron neuron rule encoding can be straightforwardly extended to a shallow neural network with a greater number of programmable parameters to perform more complicated and larger-scale computer vision tasks. For example, we focused on binary image classification for simplicity, but it is possible to perform image classification with more classes if the number of output neuron channels is increased. Furthermore, we only used standard backpropagation training and did not employ any special training or regularization techniques. More advanced noise-aware or physics-aware training schemes [29] are also compatible with the PNCA architecture and may further increase performance. We used a time-multiplexed photonic network based on a synthetic temporal dimension, however, it is also possible to use an analogous PNCA approach based on other synthetic dimensions such as frequency dimensions [35]. Our work therefore highlights a clear path to advancing photonic deep learning based on PNCA and paves the way for next-generation photonic computers.
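Returning to the uncertainty readout discussed above, the out-of-distribution flag amounts to a one-line rule on the alive-cell average (a sketch only; the margin of 0.2 is our own illustrative threshold, consistent with the reported averages of 0.1743, 0.8742 and 0.5682):

```python
def predict_with_uncertainty(avg, margin=0.2):
    """Flag out-of-distribution inputs when the alive-cell average stays
    near 0.5, i.e. the cells failed to reach a global agreement."""
    if abs(avg - 0.5) < margin:
        return 'out-of-distribution'
    return 'trouser' if avg > 0.5 else 'sneaker'
```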
2309.14831
Comparison of $\bar{\hbox{N}}\hbox{N}$ optical models
We compare the strong part of the $\bar{\hbox{N}}\hbox{N}$ interaction obtained by the Nijmegen partial wave analysis and the results of some of the most popular $\bar{\hbox{N}}\hbox{N}$ optical potentials in configuration space. We have found severe discrepancies in most of the partial waves, especially above $p_{Lab}$=400 MeV/c where the partial wave analysis displays a resonant-like structure in the $^{31}$S$_0$ and $^{33}$P$_0$ waves. Some theoretical difficulties in interpreting this behaviour in terms of dynamical resonances are pointed out and an alternative explanation is suggested. A much better stability is observed in the low energy parameters, apart from some discrepancies due to the presence of near-threshold quasi-bound states in particular waves. Large deviations have also been found between the corresponding potentials at short and medium-range ($r\gtrsim 1$ fm) distances.
Jaume Carbonell, Guillaume Hupin, Sławomir Wycech
2023-09-26T10:54:17Z
http://arxiv.org/abs/2309.14831v1
# Comparison of \(\bar{\rm N}\)N optical models

###### Abstract

We compare the strong part of the \(\bar{\rm N}\)N interaction obtained by the Nijmegen partial wave analysis and the results of some of the most popular \(\bar{\rm N}\)N optical potentials in configuration space. We have found severe discrepancies in most of the partial waves, especially above \(p_{Lab}\)=400 MeV/c where the partial wave analysis displays a resonant-like structure in the \({}^{31}\)S\({}_{0}\) and \({}^{33}\)P\({}_{0}\) waves. Some theoretical difficulties in interpreting this behaviour in terms of dynamical resonances are pointed out and an alternative explanation is suggested. A much better stability is observed in the low energy parameters, apart from some discrepancies due to the presence of near-threshold quasi-bound states in particular waves. Large deviations have also been found between the corresponding potentials at short and medium-range (\(r\gtrsim 1\) fm) distances.

Keywords: Low energy antiproton physics, Optical models, Phase shifts, PW analysis, Protonium

+
Footnote †: journal: Eur. Phys. J. A

## 1 Introduction

In comparison with the Nucleon-Nucleon (NN) case, the Antinucleon-Nucleon (\(\bar{\rm N}\)N) interaction remains poorly known. The reason for that is, on one hand, the relatively limited number of \(\bar{\rm N}\)N low-energy data and, on the other hand, the intrinsic difficulty of theoretically describing a system which has hundreds of open annihilation many-body channels at rest. See for instance [1; 2; 3] and references therein. A rigorous theoretical approach to this physical problem in its full complexity is far beyond our possibilities, both formal and computational, and it will probably remain so for a long time. There are, however, phenomenological ways to model the low energy \(\bar{\rm N}\)N physics and obtain a reasonable description of the existing experimental data, provided one renounces describing each particular annihilation channel and introduces a relatively large number of parameters. A successful example is provided by the \(\bar{\rm N}\)N optical models, which date from the early days of antiproton physics [4], and whose main properties have been recently reviewed in [5; 6]. The first accurate description of the \(\bar{\rm p}\)p experimental results was provided by the energy-dependent partial wave analysis of the Nijmegen group [7] (NPWA), which presents an almost perfect description (\(\chi^{2}\approx 1\)) of the existing data below \(p_{Lab}<\)925 MeV/c, although after applying severe rejection criteria. In this analysis, the long- and medium-range \(\bar{\rm N}\)N interaction is given by a one- plus two-pion exchange potential (\(V_{\pi}\equiv V_{1\pi}+V_{2\pi}\)) at N2LO chiral EFT detailed in [8]. This potential is matched at \(b\)=1.2 fm to state- and energy-dependent complex boundary conditions which parametrise the short range physics, in particular the very complex annihilation dynamics. This is realised by fixing, for each energy \(E\) and partial wave \(\alpha=\{T,L,S,J\}\), the logarithmic derivative of the corresponding wave function at \(r\)=\(b\): \(P_{\alpha}(E)\)=\(b\big{(}\Psi^{\prime}_{\alpha}/\Psi_{\alpha}\big{)}_{r=b}\). The parameters of the NPWA, i.e. the low energy constants (LEC's) of \(V_{\pi}\) (\(c_{1},c_{3},c_{4}\)) and the complex boundary conditions \(P_{\alpha}\), were determined in [7] by a fit to the \({\rm p\bar{p}}\) scattering data.
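To make the role of these boundary conditions concrete, the following minimal sketch (which, unlike the NPWA, neglects the potential tail beyond \(b\) and uses a made-up complex boundary value) shows how a complex \(P_{\alpha}\) generates a complex S-wave phase shift:

```python
import numpy as np

HBARC = 197.327  # MeV fm

def swave_phase_from_boundary(P, q, b=1.2):
    """Toy S-wave version of the boundary-condition parametrisation.

    Assumes the potential is negligible beyond r = b (the NPWA instead keeps
    the pion tail there), so the exterior solution is u(r) = sin(qr + delta).
    A complex boundary value P = b*(u'/u)|_{r=b} then fixes a complex phase
    shift via q*b*cot(q*b + delta) = P (principal branch).
    """
    z = P / (q * b)
    return np.arctan(1.0 / z) - q * b          # delta = arccot(z) - q*b, radians

# Hypothetical complex boundary value at p_Lab = 200 MeV/c (q = p_Lab / 2)
q = 100.0 / HBARC                              # fm^-1
print(swave_phase_from_boundary(-1.0 - 0.5j, q))
```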
The LEC's found in this way were compatible with previous determinations from \(pp\)[8] and a combined fit of \(pp\) and \(pn\) scattering data [9]. The possibility of performing a PW analysis of the \(\bar{\rm N}\)N data has been questioned [10; 5], as it requires the determination of at least twice as many parameters as in the NN case, while the number of \(\bar{\rm N}\)N partial waves is higher than for NN and the available \(\bar{\rm N}\)N data are orders of magnitude less abundant. For instance, the \(\bar{\rm N}\)N S-matrix for tensor-uncoupled states is no longer determined by a real parameter \(\delta\) as in the unitary case (\(S=e^{2i\delta}\)), but by a complex quantity \(\delta_{C}=\delta_{R}+i\delta_{I}\) whose (positive) imaginary part \(\delta_{I}\) controls the inelastic processes through the parameter \(\eta=\mid S\mid=e^{-2\delta_{I}}\) (\(0<\eta<1\)), thus allowing the same formal expression for the S-matrix (\(S=e^{2i\delta_{C}}\)). This criticism is sound and can eventually raise some questions about the uniqueness of the solution, especially when the inelasticity parameter \(\eta\), and so the S-matrix itself, are very small. There is, however, no doubt that the results presented in [7] provide an excellent description of the selected data set and constitute at least one reliable solution in the domain 100 MeV/c \(<p_{Lab}<\)1000 MeV/c. Once the parameters of the \(\bar{\rm p}\)p PW analysis were determined, the authors of Ref. [7] removed the Coulomb potential and the \(n-p\) mass difference (\(\Delta_{0}\equiv m_{n}-m_{p}\)), and obtained the strong \(\bar{\rm N}\)N phase-shifts in the isospin symmetry, which are in fact the nontrivial and interesting part of the interaction. These results, which can to a large extent be considered model independent, are extremely useful for a critical comparison between the different models, without directly relying on the experimental observables. The latter usually involve contributions of many partial waves and can hide significant disagreements among the different interaction models. The strong \(\bar{\rm N}\)N Nijmegen phase shifts constitute the basis of our further analysis. The strong \(\bar{\rm N}\)N phase-shifts provided by the Nijmegen group were also the starting point to determine the parameters of the most recent \(\bar{\rm N}\)N Julich potential [12]1. This potential is based on the G-parity transform of a previously established chiral EFT NN potential at N3LO [14]. It contains contributions from one- and two-pion exchange and from contact terms with up to four derivatives. The annihilation part is taken into account by introducing imaginary contact terms in each partial wave, regularized by gaussian form factors. The potential is inserted in a relativistic Lippmann-Schwinger equation to obtain the phase-shifts. The low energy constants of the pion-exchange part were taken from pion-nucleon dynamics and the remaining ones, as well as the annihilation constants, were adjusted to reproduce the strong phase shifts and inelasticity parameters of the Nijmegen PWA in the isospin basis. Supplemented with the Coulomb and \(\Delta_{0}\) terms, this model provides an equally good description of the \(\bar{\rm p}\)p data as the NPWA. Furthermore, it has been extended to describe the zero energy protonium results as well as the existing \(\bar{\rm n}\)p data (\(T=1\)).
Footnote 1: It is worth mentioning that, under this denomination, one can include a series of previous works on \(\bar{\rm N}\)N interactions developed since the 90's, based on the G-parity transform of the meson-exchange NN Bonn model [13] and even the N2LO version of the chiral EFT \(\bar{\rm N}\)N potential [11]. For shortness of notation, we will hereafter denote as the Julich \(\bar{\rm N}\)N potential the one described in Ref. [12].

The Julich potential constitutes nowadays the most complete and accurate description of the \(\bar{\rm N}\)N data, albeit at the price of a considerable number of parameters (\(\approx 90\)). However, it has been derived in momentum space, which makes its implementation difficult for the study of more complex systems, in particular the very peripheral and loosely bound hydrogenic orbits of the \(\bar{\rm p}\)-A systems. These Coulomb-like states constitute the cornerstone of the PUMA research project [15], which requires reliable theoretical predictions of the annihilation probabilities from some of them. An application to the simple \(\bar{\rm p}\)-d case has recently been obtained [16] using a simplified (local) form of the Julich potential. It led to significantly different predictions with respect to other existing models, and it is not clear which part of these differences is a genuine prediction of the potential itself or results from the simplifications. On the other hand, the strongly non-local character of the Julich potential makes it difficult to use in configuration space calculations, where the few-nucleon scattering problem can be more easily solved. In view of further applications, but also for the sake of theoretical consistency, it is of the highest interest to examine the predictive power of some of the most popular \(\bar{\rm N}\)N optical models formulated in configuration space, by comparing them among themselves at the level of strong phase shifts as well as with the recent, phase-equivalent, Nijmegen PW and Julich results. A previous comparison devoted to protonium level shifts and scattering lengths was published many years ago [17; 18], but to our knowledge no systematic study has been performed at non-zero energies. Our goal in the present paper is thus to establish a detailed comparison of the Nijmegen PW analysis and Julich results with some selected optical models widely used in the literature and formulated in configuration space. To this aim we will consider the last updated version (2009) of the Paris potential [19], the Dover-Richard models [20; 21] published in 1980-82 and the Kohno-Weise [22] potential formulated in 1986. They represent different degrees of complexity in the theoretical description and, apart from being formulated in configuration space, they have in common that: _(i)_ they make full use of conventional (not EFT) meson exchange theory, _(ii)_ they were constructed before the NPWA [7], and _(iii)_ they were adjusted to a restricted data set. In the end, they provide a less accurate description of the experimental results than the NPWA and the Julich model, but they use a much smaller number of parameters. As we will see in what follows, there exist huge disagreements among the partial wave predictions of the considered optical models, often hidden when considering only integrated cross sections.
These disagreements call for an urgent clarification of the \(\bar{\rm N}\)N interaction at the two-body level, both from the theoretical as well as from the experimental point of view, before attempting a minimal model-independent description of more complex systems, which furthermore involves off-shell properties of the interaction. We will sketch in section 2 the main ingredients of the theoretical \(\bar{\rm N}\)N formalism used in Refs. [7; 12] as well as in our own calculations with the Paris, DR and KW optical models. Section 3 is devoted to comparing the strong phase shifts for the S and P partial waves, the low energy parameters and the S- and P-wave protonium level shifts of these different models. Section 4 contains some concluding remarks.

## 2 The formalism for \(\bar{\rm N}\)N optical models

The strong part of the \(\bar{\rm N}\)N force is derived in the isospin basis (\(V_{T}\)). However, this basis is not adapted to computing low-energy \(\bar{\rm p}\)p scattering processes due to the relevant role of the Coulomb interaction and, to a lesser extent, to the \(n\)-\(p\) mass difference (\(\Delta_{0}\equiv m_{n}-m_{p}\)), which couples, even asymptotically, the isospin states. One uses, instead, the so-called particle basis, where the \(\mid p\bar{p}\rangle\) and \(\mid n\bar{n}\rangle\) states are coupled only by the short range "charge-exchange" potential. By adopting the isospin conventions [23; 17]

\[N=\begin{pmatrix}p\\ n\end{pmatrix}\quad\bar{N}=\begin{pmatrix}-\bar{n}\\ +\bar{p}\end{pmatrix}\equiv\begin{array}{c}|1/2,+1/2\rangle=-|\bar{n}\rangle\\ |1/2,-1/2\rangle=+|\bar{p}\rangle\end{array} \tag{1}\]

the particle basis is expressed in terms of the \(\bar{\rm N}\)N isospin states \(\mid T,T_{3}\rangle\) as

\[\begin{array}{l}|p\bar{p}\rangle=+\frac{1}{\sqrt{2}}\left\{|00\rangle+|10\rangle\right\}\\ |n\bar{n}\rangle=+\frac{1}{\sqrt{2}}\left\{|00\rangle-|10\rangle\right\}\\ |p\bar{n}\rangle=-|1,+1\rangle\\ |\bar{p}n\rangle=+|1,-1\rangle\end{array} \tag{2}\]

The \(\mid p\bar{p}\rangle\) and \(\mid n\bar{n}\rangle\) states can be cast into a single state vector

\[\mid\Psi\rangle=\begin{pmatrix}\Psi_{p\bar{p}}\\ \Psi_{n\bar{n}}\end{pmatrix}\]

which, in the \(\bar{\rm N}\)N models that we will consider, obeys the non-relativistic Schrödinger equation

\[(E-H_{0})\mid\Psi\rangle=\hat{V}\mid\Psi\rangle \tag{3}\]

where \(E\) is the (non-relativistic) \(\bar{\rm p}\)p energy in the center of mass. The potential matrix

\[\hat{V}=\begin{pmatrix}V_{p\bar{p}}&V_{ce}\\ V_{ce}&V_{n\bar{n}}\end{pmatrix} \tag{4}\]

is expressed in terms of the isospin components \(V_{T}\) and the \(\bar{\rm p}\)p Coulomb potential

\[V_{c}(r)=-\frac{\alpha}{r}\]

as

\[V_{p\bar{p}} = V_{N\bar{N}}+V_{c} \tag{5}\]
\[V_{n\bar{n}} = V_{N\bar{N}}+2\Delta_{0} \tag{6}\]
\[2V_{N\bar{N}} = V_{0}+V_{1} \tag{7}\]
\[2V_{ce} = V_{0}-V_{1} \tag{8}\]

The kinetic energy is assumed to be channel-diagonal with the \(p\)-\(n\) averaged mass \(m\)

\[H_{0}=-\frac{\hbar^{2}}{m}\Delta\qquad m=\frac{m_{p}+m_{n}}{2} \tag{9}\]

After performing the PW expansion, the reduced radial wave functions \(u_{i}\) obey a set of \(n_{c}\) coupled differential equations

\[u_{i}^{\prime\prime}+q_{i}^{2}u_{i}-\sum_{j=1}^{n_{c}}v_{ij}u_{j}=0 \tag{10}\]
where \(i,j\) encode the channel indices \(\{\bar{p}p,\bar{n}n\}\) as well as the quantum numbers \(\alpha=\{L,S,J\}\), \(q_{i}\) are the channel momenta in the center of mass and

\[v_{ij}=mV_{ij}\]

We use natural units (\(\hbar=c=1\)) throughout the paper. For the tensor-uncoupled states (\({}^{1}\)S\({}_{0}\), \({}^{1}\)P\({}_{1}\), \({}^{3}\)P\({}_{0}\), ...) \(n_{c}\)=2 and for the tensor-coupled ones (\({}^{3}\)SD\({}_{1}\), \({}^{3}\)PF\({}_{2}\), ...) \(n_{c}\)=4.

Figure 1: Integrated strong \(\bar{\rm N}\)N cross sections – elastic \(\sigma_{e}\) (black), annihilation \(\sigma_{a}\) (red), charge-exchange \(\sigma_{ce}\) (green) and their sum \(\sigma_{t}\) (blue) – as functions of the \(\bar{\rm N}\) laboratory momenta for DR2 (dash-dotted line), KW (dashed line) and Paris 2009 (solid line) optical models. The results of the Nijmegen partial wave analysis [7] are indicated by filled circles.

The relations between the channel momenta are obtained by assuming the same c.m. total energy (\(\sqrt{s}\)), which leads to:

\[\frac{s}{4}=m_{\alpha}^{2}+q_{\alpha}^{2}\]

This gives

\[q_{\bar{p}p}^{2} = mE \tag{11}\]
\[q_{\bar{n}n}^{2} = q_{\bar{p}p}^{2}-2\Delta_{0}\;m \tag{12}\]

We will hereafter denote by \(q\equiv q_{\bar{p}p}\) the center of mass momentum of the \(\bar{\rm p}\)p driving channel. Notice that when using the differential form (10), the \(n-p\) mass difference \(\Delta_{0}\) is already included in the channel momenta and must be removed from the potential (6). In the numerical calculations we used \(m\)= 938.28 MeV and \(\Delta_{0}=m_{n}-m_{p}\)=1.2933 MeV. The \(\bar{\rm n}\)n channel is open at the \(\bar{\rm p}\)p center of mass energy \(E\geq\)2.5866 MeV, i.e. \(q\)=0.2497 fm\({}^{-1}\). The relation with the laboratory momenta is given by

\[p_{Lab}=2q\sqrt{1+\left(\frac{q}{m}\right)^{2}}\approx 2q\]

that is, \(p_{Lab}\)=98.54 MeV/c.
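The channel kinematics above are easy to check numerically; a minimal sketch with the quoted masses reproduces the threshold values:

```python
import numpy as np

HBARC = 197.327   # MeV fm
m  = 938.28       # averaged nucleon mass (MeV)
D0 = 1.2933       # m_n - m_p (MeV)

# nbar-n threshold, eqs. (11)-(12): q^2_{nbar n} = 0  <=>  E = 2 * Delta_0
E = 2 * D0                                    # = 2.5866 MeV
q = np.sqrt(m * E) / HBARC                    # fm^-1
print(q)                                      # ~0.2497 fm^-1, as quoted

q_MeV = q * HBARC
print(2 * q_MeV)                              # p_Lab ~ 2q = 98.54 MeV/c
print(2 * q_MeV * np.sqrt(1 + (q_MeV / m) ** 2))   # exact relation, ~98.7 MeV/c
```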
The strong \(\bar{\rm N}\)N potentials that we have considered in this work take the form

\[V(r)=U(r)+W(r) \tag{13}\]

where the real part \(U\) is a G-parity transform of an NN potential regularised below some cut-off radius \(r_{c}\), and \(W\) is the complex potential (possibly containing also a real part) accounting for the annihilation. The Paris \(\bar{\rm N}\)N model [19; 25; 26; 27] is based on the G-parity transform of the Paris NN potential [28; 29]. It contains one- and two-pion exchange (the latter via dispersion relations), plus \(\omega\) and \(A_{1}\) potentials as part of three-pion exchange. The real part \(U\) has central, spin-spin, spin-orbit, tensor and quadratic spin-orbit terms. The first two are energy-dependent, which results in seven scalar amplitudes for a given isospin \(T\). They are regularized below some distance \(r_{c}\) (\(r_{c}\)=0.84 fm or \(r_{c}\)=1.0 fm) by a cubic polynomial, whose coefficients introduce adjustable parameters. Altogether this gives nine parameters for each isospin. The annihilation potential \(W\), derived in [24], is purely imaginary and has a similar spin structure as the real part. It depends on six parameters for each \(T\). A particularity of the Paris potential is the short range character of \(W\) (\(r_{a}=1/2m_{N}\approx 0.1\) fm). The total number of parameters of the model is \(\approx 30\), which ensures a fairly good description (\(\chi^{2}\)/datum \(\approx\) 5) of most of the existing data, without any selection criteria, including differential cross sections and polarization observables. In the Dover and Richard models, DR1 version [20] and DR2 version [21], \(U\) is taken from a simplified version of the Paris NN potential [28] containing \(\pi\), \(2\pi\) and \(\omega\) exchanges, regularized below \(r_{c}=0.8\) fm. The DR models were adjusted to reproduce some analytic parametrisations of the total, elastic, charge-exchange and annihilation experimental integrated cross sections in the range \(0.4<p_{Lab}<0.9\) GeV/c, with \(\chi^{2}\)/data\(\approx\)0.5 for DR1. In addition to \(r_{c}\), there are only four parameters, which control the annihilation potential \(W\). In the Kohno-Weise model [22], \(U\) is taken from the NN Ueda potential [30] with \(\pi,\rho,\omega,\sigma\) meson contributions, regularized below \(r_{c}\)=1 fm by a \(C^{1}\) matching to a Woods-Saxon potential. As for DR, the parameters come only from \(W\) and are adjusted to reproduce the \(\bar{\rm p}\)p total (\(\sigma_{t}\)), elastic (\(\sigma_{e}\)) and charge exchange (\(\sigma_{ce}\)) cross sections in the region \(200<p_{Lab}<700\) MeV/c. In this way the model provides a good description of the forward \(\bar{\rm p}\)p elastic differential cross sections at \(p_{Lab}\)=400, 500, 600 MeV/c, of the \(\bar{\rm p}\)p elastic differential cross sections at \(p_{Lab}\)=390, 490, 590 MeV/c, as well as of the differential charge-exchange cross sections at 490 and 590 MeV/c. No \(\chi^{2}\) is given in this analysis. In the DR and KW models, the annihilation potential \(W\) is local, energy- and state-independent. It has the common form

\[W(r)=-\frac{W_{0}}{1+e^{\frac{r-R}{a}}} \tag{14}\]

with the parameters given in Table 1.

\begin{table}
\begin{tabular}{l l l l} & DR1 & DR2 & KW \\ \hline W\({}_{0}\) (GeV) & 21+20i & 0.5+0.5i & 1.2i \\ R (fm) & 0 & 0.8 & 0.55 \\ a (fm) & 0.2 & 0.2 & 0.2 \\ \end{tabular}
\end{table}
Table 1: Parameters of the Dover-Richard (DR1 and DR2 versions) and Kohno-Weise (KW) \(\bar{\rm N}\)N optical models.

These three optical models differ by their meson contents, the value of the cut-off radius \(r_{c}\), the regularization procedure, as well as by their annihilation potentials. They generate the very different potentials presented in Appendix A. As an illustrative example, let us consider Figure 15 from this Appendix, corresponding to the real part of the \({}^{11}\)S\({}_{0}\) potentials. They have in common a strong attraction (200-400 MeV at \(r=0.8\) fm) in the T=0 channel, which seems not to be required by the NPWA. On the other hand, the Paris potential displays a strong repulsion below \(r\approx 0.6\) fm as well as a repulsive barrier at \(r\approx 1\) fm that are absent in the other models. Despite that, they provide quite similar results for the integrated elastic (\(\sigma_{e}\)), annihilation (\(\sigma_{a}\)) and charge exchange (\(\sigma_{ce}\)) cross sections. This can be seen in Figure 1, where the integrated strong cross sections of these three models (together with their sum \(\sigma_{t}=\sigma_{e}+\sigma_{a}+\sigma_{ce}\)) are compared to each other as well as to the NPW results [7]. The same agreement was observed in the protonium S-, P- and D-level shifts and widths, as well as for the strong and \(\bar{\rm p}\)p scattering lengths (see Refs. [21; 31; 17; 18]). However, no comparison has been done at the level of phase shifts.

Figure 2: \(\bar{\rm N}\)N \({}^{1}\)S\({}_{0}\) scattering phase shifts (degrees) as functions of the \(\bar{\rm N}\) laboratory momenta and for different optical models. Left panel for the T=0 state (\({}^{11}\)S\({}_{0}\)) and right one for T=1 (\({}^{31}\)S\({}_{0}\)). Solid lines correspond to the real part \(\delta_{R}\) and dashed lines to the (positive) imaginary part \(\delta_{I}\).

Figure 3: \({}^{3}\)SD\({}_{1}\)\(\bar{\rm N}\)N bare phase shifts and inelasticities (upper panel) and mixing parameters (lower panel), as functions of the \(\bar{\rm N}\) laboratory momenta.
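As an aside, the annihilation form factor of eq. (14) with the Table 1 parameters is straightforward to tabulate (a minimal sketch; radii in fm, strengths in GeV):

```python
import numpy as np

def W_annihilation(r, W0, R, a=0.2):
    """Complex annihilation potential of eq. (14): W(r) = -W0 / (1 + exp((r-R)/a))."""
    return -W0 / (1.0 + np.exp((r - R) / a))

# Table 1 parameters: W0 in GeV, R and a in fm
models = {"DR1": (21 + 20j, 0.0), "DR2": (0.5 + 0.5j, 0.8), "KW": (1.2j, 0.55)}
r = np.array([0.0, 0.5, 1.0, 1.5])
for name, (W0, R) in models.items():
    print(name, np.round(W_annihilation(r, W0, R), 3))
```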
For the three considered models, Paris 2009, DR2 and KW, we have computed the S-matrix in the energy range \(0<p_{Lab}<1000\) MeV/c and for each PW state. We have extracted the S-matrix real parameters and compared them with the results of the Nijmegen PW analysis (Tabs. VII, IX and X from Ref. [7]). A comparison with the Julich potential, adjusted to reproduce the former, would be redundant, except for the low energy parameters, which were not given in the NPWA [7] and have been included in our discussion. For the uncoupled states, the \(\bar{\rm N}\)N S-matrix is determined by a complex phase shift \(\delta_{C}=\delta_{R}+i\delta_{I}\) whose (positive) imaginary part \(\delta_{I}\) is unambiguously defined by the modulus of the S-matrix, the inelasticity parameter \(0<\eta<1\), according to

\[S=e^{2i\delta_{C}}=e^{2i\delta_{R}}\;e^{-2\delta_{I}}\qquad\delta_{I}=-\frac{1}{2}\ln\eta \tag{15}\]

Notice that the annihilation cross section in a given PW is entirely determined by \(\eta\) as

\[\sigma_{a}=(2J+1)\frac{\pi}{4q^{2}}\left[1-\eta^{2}\right] \tag{16}\]

For the tensor-coupled states (e.g. \({}^{3}\)SD\({}_{1}\)), the 2\(\times\)2 S-matrix can be parametrised by 6 real parameters (two "bare phase shifts" \(\bar{\delta}_{n}\), two mixing parameters \(\epsilon,\omega\) and two inelasticities \(\eta_{n}\)), according to the Bryan and Klarsfeld factorisation [32; 33; 34]

\[\left(\begin{array}{cc}S_{11}&S_{12}\\ S_{21}&S_{22}\end{array}\right)=\left(\begin{array}{cc}e^{i\bar{\delta}_{1}}&0\\ 0&e^{i\bar{\delta}_{2}}\end{array}\right)\;M\;\left(\begin{array}{cc}e^{i\bar{\delta}_{1}}&0\\ 0&e^{i\bar{\delta}_{2}}\end{array}\right) \tag{17}\]

where

\[M=\left(\begin{array}{cc}\cos\epsilon&i\sin\epsilon\\ i\sin\epsilon&\cos\epsilon\end{array}\right)H\left(\begin{array}{cc}\cos\epsilon&i\sin\epsilon\\ i\sin\epsilon&\cos\epsilon\end{array}\right) \tag{18}\]

and the matrix \(H\), real and symmetric, contains the inelasticity parameters \(\eta_{i}\) as eigenvalues

\[H=\left(\begin{array}{cc}\cos\omega&-\sin\omega\\ \sin\omega&\cos\omega\end{array}\right)\left(\begin{array}{cc}\eta_{1}&0\\ 0&\eta_{2}\end{array}\right)\left(\begin{array}{cc}\cos\omega&\sin\omega\\ -\sin\omega&\cos\omega\end{array}\right) \tag{19}\]

Unitary models are defined by the condition \(SS^{\dagger}={\bf 1}\), to be fulfilled by (15) and (17). For uncoupled states, this implies \(\eta\equiv 1\) (or equivalently \(\delta_{I}\equiv 0\)). For tensor-coupled states, it implies \(\omega=0\), \(\eta_{1}=\eta_{2}=1\) and so \(H={\bf 1}\). In this case, (17) takes the usual Stapp-Ypsilantis-Metropolis (SYM) form defining the standard bare phase shifts \(\bar{\delta}_{n}\) and mixing parameter \(\bar{\epsilon}\) of the unitary case [35]. It is worth mentioning that, in the non-unitary case, the definition of complex phase shifts for the coupled-channel states (\({}^{3}\)SD\({}_{1}\), \({}^{3}\)PF\({}_{2}\), ...) is not free from ambiguities. In fact, the inelasticity parameters can be negative: they are only limited by the so-called "unitarity condition" [33]

\[{\rm Tr}(1-SS^{\dagger})=2-{\rm Tr}(M^{2})=2-\eta_{1}^{2}-\eta_{2}^{2}>0\]

which presumes nothing about their sign. We have found that in some of the considered models one of the inelasticity parameters is indeed negative. This happens at relatively high energy, when the mixing angles \(\epsilon\) and \(\omega\) are large. The natural extension of the uncoupled case (15) to each inelasticity parameter, \(\delta_{I,n}=-\frac{1}{2}\ln\eta_{n}\), then poses a problem. There are alternative ways to define the complex phase shifts, e.g. the straightforward extension of the SYM parametrisation with complex parameters. However, though being totally consistent, their relation to the previously defined parameters is not clear, even for the real part of the phase shifts. Again, the differences appear at large values of the mixing parameters, i.e. beyond the zero energy region. Because of that, the definition of the low-energy parameters (scattering length and effective range) remains unambiguous. To get rid of these ambiguities, and to keep as close as possible to the results of the NPWA, we have only displayed in the tensor-coupled case the real bare phase shifts, together with the inelasticities and mixing parameters. The practical determination of these parameters is more involved than in the tensor-decoupled case, and we have followed the procedure described in Sect. VII of Ref. [7]. Together with the phase shifts, the corresponding effective range functions have also been computed and the corresponding Low Energy Parameters (LEP) have been extracted.
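The factorisation (17)-(19) is easy to implement and to check against the unitarity condition; a minimal sketch (with made-up parameter values) reads:

```python
import numpy as np

def smatrix_coupled(d1, d2, eps, omega, eta1, eta2):
    """2x2 coupled-channel S-matrix from the six real parameters of the
    Bryan-Klarsfeld factorisation, eqs. (17)-(19)."""
    D = np.diag(np.exp(1j * np.array([d1, d2])))          # bare phase factors
    C = np.array([[np.cos(eps), 1j * np.sin(eps)],
                  [1j * np.sin(eps), np.cos(eps)]])        # epsilon mixing
    R = np.array([[np.cos(omega), -np.sin(omega)],
                  [np.sin(omega),  np.cos(omega)]])        # omega rotation
    H = R @ np.diag([eta1, eta2]) @ R.T                    # real symmetric core
    return D @ (C @ H @ C) @ D

# Unitary limit (omega = 0, eta1 = eta2 = 1): S S^dagger = 1
S = smatrix_coupled(0.3, -0.1, 0.2, 0.0, 1.0, 1.0)
print(np.allclose(S @ S.conj().T, np.eye(2)))              # True

# Non-unitary case: Tr(1 - S S^dagger) = 2 - eta1^2 - eta2^2
S = smatrix_coupled(0.3, -0.1, 0.2, 0.15, 0.8, 0.6)
print(np.trace(np.eye(2) - S @ S.conj().T).real, 2 - 0.8**2 - 0.6**2)
```

followed, for the uncoupled states, by eq. (16), where \(\sigma_{a}=(2J+1)\,\pi\,[1-\eta^{2}]/(4q^{2})\) is obtained directly from the inelasticity parameter.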
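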
## 3 Results

### Phase shifts

We first present the strong \(\bar{\rm N}\)N complex phase shifts \(\delta_{C}=\delta_{R}+i\delta_{I}\) for the lowest partial waves as functions of the \(\bar{\rm N}\) laboratory momentum \(p_{Lab}\). The different states \(\alpha\equiv\{T,S,L,J\}\) are alternatively denoted in the spectroscopic notation by \(\alpha\equiv^{2T+1,2S+1}L_{J}\). The values corresponding to the Nijmegen PW analysis are taken from Tab. 8 of Ref. [7]. The curves for the other models, KW, DR2 and Paris (2009), have been computed by us directly from the potentials, with the original model parameters. We emphasize that the Julich model [12] results are, by construction, adjusted to the Nijmegen PW analysis, and there is no need to include them. There is a \(\pm n\pi\) ambiguity in the definition of the phase shift \(\delta\) which is formally solved by imposing \(\delta(E\rightarrow+\infty)=0\) and by keeping the same determination when the energy is decreased. Due to the sizable strengths of the \(\bar{\rm N}\)N potentials, this recipe is however of little practical interest, since it requires starting from the solution at very high energy and proceeding inwards in energy. In a unitary model (hermitian hamiltonian), another way to fix the determination is by imposing the value at the origin to be \(\delta_{\alpha}\)(E=0)=\(n\pi\), where \(n\) is the number of bound states in channel \(\alpha\). This imposes full knowledge of the spectrum for each partial wave. On the other hand, the validity of this result, known as the Levinson theorem, is not well established in non-unitary systems like the optical models we are considering in this work. Thus, and for the sake of comparison, we have conventionally adjusted all the computed phase shifts to the determination given in the Nijmegen PW analysis [7]. **Figure 2 contains the results for the \({}^{1}\)S\({}_{0}\) state**, left panel for isospin T=0 (\({}^{11}\)S\({}_{0}\)) and right panel for T=1 (\({}^{31}\)S\({}_{0}\)). The real part \(\delta_{R}\) is in solid line and the (positive) imaginary part \(\delta_{I}\) in dashed line, both in degrees. Different colours have been used to disentangle the different models: black for the NPW, blue for KW, red for Paris-2009. As one can see, there are major differences between them, especially in \(\delta_{R}\), which deserve some comments. For the T=0 state (left panel), the real phase shifts of the KW and DR models are close to the NPWA ones up to \(p_{Lab}\approx 700\) MeV/c, and they both depart dramatically from the Paris-2009 ones starting at very low energy. We remind the reader that the slope of the phase shift at the origin is (minus) the scattering length, since \(\delta_{\alpha}(q)\approx-a_{\alpha}q\), where \(q\) is the center of mass momentum, related to \(p_{Lab}\) by \(p_{Lab}=2q\). This difference could be due to the fact that the Paris potential has a near-threshold quasi-bound state in this channel, absent in the KW and Julich interactions. Its binding energy was \(E=-4.8-26i\) MeV in Ref. [19]. By using two independent methods, we confirm this state with a slightly different energy, \(E=-10.2-23.2i\) MeV. As we will see in the next subsection, the existence of this quasi-bound state is supported by a different sign in the corresponding scattering lengths (see Table 2). In this respect, a similar quasi-bound state is also present in the DR2 model with \(E=-138-320i\) MeV, much deeper in energy and leaving no trace in the scattering region. It is worth emphasizing, however, that there is no unambiguous relationship between the sign of the scattering length (real part) and the existence of bound states. For real potentials, the positive sign can be either a consequence of a repulsive interaction or of an attractive interaction having one (or several) bound states. The negative sign always indicates the existence of an attraction but tells us nothing about the existence or non-existence of a bound state, which will actually depend on the strength of the attraction. The situation is even more delicate when using complex potentials, and additional information is required to draw consistent conclusions. In particular, we would like to notice that the very existence of a quasi-nuclear state in the \({}^{1}\)S\({}_{0}\)\(\bar{\rm N}\)N state appears as a consequence of the sign of the measured \(\bar{\rm p}\)p scattering length [38]. It was shown (see Figure 7 of this reference) that for weak (and attractive) \(\bar{\rm p}\)p interactions, i.e. using large values of the cutoff radius, \(r_{c}>1.7\) fm, the sign of the Re\([a_{p\bar{p}}]\) scattering length is negative. It becomes positive, and in agreement with experiment, only when, by decreasing \(r_{c}\), the interaction is strong enough to create the first \(\bar{\rm p}\)p bound state. The \(a_{p\bar{p}}\) involves however both isospin components and, from this single quantity, it is not possible to conclude in which of the components the bound state appears. Notice also that the properties of such states, in particular their width, strongly depend on the annihilation dynamics. When using a short range annihilation potential, as in the Paris potential and in some coupled-channel unitary models (UCCM), the widths are much smaller than when using annihilation potentials of the type of eq. (14), as in the DR2, KW and Julich potentials. One can find a discussion in Refs. [41] for KW and [38] for UCCM. The particular E-dependence of the Paris potential in the \({}^{11}\)S\({}_{0}\) state is also observed in the imaginary phase shift \(\delta_{I}\), which remains in reasonable agreement with the other models only up to \(p_{Lab}\approx 200\) MeV/c and displays a maximum at \(p_{Lab}\approx 600\) MeV/c. When studying the \(J/\Psi\to\gamma p\bar{p}\) decay of the BES collaboration, the authors of Refs. [36; 37] interpreted the first peak in the \(\bar{\rm p}\)p invariant mass in terms of the above-mentioned \({}^{11}\)S\({}_{0}\) quasi-bound state, and the second one in terms of a resonant \({}^{11}\)S\({}_{0}\) state at \(\approx 250\) MeV above the threshold, i.e. \(p_{Lab}\approx 950\) MeV/c.
The peculiar form of the Paris \({}^{11}\)S\({}_{0}\) potential depicted in Fig. 15, displaying a deep attractive well and a repulsive pocket at \(r\approx\)1 fm, can indeed accommodate an S-wave resonance, but the height of the repulsive barrier is two times smaller than the supposed resonance energy. On the other hand, no clear evidence of this state is seen in the corresponding phase shifts. Only a vague structure is noticeable in \(\delta_{R}\) (red solid line) in the vicinity of \(p_{Lab}\approx 600\) MeV/c, with a maximum of \(\delta_{I}\) at almost the same energy, which could be related. For the T=1 state (right panel), the same discrepancy in \(\delta_{R}\) between the Paris result and the other models is observed. However, there is a major difference between the NPWA and the other optical models: the sharp resonant-like structure that the former manifests at \(p_{Lab}\approx 600\) MeV/c, both in the real and in the imaginary phase shifts. The existence of an S-wave shape resonance would require a \({}^{31}\)S\({}_{0}\) potential with a repulsive bump at finite distance, like for instance the one exhibited in Figure 15 for the Paris 2009 model in the T=0 channel. But none of the considered models exhibits such a behaviour (see Appendix). The possibility for the NPWA to generate a resonance in this partial wave with a purely attractive (single-channel) potential is difficult to understand. Indeed, in their analysis the long- and medium-range part of the \(\bar{\rm N}\)N interaction was parametrized by the one-pion (\(V_{1\pi}\)) plus the two-pion (\(V_{2\pi}\)) exchange potentials. According to the recent work from the Idaho group on the EFT NN interaction at N3LO [39], \(V_{2\pi}\) for S=0 states is strongly attractive in both isospin channels, in agreement with previous works [40]. Let us recall that when going from the NN to the \(\bar{\rm N}\)N system, the \(V_{1\pi}\) contribution changes sign while \(V_{2\pi}\) remains unchanged. We have displayed in Figure 6 the \(V_{1\pi}\) and \(V_{2\pi}\) contributions to the \(\bar{\rm N}\)N potential in spin singlet states for both T components.

Figure 5: \({}^{3}\)P\({}_{0}\)\(\bar{\rm N}\)N scattering phase shifts (degrees) as functions of the \(\bar{\rm N}\) laboratory momenta. We use the same conventions as in Fig. 2.

As one can see, the only repulsion appears for T=0 at NLO, as in the Paris potential, but it becomes increasingly attractive at higher orders. In the NPWA, the attractive pion tail (\(1\pi+2\pi\)) is prolonged at \(b=\)1.2 fm with a boundary condition corresponding to an also attractive square well (see Tab. 1 from [7]). Thus, the overall \({}^{31}\)S\({}_{0}\) potential is attractive, as is the case for the other examined potentials (KW, DR2, Paris); see Figure 15 in the Appendix. Furthermore, the resonant-like bump takes place at a center of mass energy of \(E\approx 100\) MeV, which is quite a high energy for a single-channel S-wave to produce a visible bump in the phase shifts. It must be pointed out that other mechanisms to mimic resonant-like structures in S-waves exist, for instance when a bound state, generated by the real part of the interaction, moves into the positive energy region due to the annihilation potential. The trajectory from the bound state to the continuum region in the complex energy plane was examined in our previous work with the KW potential [41]. This can be illustrated by the complex energy trajectory of the \({}^{11}\)S\({}_{0}\) state as a function of the annihilation strength \(W_{0}\).
When \(W_{0}=0\) there is a bound state at E=-54.7 MeV. When the annihilation is switched on, the imaginary part of the energy increases linearly with \(W_{0}\) and the state is pushed out into the continuum. It reaches Re(E)\(>\)0 at \(W_{0}\approx 0.47\) GeV, where it has a width \(\Gamma\approx 230\) MeV. The imaginary part of the energy continues to increase until the model value \(W_{0}=\)1.2 GeV. The widths of the positive energy states (\(E\sim\)100 MeV) obtained within this mechanism are thus very large (a few hundreds of MeV) and cannot generate structures like the one displayed in the right panel of Fig. 2. All these reasons make it difficult to understand the structure displayed in Figure 2 (right panel) in terms of a resonance in the \({}^{31}\)S\({}_{0}\) PW, especially at 100 MeV above threshold. On the other hand, it is worth noticing that the Julich potential nicely reproduces the \({}^{31}\)S\({}_{0}\) phase shifts of the NPWA that were attributed to an S-wave resonance, although no further explanation in terms of the underlying potential was given in their manuscript. This is a non-trivial achievement, which worked also reasonably well in their N2LO version [11], and shows the extreme flexibility of the EFT potential. Interestingly, the same group came to the conclusion [42; 43] that in order to reproduce the BES results on the \(J/\Psi\to\gamma p\bar{p}\) decay they were forced to slightly modify their original \({}^{31}\)S\({}_{0}\) potential. Once readjusted, the corresponding phase shifts no longer reproduce the NPWA structure of Figure 2 (right panel) but are very close to the smoothly varying KW results (blue curve). This happens in the N2LO [11] as well as in the N3LO version of the Julich potential [12]. The authors conclude that the origin of the near-threshold peaks observed in the BES experiment may be explained by assuming the existence of a \(\bar{\rm N}\)N \({}^{1}\)S\({}_{0}\) quasi-bound state, but in the T=1 rather than in the T=0 channel, as claimed in [36; 37]. Its energy was estimated to be \(E=-36.9-47.2i\) MeV. It could be relevant to notice that all the examined models have indeed a bound state in this PW when Im(V)=0. The imaginary part of the potential pushes the KW one into the continuum, while the DR2 and Paris ones remain bound, although sizeably deeper than in the Julich potential: E=-430-346i MeV for DR2 and E=-184-171i MeV for Paris. The possible origin of the resonant-like structure manifested in the NPWA, which is also present in other partial waves, will be discussed later.

Figure 6: One-pion (\(V_{1\pi}\)) and two-pion (\(V_{2\pi}\)) exchange potentials in singlet \(\bar{\rm N}\)N states. Results are taken from the G-parity transform of the EFT-inspired NN Idaho potential [39]. The different orders up to N3LO are plotted separately for both isospin (T) channels. The sum \(V_{\pi}\equiv V_{1\pi}+V_{2\pi}\) is strongly attractive. Only the T=0 states at NLO show a short range repulsion.

Figure 7: Pole trajectory of the \({}^{11}\)S\({}_{0}\) bound state in the KW model as a function of the strength of the imaginary part \(W_{0}\). In the absence of the annihilation potential (\(W_{0}=0\)), there is a bound state with E=-54.7 MeV. The effect of the annihilation potential is to generate a width and to pull the state out into the continuum. When \(W_{0}\approx 0.47\) GeV, one has \(Re(E)=0\) and \(Im(E)\approx 115\) MeV. With the model parameter \(W_{0}\)=1.2 GeV, the width of the state is \(\sim 400\) MeV.
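The mechanism of Fig. 7 can be mimicked with a toy model: an S-wave square well of complex depth \(-V_{0}-iW_{0}\). In the sketch below, the depth \(V_{0}=233\) MeV and radius \(R=1\) fm are made-up numbers, tuned only so that \(W_{0}=0\) starts near the quoted \(-54.7\) MeV; the KW potential itself is of course not a square well, so only the qualitative behaviour of the trajectory is meaningful:

```python
import numpy as np

HBARC, m = 197.327, 938.28   # MeV fm; paper convention H0 = -(hbar^2/m) * Laplacian

def match(E, V0, W0, R=1.0):
    """S-wave log-derivative matching for the complex square well -V0 - i*W0."""
    K = np.sqrt(m * (E + V0 + 1j * W0) + 0j) / HBARC   # interior momentum
    k = np.sqrt(m * E + 0j) / HBARC                    # exterior momentum
    if k.imag < 0:
        k = -k                                         # normalizable branch e^{ikr}
    return K / np.tan(K * R) - 1j * k

def pole(V0, W0, E):
    """Complex Newton search (numerical derivative) for match(E) = 0."""
    for _ in range(100):
        h = 1e-5
        step = match(E, V0, W0) / ((match(E + h, V0, W0) - match(E - h, V0, W0)) / (2 * h))
        E -= step
        if abs(step) < 1e-10:
            break
    return E

# Follow the trajectory by warm-starting each W0 step from the previous pole
E, V0 = -55.0 - 0.1j, 233.0
for W0 in (0.0, 300.0, 600.0, 900.0, 1200.0):          # MeV
    E = pole(V0, W0, E)
    print(W0, np.round(E, 1))
```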
Our last comment on this \({}^{1}\)S\({}_{0}\) state is to remark that the imaginary phase shifts \(\delta_{I}\) agree reasonably well with each other up to \(p_{Lab}\approx 400\) MeV/c, where the resonant-like behaviour of the NPWA starts showing up. **Figure 3** displays the S-matrix bare phase shifts and inelasticities (upper panel) and the mixing parameters (lower panel) of the triplet tensor-coupled \({}^{3}\)SD\({}_{1}\) state. The \({}^{3}\)S\({}_{1}\) bare phase shifts \(\delta_{S}\) seem to be more stable than the \({}^{1}\)S\({}_{0}\) ones, for both isospin states, although in this case the best agreement is among KW, DR2 and Paris. For T=0, the bare shifts \(\delta_{S}\) are very close to each other up to \(p_{Lab}\)=700 MeV/c, where DR2 displays some structure, crossing 90 degrees while increasing, which suggests a standard resonance driven by the D-wave due to its centrifugal barrier. The same structure is manifested in \(\delta_{D}\) and the mixing parameters. The inelasticities are less stable: \(\eta_{S}\) (left panel) departs sharply from the NPW at \(p_{Lab}\approx\)500 MeV/c, while for \(\eta_{D}\) the dispersion among the models starts already at \(p_{Lab}\approx\)300 MeV/c. The mixing parameter \(\epsilon\) also shows strong deviations above 300 MeV/c, while the \(\omega\)'s are relatively close to each other. For T=1 (right panel), the S-wave inelasticity of the Paris potential departs sensibly from the other models from the zero energy region onwards. The dispersion in the mixing parameters is huge, with the NPW and DR2 displaying a peculiar energy dependence. **The \({}^{1}\)P\({}_{1}\) phase shifts are displayed in Fig. 4**, using the same line and colour conventions as in Fig. 2. At first glance, it seems that in this case the global behaviour of the models is quite similar, at least at low energy. However, we will see in the next section that this is not really the case: major deviations exist from the origin, but these differences are hidden in this representation due to the \(q^{3}\) behaviour at small \(q\). **The \({}^{3}\)P\({}_{0}\) results are displayed in Fig. 5**, with the same colour convention as in Figs. 2 and 4. In this partial wave, the differences between the model predictions (KW, DR2 and Paris-2009, relatively close to each other) and the results of the NPWA are dramatic. For T=0, the deviations in \(\delta_{R}\) start already at \(q\approx 0\), displaying a different concavity. This corresponds to a negative effective range for the NPW (see Table 3). The imaginary phase shifts \(\delta_{I}\) start differing at \(p_{Lab}=200\) MeV/c, and the differences increase with the energy. For T=1, the low energy phase shifts of all the models are in quite good agreement, including the NPW results, which is manifested in the LEP's displayed in Table 3. Deviations start above \(p_{Lab}=200\) MeV/c, where the NPW results display the same resonant-like structure as the one observed in the \({}^{31}\)S\({}_{0}\) state, and practically at the same value of \(p_{Lab}\). Contrary to the \({}^{31}\)S\({}_{0}\) case, the \({}^{33}\)P\({}_{0}\) potential has a centrifugal barrier which could indeed accommodate a resonance, provided that the interaction is attractive enough. However, this is not the case in any of the examined models, KW, DR2 and Paris, which are globally repulsive in this channel (see Fig. 19 from the Appendix).
This is in agreement with the corresponding scattering volumes in Table 3, including the values of the Julich potential: they are very stable and have very small imaginary parts, as corresponds to a repulsive interaction. Nevertheless, the inner part of the NPW potential, used to define the boundary conditions at b=1.2 fm, is an attractive square well with 160 MeV depth. Even if beyond \(r=b\) the long range part \(V_{\pi}\) is slightly repulsive, the effective potential (V + centrifugal) in the vicinity of \(r=1\) fm remains globally attractive (\(-80\) MeV), and we cannot exclude that a resonance could indeed be produced. This would be in strong tension with all the potentials, but it is not the only reason to be cautious about such a possibility. On one hand, the maximum of the centrifugal barrier (60 MeV at \(r\)=1.2 fm) is sensibly smaller than the resonance energy (\(p_{Lab}=600\) MeV/c, \(E_{cm}\approx 95\) MeV). On the other hand, the \(\delta_{R}\) and \(\delta_{I}\) curves of both T=1 states, \({}^{31}\)S\({}_{0}\) and \({}^{33}\)P\({}_{0}\), can be superimposed in the vicinity of the peak. Since it is difficult to attribute such a coincidence to a dynamical effect occurring in two independent PWs at the same energy, this behaviour suggests looking for an alternative explanation. We will come back to this important point at the end of this section. **The \({}^{3}\)P\({}_{1}\) results are displayed in Fig. 8** with the colour convention of the previous figures. For the T=0 state (left panel), all the phase shifts display quite a similar behaviour up to 300 MeV/c, but the considered optical models start to depart dramatically from the NPWA at \(p_{Lab}\approx\)400 MeV/c. Below that, it is one of the most stable channels. For T=1 (right panel), the deviations are more sizeable already from the zero energy region, especially in \(\delta_{R}\) with Paris 2009. This difference is due to the existence of a near-threshold quasi-bound state with \(E=-3.6-12.42i\) MeV (slightly different from the value E=-4.5-9.0i MeV given in [19]) that is absent in the other models and is responsible for a different sign of the scattering volume (see Table 3). Finally, we show in Figure 9 the bare phase shifts and mixing parameters for the \({}^{3}\)PF\({}_{2}\) partial wave. For T=0, and in spite of some stability in the scattering lengths, the results for \(\delta_{P}\) fall into two different families: on one side the NPW and DR2, which are attractive, and on the other KW and Paris, which are repulsive. This qualitative difference persists in all the considered energy domain. The same splitting is observed in the mixing parameters above \(p_{Lab}\approx\)200 MeV/c. For T=1, a similar situation happens, with the \(\delta_{P}\) values of the NPW evolving in the opposite direction to the rest of the models. Remarkably, the mixing parameter \(\epsilon\) remains stable up to \(p_{Lab}\approx\)600 MeV/c, while the \(\omega\) values start diverging at 300 MeV/c. As we already mentioned, the NPW results for the \({}^{31}\)S\({}_{0}\) and \({}^{33}\)P\({}_{0}\) states display the same kind of non-trivial structure at \(p_{Lab}\approx\) 600 MeV/c, both for the real and the imaginary phases, while these structures are absent in the examined optical models, with the exception of the Julich potential, which reproduces them well. It seems however unlikely, although not impossible, that a dynamical effect could generate two resonances at the same energy in different partial waves having the same parameters, one in an S-wave and the other in a P-wave.
Looking for a possible explanation of these structures, we noticed that this energy region corresponds to a sharp maximum of \(\delta_{I}\), that is, to a minimum of the inelasticity parameter \(\eta\), which turns out to be very small in these particular waves. For the \({}^{31}\)S\({}_{0}\), for instance, one has \(\eta\approx 0.01\) at the minimum, that is, one order of magnitude smaller than for the \({}^{11}\)S\({}_{0}\). The same is true for the \({}^{33}\)P\({}_{0}\) state when compared to the other P-waves. This can be seen in Fig. 10, where the inelasticity parameter \(\eta\) is plotted as a function of \(p_{Lab}\) for several states and where the peculiarity of the \({}^{31}\)S\({}_{0}\) and \({}^{33}\)P\({}_{0}\) states is manifested. Since

\[\mid S_{\alpha}(E)\mid=e^{-2\delta_{I}^{\alpha}(E)}=\eta_{\alpha}(E)\]

the scattering matrix of the \({}^{31}\)S\({}_{0}\) is in modulus \(\sim 10^{-2}\), with similar values for the \({}^{33}\)P\({}_{0}\) one. Given the (not estimated) errors of the PW results in the region of the resonant-like structure, the S-matrix of these particular waves is in fact compatible with zero, quite a different situation than for a real resonance, where the S-matrix would rather have a pole. It corresponds actually to "black sphere model" scattering, which is quite different from resonant scattering, although it does produce some structure in the phase shifts. It remains to be seen whether these structures are an artefact of the analysis or whether they are unavoidable conclusions of it. In this respect, it could be pertinent to notice that both states where they occur are \(J=0\): they have very little statistical weight and could be affected by large errors in their determination.

Figure 8: \({}^{3}\)P\({}_{1}\)\(\bar{\rm N}\)N scattering phase shifts (degrees) as functions of the \(\bar{\rm N}\) laboratory momenta, using the same conventions as in Fig. 2.

Furthermore, the very small inelasticity parameter \(\eta\) enters quadratically in the annihilation cross section (16), from which it should, in principle, be extracted. However, the possibility of extracting from the data analysis a significant signal of the order of \(10^{-4}\) seems unrealistic. This is especially true in an observable largely dominated by the \({}^{3}\)SD\({}_{1}\) and \({}^{3}\)PF\({}_{2}\) partial waves at the considered values of \(p_{Lab}\approx 600\) MeV/c. On the other hand, to generate a model exhibiting zeros of the S-matrix, practically at the same point of the real axis, in different partial waves appears to be extremely artificial. For all these reasons, we believe that the possibility for a PWA to move from one solution to another in the vicinity of the minima of the inelasticity parameter should not be disregarded. One can thus conjecture that the phase shifts in the vicinity of the \(\delta_{I}\) peak could, in fact, be continued without exhibiting any resonant-like behaviour, as happens in all the discussed models, and as was considered in Refs. [42; 43] for the \({}^{1}\)S\({}_{0}\) state. This would not eliminate all the aforementioned inconsistencies among the \(\bar{\rm N}\)N optical models, but it would greatly simplify the analysis in these two partial waves. While the above presentation of the phase shifts has some interest for a global understanding of the interaction, it is not very useful for a detailed comparison of the models at low energy.
Apart from the poor determination of the phase shifts themselves, the \(L>0\) states have a low-energy behaviour like \(\delta(q)\approx-a_{L}q^{2L+1}\), which hides their contribution in this energy region. One can remove the "centrifugal term" by redefining reduced phase shifts \(\bar{\delta}=\delta/q^{2L}\), as was done in [41], but we believe it is more instructive to compare the effective range functions \(Z_{\alpha}\) and their dependence on the center of mass momentum \(q\) for the different partial waves. This will be done in the following section.

Figure 9: \({}^{3}\)PF\({}_{2}\)\(\bar{\rm N}\)N bare phase shifts and inelasticities (upper panel) and mixing parameters (lower panel), as functions of the \(\bar{\rm N}\) laboratory momenta.

### The Effective range functions and low energy parameters

For the tensor-decoupled states the effective range functions \(Z_{\alpha}\) take the form

\[Z_{\alpha}(q^{2})=q^{2L+1}\cot\delta_{\alpha}=-\frac{1}{a_{\alpha}}+\frac{1}{2}r_{\alpha}q^{2}+o(q^{4}) \tag{20}\]

It is interesting to plot this quantity as a function of \(q^{2}\), for it easily allows one to determine the validity region of the effective range expansion (ERE) made explicit in eq. (20), given by the linearity domain near the origin. Notice that the value at the origin, \(Z(0)\),

\[Z(0)=Z(0)_{R}+i\;Z(0)_{I}=\frac{-a_{R}+ia_{I}}{\mid a\mid^{2}} \tag{21}\]

is related to the scattering length as

\[a_{R}= -\mid a\mid^{2}Z(0)_{R}=-\frac{Z(0)_{R}}{\mid Z(0)\mid^{2}}\]
\[a_{I}= \mid a\mid^{2}Z(0)_{I}=\frac{Z(0)_{I}}{\mid Z(0)\mid^{2}} \tag{22}\]

In particular, one must imperatively have \(Z(0)_{I}<0\). In what follows we will display \(Z_{\alpha}(q^{2})\) for all the considered PWs: first (left panel) in the full energy domain \(q^{2}\in[0,3]\) fm\({}^{-2}\), which corresponds to \(p_{Lab}\leq 700\) MeV/c; next (right panel of the same figure) in a zoom of the low energy region \(q^{2}\in[0,0.2]\) fm\({}^{-2}\) (\(p_{Lab}\leq 180\) MeV/c), to better exhibit the linearity domain and determine the low energy parameters (\(a_{\alpha},r_{\alpha}\)), abusively denoted scattering "length" and effective "range". The scattering length is given by the \(Z(0)\) value, following eq. (21), and the effective range by (twice) the slope at the origin. The colour and model conventions are the same as those used for the phase shifts: the real part of \(Z\) is plotted with solid lines and the imaginary part with dashed lines. The extracted LEP values are collected in Tables 2 and 3 for the considered models and PWs. The NPW results given in [7] are limited to \(p_{Lab}\geq 100\) MeV/c. We have quadratically extrapolated their values to the origin by using the 3 lowest points. They are denoted by Nijm* in Tables 2 and 3 and may have only an indicative value, in particular when comparing them to the Julich results. Despite this naive extrapolation, they all fulfill \(Z(0)_{I}<0\). **The results for the \({}^{1}\)S\({}_{0}\)** state are displayed in Fig. 11. The upper panel corresponds to T=0 and the lower one to T=1. The particular behaviour of the Paris results is manifested, both in the real and in the imaginary phase shifts, in the whole energy domain. One finds however a qualitative agreement among the other models, including the NPW. As one can see from the upper right figure, the \(Z(q^{2})\) dependence of the T=0 state for the Paris potential is totally flat in the full domain. For the other models, the ERE is valid only at relatively low energy, \(q^{2}\leq 0.05\) fm\({}^{-2}\), i.e. \(p_{Lab}\approx 120\) MeV/c. Beyond this energy, the \(q^{4}\) terms neglected in (20) become relevant, and any linear extrapolation in \(q^{2}\) would lead to wrong results.
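The extraction of the LEPs from eq. (20) can be illustrated with a minimal sketch (using synthetic phase shifts built from made-up complex parameters, not a model prediction):

```python
import numpy as np

def lep_from_phase_shifts(q, delta, L=0):
    """Extract complex low-energy parameters from eq. (20):
    Z(q^2) = q^{2L+1} cot(delta) ~ -1/a + (r/2) q^2 near the origin."""
    Z = q ** (2 * L + 1) / np.tan(delta)            # complex cotangent
    cr = np.polyfit(q**2, Z.real, 1)                # linear fits in q^2,
    ci = np.polyfit(q**2, Z.imag, 1)                # real and imaginary parts
    Z0 = cr[1] + 1j * ci[1]
    return -1.0 / Z0, 2 * (cr[0] + 1j * ci[0])      # (a, r), cf. eq. (21)

# Synthetic check with made-up complex LEPs (S-wave, values in fm)
a_true, r_true = 1.0 - 0.6j, 0.8 + 0.3j
q = np.linspace(0.05, 0.4, 20)                      # fm^-1, inside the ERE domain
Z = -1.0 / a_true + 0.5 * r_true * q**2
delta = np.arctan(q / Z)                            # invert eq. (20) for L=0
print(lep_from_phase_shifts(q, delta))              # recovers (a_true, r_true)
```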
Figure 11: Effective range function (20) for the \(\bar{\rm N}\)N \({}^{1}\)S\({}_{0}\) states as a function of the center of mass momentum squared (in fm\({}^{-2}\)). Upper figures correspond to T=0 and the lower ones to T=1. In both cases, the right figure is a zoom of the left one, restricted to the low energy domain \(q^{2}\in[0,0.2]\) fm\({}^{-2}\) (\(p_{Lab}\leq 180\) MeV/c), where the ERE of (20) is manifested by the linear behaviour near the origin.

The \(Z(q^{2})\) dependence for the T=1 state (lower panels) is fairly smooth, with no visible trace of the NPW resonant-like behaviour manifested in the corresponding phase shift (right panel of Fig. 2) at \(p_{Lab}\approx 600\) MeV/c (\(q^{2}\approx 2.25\) fm\({}^{-2}\)). The differences among the models are much smaller than for T=0 (except for the Paris potential) and lead to scattering length values which are positive and consistent with each other within 15% (see Table 2). **The effective range functions for the P-waves are displayed in Figures 12 to 14.** In this representation the low-energy part is magnified with respect to the phase shifts, and one can see that, as was the case for the S-waves, sizeable differences emerge among the models themselves and with respect to the NPWA. **Figure 12 shows \(Z(q)\) for the \({}^{1}\)P\({}_{1}\) state**. For T=0, the results for the real part (upper left panel) have a similar qualitative behaviour: monotonically increasing from the origin up to a maximum value and then decreasing, with a zero crossing towards the negative region. However, although the Paris, DR and KW models are close to each other up to \(q^{2}\)=0.5 fm\({}^{-2}\) (\(p_{Lab}\approx 300\) MeV/c), they differ significantly at high energy, especially with respect to the NPW results.

Figure 10: Inelasticity parameter \(\eta\) in the NPW as a function of the \(\bar{\rm N}\) laboratory momenta.

For T=1 (lower panels) the dispersion is even larger and starts at low energy. Notice that the DR2 model displays a fast increase near the origin, which suggests a near-threshold resonant state. This also manifests itself in the lower right panel, with a breakdown of the ERE below \(q^{2}=0.05\) fm\({}^{-2}\). Despite these differences in the model predictions, it is worth noticing the remarkable stability of the real part of the scattering volumes for both isospin states. They all remain within a 15% difference band, as one can see in Table 3. A possible reason for this stability will be discussed at the end of the section. The essential differences between the models are in fact given by their absorptive parts. These could, in principle, be settled by additional measurements of the full fine structure in antiprotonic atoms. Unfortunately this measurement still waits for its turn [50]. **The results for the \({}^{3}\)P\({}_{0}\) wave are shown in Figure 13**. Again, for T=0 a general qualitative agreement of the real phase shifts is observed among the DR, KW and Paris models in a more extended energy region, with a departure from the NPW already at \(q=0\). The validity of the ERE expansion (upper right panel) extends up to \(q^{2}=0.2\) fm\({}^{-2}\). Except for the Julich model, the corresponding scattering volumes have a large real part, attributed in [41] to the existence of a near-threshold bound or resonant state, and are in very close agreement (\(<\)2%). For T=1, the NPW resonance-like structure displayed in Fig. 5 at \(p_{Lab}\approx 600\) MeV/c leaves no trace in the effective range function \(Z\). However, a similar structure, absent at the level of the phase shifts, is seen in \(Z\) at \(q^{2}=0.5\) fm\({}^{-2}\), breaking any possible agreement with the other optical models. The corresponding scattering lengths (real part) remain close within 15%, and the imaginary part is very small.

**The \({}^{3}\)P\({}_{1}\) effective range functions \(Z(q^{2})\) are displayed in Figure 14** for both isospin channels. The T=0 state (upper panel) is the most stable partial wave, for the real as well as for the imaginary part. This is probably due to the \({}^{13}\)P\({}_{1}\) potential, repulsive in all the models, which also explains the small imaginary part. The quantitative disagreements start only above \(q^{2}\approx\)1 fm\({}^{-2}\) (\(p_{Lab}\approx\)400 MeV/c) and increase with the energy. The ERE (right panel) works perfectly in the whole domain, and the LEPs (both \(a_{1}\) and \(r_{1}\)) are in close agreement, with an almost vanishing imaginary part. For T=1, the main difference comes from Paris 2009, which displays a different qualitative behaviour in the whole energy domain, including the LEPs. As mentioned, this particular feature is due to a quasi-bound state generated in this model at E=-3.6-i12.4 MeV. The other optical models are in quite good agreement. The low energy parameters (\(a_{L}\) and \(r_{L}\)) of the examined partial waves are collected in Table 2 for S-waves and Table 3 for P-waves. In view of these results some general remarks can be drawn. They are, in order:

1. Despite the huge differences in the phase shifts described in this and the previous sections, there is a remarkable stability in the "qualitative" zero-energy predictions, mainly the scattering lengths and volumes. We mean by that their "repulsive" or "attractive" character, more precisely the sign of their real part, the almost vanishing imaginary part of the \({}^{13}\)P\({}_{1}\) and \({}^{33}\)P\({}_{0}\) states, or the relatively small values of the \({}^{3}\)PF\({}_{2}\). This is especially true if one takes into account that none of these models has been adjusted to reproduce the zero energy protonium results and that the potentials themselves are very different. For the S-waves, only the \({}^{11}\)S\({}_{0}\) results of the Paris and DR2 potentials have a different sign. For the P-waves, the only exception in this general qualitative agreement is the \({}^{33}\)P\({}_{1}\) state, again due to the Paris potential, which has a different sign. As we have already noticed, the reason for this difference is in both cases related to a near-threshold quasi-bound state in the corresponding PW. Furthermore, in most of the states this agreement is not only qualitative, but there is a reasonable quantitative agreement (within 10-20% of their average) in the numerical values, except in some particular states and models that we will detail below.

Figure 12: Effective range function (20) for the \(\bar{\rm N}\)N \({}^{1}\)P\({}_{1}\) state as a function of the center of mass momentum squared (in fm\({}^{-2}\)), with the same conventions as in Figure 11.

2. A possible explanation for this astonishing stability in the real parts could be one-pion exchange dominance, as was suggested by Ericson and Weise for the NN case (see Sect. 3.8 of Ref. [45]).
Indeed, for a tensor-uncoupled state, the integral form for the scattering "length" can be written as \[a_{L}=\lim_{q\to 0}\ \frac{1}{q^{2L+2}}\ \int_{0}^{\infty}dr\;\hat{j}_{L}(qr)\;v(r)\;u_{L}(r)\] (23) where \(v=\frac{m}{\hbar^{2}}V\) is the corresponding potential, \(\hat{j}_{L}\) the reduced regular spherical Bessel function, and \(u_{L}\) the reduced radial solution that behaves asymptotically as \[u_{L}(r)=\hat{j}_{L}(qr)+\tan\delta_{L}\;\hat{n}_{L}(qr)\] According to these authors, a good approximation of \(a_{L}\) (for L\(>\)0) is provided by the Born approximation of the one-pion potential tail \(v_{\pi}\), that is: \[a_{L}^{B}(\pi)=\lim_{q\to 0}\ \frac{1}{q^{2L+2}}\ \int_{0}^{\infty}dr\;\mid\hat{j}_{L}(qr)\mid^{2}\;v_{\pi}(r)\] (24) By inserting the one-pion potential \[V_{\pi}(x)=c_{\pi}\left[\sigma\cdot\sigma+S_{12}\chi_{T}(x)\right]\;Y(x)\;\tau\cdot\tau\] (25) with \(x=\frac{m_{\pi}c}{\hbar}\,r\), \[c_{\pi}=\frac{m_{\pi}}{3}\frac{g^{2}}{4\pi}\left(\frac{m_{\pi}}{2M}\right)^{2},\] \[Y(x)=\frac{e^{-x}}{x},\] and \[\chi_{T}(x)=1+\frac{3}{x}+\frac{3}{x^{2}}\] into eq. (24) one gets: \[a_{L}^{B}(\pi)=\frac{c_{\pi}}{(2L+1)!!^{2}}\,\left(\frac{M}{\hbar^{2}}\right)\,\left(\frac{\hbar}{m_{\pi}}\right)^{2L+3}(\tau\cdot\tau)\,\left\{(\sigma\cdot\sigma)(2L+1)!+S_{12}\Big{[}(2L+1)!+3[(2L)!+(2L-1)!]\Big{]}\right\}\] Figure 13: Effective range function (20) for the \(\bar{\rm N}{\rm N}\) \({}^{3}{\rm P}_{0}\) state as function of the center of mass momentum squared (in fm\({}^{-2}\)) and the same conventions as in figure 11. The first remark about the latter expression is the "\(\tau\cdot\tau\) rule", i.e. the fact that the ratio of the scattering lengths of the two isospin components of the same PW is given by the value of the \(\tau\cdot\tau\) operator: \(\tau\cdot\tau\)=-3 for T=0 and \(\tau\cdot\tau\)=1 for T=1. Indeed, the real parts of the scattering volumes displayed in Table 3 (uncoupled states) roughly fulfil this requirement. There are two exceptions: the results of Nijmegen-Julich \({}^{3}\)P\({}_{0}\) (in relative sizes) and the Paris \({}^{3}\)P\({}_{1}\) (in relative sizes and signs). By using the numerical values \(m_{\pi}\)=138.039 MeV, M=938.9183 MeV (averaged pion and N masses) and \(g^{2}/4\pi\)=14.4, one obtains for the \(\bar{\rm N}\)N states the results displayed in Table 4. The \({}^{3}\)P\({}_{2}\) result is decoupled from its \({}^{3}\)F\({}_{2}\) tensor partner. With the recommended NPW \(\pi NN\) coupling constant \(g^{2}/4\pi\)=13.9, a reduction factor of 0.965 must be used. For the \({}^{1}\)P\({}_{1}\) state, the Born pion values are close (15%) to the full results from Table 3, except for KW where the difference is twice as large. For \({}^{33}\)P\({}_{0}\), the differences are of the same order. Only the \({}^{13}\)P\({}_{0}\) Julich result shows a large discrepancy in the \(\tau\cdot\tau\) rule. For \({}^{33}\)P\({}_{1}\), the agreement is even better, except for the instructive Paris result, which differs substantially. Indeed, the "one-pion exchange dominance" is based on the assumption that the scattering solution \(u_{L}\) is close to the free wave \(\hat{j}_{L}\) in the dominant part of the integral (23). In case of the existence of a bound state, as in the Paris model, \(u_{L}\) has a node and changes its sign with respect to the free solution. For \({}^{33}\)PF\({}_{2}\), the Born results from eq. (24) cannot directly be applied since they were established for tensor-uncoupled states. 
However, for the single \({}^{3}\)P\({}_{2}\) state they predict a vanishing Re(a), and the small values of the full results from Table 3 could be a trace of this compensation. To close this remark, we would like to mention that while the "one-pion exchange dominance" is justified in the NN case, where it was established, its applicability to \(\bar{\rm N}\)N physics is uncertain. Figure 14: Effective range function (20) for the \(\bar{\rm N}\)N \({}^{3}\)P\({}_{1}\) state as function of the center of mass momentum squared (in fm\({}^{-2}\)), with the same convention as in figure 11. Apart from disregarding the annihilation physics, this approach will fail in the presence of one or several bound or resonant states, as is the case in most \(V_{\bar{N}N}\) models. We have seen an illustrative example in the \({}^{33}\)P\({}_{1}\) case, with the Paris results having a different sign. However, the same breakdown of the "one-pion exchange dominance" can happen if there are two bound states, although keeping the same sign. This can be the case of the \({}^{13}\)P\({}_{1}\) state with KW or the \({}^{13}\)P\({}_{0}\) with Julich, where the \(\tau\cdot\tau\) rule is badly violated. 3. The imaginary part of the S-waves is also remarkably stable, within quite narrow limits: Im[a(\({}^{11}\)S\({}_{0}\))]=1.18\(\pm\) 0.17 fm, Im[a(\({}^{31}\)S\({}_{0}\))]=0.60\(\pm\)0.03 fm and Im[a(\({}^{13}\)SD\({}_{1}\))]=0.82\(\pm\)0.05 fm. Only the \({}^{33}\)SD\({}_{1}\) state presents some dispersion, essentially due to the Paris result, with Im[a(\({}^{33}\)SD\({}_{1}\))]=0.73\(\pm\)0.30 fm. The imaginary part of the P-waves is much less stable, although some common features are shared, like the small values for the \({}^{13}\)P\({}_{1}\), due to its repulsive character. 4. Of particular interest is the \({}^{13}\)P\({}_{0}\) state, with a very large real part \(\sim\) 9 fm\({}^{3}\) shared by the DR, KW and Paris models (the Julich result is 3 times smaller), and confirmed by protonium data. This large and negative value was attributed in [41] to the existence of a near-threshold state. However, it also finds a "natural" explanation in terms of the "pion dominance" in Table 4, which predicts Re(a)=-9.3 fm\({}^{3}\). This, at first glance, puzzling situation can be reconciled if one takes into account the result of Ref. [46], according to which, in the chiral limit (\(m_{\pi}\)=0), the NN \({}^{3}\)P\({}_{0}\) state (so T=1) has a zero-energy virtual state, with a diverging scattering volume. The existence of such an NN bound state, as well as some related consequences in nuclear matter, is prevented by the short-range NN repulsion, which is however absent in the \(\bar{\rm N}\)N case and leaves open such a possibility. 5. The most stable states are those with a repulsive potential. These are the \({}^{13}\)P\({}_{1}\) and \({}^{33}\)P\({}_{0}\). They have in common a small imaginary part both in \(a_{1}\) and \(r_{1}\), since they are only weakly sensitive to the annihilation dynamics. For \({}^{13}\)P\({}_{1}\), all models agree with a real part of 4.9\(\pm\)0.3 fm\({}^{3}\) and an imaginary part smaller than 0.1 fm\({}^{3}\). For \({}^{33}\)P\({}_{0}\), they all agree with a real scattering volume of 2.5\(\pm\) 0.25 fm\({}^{3}\) and a small imaginary part \(\sim\) 0.1 fm\({}^{3}\). The Paris potential is a particular case: \(V_{{}^{33}P_{1}}\) is attractive, with the quasi-bound state previously discussed. However, \(V_{{}^{13}P_{1}}\) is even more attractive than the former but has a positive Re[\(a_{{}^{13}P_{1}}\)], as for the repulsive models. 
This suggests the existence of a second bound state in the \({}^{13}\)P\({}_{1}\) channel, which plays the role of an effective repulsion. 6. Concerning the effective range values \(r_{L}\), there is no trace of stability in the model predictions, which reflects the fact that beyond the zero-energy region the examined \(\bar{\rm N}\)N optical models display larger differences. ### Hydrogen atoms The measurement of level shifts and widths in Hydrogen atoms is an alternative way to access the \(\bar{\rm p}\)p scattering lengths and volumes. A formula derived by Trueman [47] establishes a connection between the protonium complex level shifts and the Coulomb-corrected \(\bar{\rm p}\)p scattering lengths. In the case of antiprotonic hydrogen, due to the large Bohr radius (\(B\approx\) 57 fm), this relation is essentially linear [18]. The \(\bar{\rm p}\)p scattering lengths are obtained by coupling both T components through Coulomb and \(\Delta m\) corrections. One obtains however a reasonable approximation, denoted \(a_{\bar{N}N}\) to distinguish it from the exact value \(a_{\bar{p}p}\), by neglecting this coupling and isospin-averaging the results of Tables 2 and 3, i.e.: \[2\;a_{\bar{N}N}=a_{T=0}+a_{T=1} \tag{26}\] Table 5 shows the comparison between the computed values - \(a_{\bar{N}N}\) and \(a_{\bar{p}p}\) - and those extracted from the atomic measurements [48] via the Trueman relation. Notice that the inclusion of the Coulomb and \(\Delta m\) corrections can represent up to a \(\approx\) 30% difference between the \(a_{\bar{N}N}\) and \(a_{\bar{p}p}\) values. For S-waves, the discrepancies existing in Table 2 within the different models, mainly concerning the \({}^{11}\)S\({}_{0}\) state, are smeared out in the T- and S-averaged value, which is found to be in nice agreement among the models and with the experimental value. \begin{table} \begin{tabular}{l l l l l} & \(a_{0}\) & \(r_{0}\) & \(a_{0}\) & \(r_{0}\) \\ \hline T=0 & \({}^{11}\)S\({}_{0}\) & & \({}^{13}\)SD\({}_{1}\) & \\ \hline Nijm* & -0.17 -1.01i & -6.9-2.9 i & – & – \\ Julich & -0.21 -1.23i & – & 1.42-0.88i & – \\ Paris 09 & 1.27 -1.18i & -0.53+0.14i & 1.20-0.80i & – \\ KW & -0.03 -1.35i & -4.7-7.9i & 1.23-0.77i & – \\ DR2 & 0.10 -1.07i & -11-6.2i & 1.28-0.78i & – \\ \hline T=1 & \({}^{31}\)S\({}_{0}\) & & \({}^{33}\)SD\({}_{1}\) & \\ \hline Nijm* & 1.02 -0.60i & 0.7-1.2i & – & – \\ Julich & 1.05 -0.58i & – & 0.44-0.96i & – \\ Paris 09 & 0.76 -0.56i & 0.9-3.9i & 0.61-0.44i & – \\ KW & 1.07 -0.62i & 0.7-1.9i & 0.78-0.80i & – \\ DR2 & 1.20 -0.57i & 0.6-1.6i & 0.89-0.71i & – \\ \end{tabular} \end{table} Table 2: S-wave \(\bar{\rm N}\)N low energy parameters (in fm) for the considered optical models: Julich results are taken from Tab 3 of Ref. [12], KW and DR2 from [18], Paris 2009 have been recomputed and are in agreement with [44]. The values of Nijmegen are obtained by extrapolating the phase shifts from Figures 2 and 3. A remarkably good agreement is also observed in the non-trivial \({}^{3}\)SD\({}_{1}\) state. The major problem to improve the situation for S-waves is the \({}^{11}\)S\({}_{0}\) near-threshold quasi-bound state, present in the Paris and probably DR2 models but absent in the KW and Julich ones. It results in a negative value of the corresponding scattering length, which generates a factor of 2 difference in the real parts. For P-waves, little is known experimentally. The measurement of the isolated \({}^{3}\)P\({}_{0}\) \(\bar{\rm p}\)p level shift [50] seems to confirm the large value of the \({}^{13}\)P\({}_{0}\) scattering volume displayed in Table 3, predicted by the Paris, KW and DR2 models. 
In fact, the large value of Re[\({}^{13}\)P\({}_{0}\)]\(\approx\)-9 fm\({}^{3}\) that they predict, and that is averaged with a positive Re[\({}^{33}\)P\({}_{0}\)]\(\approx\)2.5 fm\({}^{3}\), is still insufficient to reproduce the experimental result. One would rather need Re[\({}^{13}\)P\({}_{0}\)]\(\approx\) -13 fm\({}^{3}\). This is clearly in tension with the Julich prediction, which is one order of magnitude smaller than the other models and the experimental value. Since the large values of Re[\(a_{13P0}\)] are predicted by the "pion dominance" described in the previous section, one could find an explanation of this disagreement in the particular form of the one-pion potential (eq. 2.1 of [12]), which includes a non-local relativistic correction and results in a smaller effective \(\pi NN\) coupling constant. As polarisation experiments are missing, the atoms offer a unique possibility to study the spin structure of the interaction. Again, there are sizeable differences between the models. Unfortunately, these happen also in the absorptive parts, which are vital for the PUMA experiment. These should be resolved on the side of theory and, more importantly, on the side of experiment. The priority, we believe, should be given to the full resolution of the atomic fine structure in Hydrogen and Deuterium. In particular, the 2\(P\) state in Hydrogen displays a clear \({}^{3}\)P\({}_{0}\) state, indicated in Table 5, and three other states lumped together and difficult to resolve. See Ref [50] for a dedicated review. An improvement of this resolution would be extremely helpful to eliminate the model differences in the partial waves. ## 4 Conclusion We have compared the strong \(\bar{\rm N}\)N phase shifts obtained in the Nijmegen Partial Wave Analysis [7], used to construct the chiral EFT Julich optical potential at N3LO [12], with some of the currently used \(\bar{\rm N}\)N optical models in configuration space: Dover-Richard (DR2) [20; 21], Kohno-Weise [22] and Paris (updated version from 2009) [19]. For all these models we have computed the strong phase shifts and extracted the low energy parameters (scattering lengths and effective ranges). The corresponding potentials are included in the Appendix. This comparison is limited to the S and P waves. 
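The pion-dominance estimates of Table 4 (reproduced below) are simple enough to be checked by direct arithmetic. The following sketch, which is our own illustration, evaluates the closed-form Born expression of Sec. 3 with the constants quoted in the text; the overall sign is a convention chosen here to match Table 4, the function name and state list are ours, and the last digit of the \({}^{13}\)P\({}_{0}\) entry differs only by rounding.

```python
from math import factorial

HBARC = 197.327                                  # MeV fm
M_PI, M_N, G2_4PI = 138.039, 938.9183, 14.4      # values quoted in the text

def a_born_pion(L, sigma_sigma, tau_tau, S12):
    """Born one-pion scattering 'length' (fm^{2L+1}): Eq. (24) in closed form;
    overall sign chosen to reproduce Table 4."""
    c_pi = (M_PI / 3.0) * G2_4PI * (M_PI / (2.0 * M_N)) ** 2           # MeV
    dfact_sq = (factorial(2 * L + 1) // (2 ** L * factorial(L))) ** 2  # (2L+1)!!^2
    radial = c_pi * (M_N / HBARC ** 2) * (HBARC / M_PI) ** (2 * L + 3) / dfact_sq
    spin = (sigma_sigma * factorial(2 * L + 1)
            + S12 * (factorial(2 * L + 1) + 3 * (factorial(2 * L) + factorial(2 * L - 1))))
    return -radial * tau_tau * spin

# (sigma.sigma, tau.tau, S12) operator values for the P-waves of Table 4
states = {"11P1": (-3, -3, 0),   "31P1": (-3, 1, 0),
          "13P0": (1, -3, -4),   "33P0": (1, 1, -4),
          "13P1": (1, -3, 2),    "33P1": (1, 1, 2),
          "13P2": (1, -3, -0.4), "33P2": (1, 1, -0.4)}
for name, ops in states.items():
    print(name, round(a_born_pion(1, *ops), 2))
# -3.09, 1.03, -9.28, 3.09, 6.18, -2.06, 0.0, 0.0  (cf. Table 4)
```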
\begin{table} \begin{tabular}{l c c c c} & \(\sigma\cdot\sigma\) & \(\tau\cdot\tau\) & \(S_{12}\) & Re[\(a_{L}\)] \\ \hline \({}^{11}\)P\({}_{1}\) & -3 & -3 & 0 & -3.09 \\ \({}^{31}\)P\({}_{1}\) & -3 & 1 & 0 & 1.03 \\ \hline \({}^{13}\)P\({}_{0}\) & 1 & -3 & -4 & -9.27 \\ \({}^{33}\)P\({}_{0}\) & 1 & 1 & -4 & 3.09 \\ \hline \({}^{13}\)P\({}_{1}\) & 1 & -3 & 2 & 6.18 \\ \({}^{33}\)P\({}_{1}\) & 1 & 1 & 2 & -2.06 \\ \hline \({}^{13}\)P\({}_{2}\) & 1 & -3 & -2/5 & 0 \\ \({}^{33}\)P\({}_{2}\) & 1 & 1 & -2/5 & 0 \\ \end{tabular} \end{table} Table 4: \(\bar{\rm N}\)N scattering volumes (fm\({}^{3}\)) as predicted by the pion dominance from [45]. \begin{table} \begin{tabular}{l c c c c c c c c} & \(a_{1}\) & \(r_{1}\) & \(a_{1}\) & \(r_{1}\) & \(a_{1}\) & \(r_{1}\) & \(a_{1}\) & \(r_{1}\) \\ \hline T=0 & \({}^{11}\)P\({}_{1}\) & & \({}^{13}\)P\({}_{0}\) & & \({}^{13}\)P\({}_{1}\) & & \({}^{3}\)PF\({}_{2}\) & \\ \hline Nijm* & -3.34-1.22i & 9.3-1.2i & -3.06-7.23i & -1.7-1.5i & 4.36-0.00i & -3.5-0.0i & – & – \\ Julich & -2.87-0.36i & – & -2.83-7.82i & – & 4.61-0.05i & – & -0.74-1.13i & – \\ Paris 09 & -3.62-0.34i & 3.8-0.8i & -8.78-4.99i & 0.23-1.1i & 5.12-0.02i & -3.4-0.02 & -0.49-0.87i & – \\ KW & -3.36-0.62i & 3.7-1.6i & -8.83-4.45i & 0.25-0.97i & 4.73-0.08i & -3.5-0.1i & -0.46-1.09i & – \\ DR2 & -3.28-0.78i & 4.2-2.3i & -8.53-3.50i & 0.63-1.0i & 5.14-0.09i & -3.4-0.1i & -0.59-0.85i & – \\ \hline T=1 & \({}^{31}\)P\({}_{1}\) & & \({}^{33}\)P\({}_{0}\) & & \({}^{33}\)P\({}_{1}\) & & \({}^{3}\)PF\({}_{2}\) & \\ \hline Nijm* & 0.66-0.18i & 3.3-20i & 2.33-0.92i & -10-0.7i & -2.02-0.70i & 4.7-2.8i & – & – \\ Julich & 0.80-0.34i & – & 2.18-0.19i & – & -2.04-0.55i & – & -0.48-0.34i & – \\ Paris 09 & 1.00-0.77i & -3.7-9.8i & 2.74-0.00i & -5.2-0.01i & 0.28-4.11i & -3.0-2.0i & -0.13-0.21i & – \\ KW & 0.71-0.47i & -8.3-21i & 2.43-0.11i & -5.8-0.43i & -2.17-0.59i & 2.7-3.5i & -0.30-0.45i & – \\ DR2 & 1.02-0.43i & -11-10i & 2.67-0.15i & -5.4-0.53i & -2.02-0.70i & 4.6-3.9i & -0.04-0.53i & – \\ \end{tabular} \end{table} Table 3: P-wave \(\bar{\rm N}\)N low energy parameters (in fm\({}^{3}\)) for the considered optical models: Julich results are taken from Tab 3 of Ref. [12], KW and DR2 from [18], Paris 2009 have been recomputed and are in agreement with [44]. The values of Nijmegen are obtained by extrapolating the phase shifts from Figures 2 and 3. In spite of providing very close elastic, annihilation and charge-exchange integrated cross sections (Figure 1), these models are not phase-equivalent: large and systematic differences have been observed in almost all the partial waves, among them and with respect to the NPWA. In the low energy region one observes some stability in the scattering lengths and volumes (Tables 2 and 3), in particular the "repulsive" or "attractive" character, i.e. the sign of Re[\(a_{L}\)], which is respected by all models in almost all partial waves. For P-waves this stability could be related to the "one-pion exchange dominance", that is, scattering volumes roughly determined by the pion Born term. It is however also manifested in S-waves, like the surprising stability of the low energy parameters of the tensor-coupled \({}^{13}\)SD\({}_{1}\) state. Exceptions are the \({}^{11}\)S\({}_{0}\) and \({}^{33}\)P\({}_{1}\) partial waves, due to the presence of a near-threshold quasi-bound state in the DR2 and Paris models, and the \({}^{13}\)P\({}_{0}\) result of the Julich model, which underestimates the protonium experimental measurements. 
Despite these isolated differences, the isospin- and spin-averaged values for the S-waves are in close agreement among the models themselves as well as with the measured quantities (Table 5). The latter concern mainly the \(\bar{\rm p}\)p measurements and so are unable to disentangle a selected isospin component. The differences worsen with increasing energy, as is already manifest in the dispersion of the effective range values and, more explicitly, in the phase shifts (Figures 2, 3, 4, 5, 8 and 9), magnified in the corresponding effective range functions. These increasing differences cannot be explained by the relativistic kinematics implemented in the Nijmegen Partial Wave Analysis, relating \(p_{Lab}\) to the center of mass momentum, or in the Julich relativistic dynamical equation. Our main conclusion in this work is that if the Nijmegen Partial Wave Analysis must be considered as a reference, as was the case for the Julich model [12], none of the examined optical potentials is compatible with its results, and all would require quite a severe adjustment. This could easily be achieved if one restricts the fit to \(p_{Lab}\leq\)400 MeV/c; the main obstacle lies in the position of the near-threshold quasi-bound states. On the other hand, we have pointed out some anomalous behaviours of the Nijmegen Partial Wave Analysis, also carried over into the Julich potential. They manifest as resonant-like structures of the phase shifts in the \({}^{31}\)S\({}_{0}\) and \({}^{33}\)P\({}_{0}\) states, which take place at the same - relatively high - energy and which are difficult to interpret as having a dynamical origin, in particular in terms of resonant states. Furthermore, they coincide with a near-zero of the S-matrix modulus (or inelasticity parameter) that can introduce a bias in the analysis or a spurious change from one solution to another. The existence of such structures in the phase shifts constitutes one of the major differences with respect to the examined optical models. It would be of the greatest interest to clarify this point or to better understand the underlying dynamics of these states. It would also be interesting to decrease the lowest energy value (\(p_{Lab}\)=100 MeV/c) and eventually incorporate the protonium zero-energy data. This will not provide a magic solution to the observed discrepancies but will clearly facilitate the agreement of the models, in particular above \(p_{Lab}=\)500 MeV/c. To have at our disposal a model-independent extraction of the strong \(\bar{\rm N}\)N phase shifts, the non-trivial part of the interaction, is of paramount importance to the field. In this respect, it would also be desirable to have at our disposal an independent Partial Wave Analysis, as has always been the case in the simpler NN sector. All the examined models roughly reproduce the experimental \(\bar{\rm p}\)p elastic, inelastic and charge-exchange total cross sections, including some differential cross sections. Unfortunately, these observables are computed at relatively high energy, result from a coherent and incoherent sum of many partial waves, and hide the existing differences among the models that have been evidenced here. 
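Before turning to Table 5 (below), a quick consistency check of the isospin averaging of Eq. (26) may be useful. The short sketch below, fed with the Paris 2009 S-wave entries of Table 2, reproduces the corresponding \(a_{\bar{N}N}\) rows of Table 5; the variable names are ours.

```python
# Isospin averaging, Eq. (26), plus the statistical S-wave average of Table 5,
# using the Paris 2009 entries of Table 2 (scattering lengths in fm).
a_11S0, a_31S0 = 1.27 - 1.18j, 0.76 - 0.56j      # 1S0, T=0 / T=1
a_13SD1, a_33SD1 = 1.20 - 0.80j, 0.61 - 0.44j    # 3SD1, T=0 / T=1

a_1S0 = (a_11S0 + a_31S0) / 2      # 1.015-0.870j  -> Table 5: 1.02 - i 0.87
a_3SD1 = (a_13SD1 + a_33SD1) / 2   # 0.905-0.620j  -> Table 5: 0.91 - i 0.62
s_avg = (a_1S0 + 3 * a_3SD1) / 4   # 0.9325-0.6825j -> Table 5: 0.94 - i 0.68
print(a_1S0, a_3SD1, s_avg)
```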
\begin{table} \begin{tabular}{|l l|c|c|c|c|c|} \hline state & & Exp & Paris 2009 & Julich & KW & DR2 \\ \hline \({}^{1}\)S\({}_{0}\) & \(\bar{\rm N}\)N & & 1.02 - i 0.87 & 0.42 - i 0.91 & 0.52 - i 0.99 & 0.65 - i 0.82 \\ & \(\bar{\rm p}\)p & 0.493(92) - i 0.732(146) & 0.92 - i 0.67 & 0.50 - i 0.71 & 0.57 - i 0.77 & 0.68 - i 0.64 \\ \({}^{3}\)SD\({}_{1}\) & \(\bar{\rm N}\)N & & 0.91 - i 0.62 & 0.93 - i 0.92 & 1.01 - i 0.79 & 1.09 - i 0.75 \\ & \(\bar{\rm p}\)p & 0.933(45) - i 0.604(51) & 0.82 - i 0.50 & 0.90 - i 0.74 & 0.92 - i 0.63 & 0.98 - i 0.59 \\ S-averaged & \(\bar{\rm N}\)N & & 0.94 - i 0.68 & 0.80 - i 0.92 & 0.89 - i 0.84 & 0.98 - i 0.77 \\ & \(\bar{\rm p}\)p & 0.823(57) - i 0.636(75) & 0.85 - i 0.54 & 0.80 - i 0.74 & 0.83 - i 0.67 & 0.90 - i 0.60 \\ \hline \({}^{3}\)P\({}_{0}\) & \(\bar{\rm N}\)N & & -3.02 - i 2.50 & -0.32 - i 4.01 & -3.20 - i 2.28 & -2.93 - i 1.83 \\ & \(\bar{\rm p}\)p & -5.68(123) - i 2.45 (49) & -2.74 - i 2.46 & -0.32 - i 3.85 & -2.81 - i 1.99 & -2.53 - i 1.62 \\ \hline \end{tabular} \end{table} Table 5: Isospin averaged (\(a_{\bar{N}N}\)) and \(\bar{\rm p}\)p scattering lengths compared with those obtained from hydrogen atom level shifts and widths, in fm for S and fm\({}^{3}\) for P states. The \(\bar{\rm p}\)p values including Coulomb and \(\Delta m\) corrections are taken from [18] for DR2 and KW, from [19] for Paris and from [12] for the Julich model. The statistical averaged value for the S-wave is defined as (\({}^{1}\)S\({}_{0}\)+3 \({}^{3}\)S\({}_{1}\))/4 and is given with averaged errors. It would be of the highest interest to the community to develop, in parallel with more ambitious projects, an experimental program to measure the most complete set of \(\bar{\rm N}\)N observables at energies \(p_{Lab}<\)200 MeV/c, allowing one to determine the main partial waves (S, P, D) that control the low energy structure calculations. In this energy domain we are not only faced with poor "quality of data", but with a total lack of experimental results. As long as we do not have at our disposal a reliable determination of the \(\bar{\rm N}\)N strong phase shifts for the lowest partial waves, even if limited to a restricted energy domain of a few tens of MeV, any prediction concerning more complex systems, like those of interest in the PUMA project, could be strongly model dependent. ## Acknowledgement We are grateful for the support of the "Espace de Structure et de reactions Nucleaires Theorique" (ESNT) at CEA Saclay for organizing a workshop and welcoming the visit of S. Wycech. This work was started during our visit to the National Center for Nuclear Research in Warsaw. We thank the staff members of the theory group for their warm hospitality. We acknowledge the support of the CNRS/IN2P3 French-Polish COPIN agreement. We are thankful to Benoit Loiseau, Johann Haidenbauer and Ruprecht Machleidt for enlightening discussions and for providing us with their respective potentials. ## Appendix A Potentials in configuration space Although not observable themselves, we believe it could be instructive to compare the potentials of the different models in a given partial wave. The Julich model, being non-local and formulated in momentum space, is not included. The Paris potential is E-dependent and, except for the tensor-coupled states, we selected arbitrary positive and negative values of \(E\). For the NPWA, the inner part corresponds to the square well defining the boundary conditions at \(r\)=1.2 fm. 
Beyond this value it is continued with the one- plus two-pion (N2LO) exchange potentials. As one can see in the following figures, the \(\bar{\rm N}\)N potentials exhibit quite dramatic differences, making it difficult even to assess whether the \(\bar{\rm N}\)N interaction in a given PW is attractive or repulsive. This is in sharp contrast with the NN case. ### \({}^{1}\)S\({}_{0}\) This partial wave is globally attractive in both isospin states for all models, and much more strongly than in the NN case, especially in T=1. However, in the NPWA there is no need for short-range attraction in T=0. The Paris potential presents two peculiar differences with respect to the other potentials: the strong short-range repulsion, claimed to be imposed by phenomenology, and the repulsive peak at 1 fm, which cannot be justified in terms of pion or omega exchanges since these are shared by all models. ### \({}^{3}\)S\({}_{1}\) The S-wave tensor-coupled state also presents some striking differences: the \({}^{13}\)S\({}_{1}\) potentials are strongly attractive wells, with depths going from 500 MeV to several GeV, while the NPWA is limited to 130 MeV. \(V_{{}^{33}S_{1}}\) is also deeply attractive in all models but turns out to be slightly repulsive (\(\approx\) 50 MeV) in the NPWA. The \({}^{3}\)S\({}_{1}\rightarrow^{3}\)D\({}_{1}\) transition potentials have in common that they are all very strong, but they also display sizeable differences. Notice that in the DR and KW models the couplings do not vanish in the limit \(r\to 0\), which spoils the usual \(r^{L+1}\) behaviour of the (reduced) radial wave functions. Figure 15: Real parts of \({}^{1}\)S\({}_{0}\) potentials for both isospins (T). ### \({}^{1}\)P\({}_{1}\) Apart from the centrifugal barrier, this potential is very close to the \({}^{1}\)S\({}_{0}\) one in all models. Their difference is due to the attractive quadratic spin-orbit term (\(Q_{12}\)), present in the Paris and DR2 models but absent in KW. The \({}^{11}\)S\({}_{0}\) potential, which vanishes in the short-range part of the NPWA (Figure 15), also vanishes in the \({}^{11}\)P\({}_{1}\) state, indicating that there is no \(Q_{12}\) contribution. However, the strong (500 MeV) attraction present in the \({}^{31}\)S\({}_{0}\) state has now totally disappeared, indicating rather an unexpected repulsion. ### \({}^{3}\)P\({}_{0}\) All models agree with a huge attraction in the T=0 state, \(\sim\) 1 GeV at \(r\)=1 fm. The NPWA does not require such a large attraction and the fit is done with a potential depth of \(\approx\) 100 MeV in the internal region, although matched with a pion potential of 350 MeV. For T=1, and in view of the repulsive scattering lengths, there is also a general agreement on the repulsive character of the interaction, although the direct inspection of the potentials requires some caution. In the KW model, the \({}^{33}\)P\({}_{0}\) potential is repulsive everywhere, while DR2 has an attractive pocket below \(r\)=0.7 fm which is fully compensated by the centrifugal term. The Paris potential (at E=0) also has a deep attractive pocket (-260 MeV) between \(r\)=0.5 fm and \(r\)=0.7 fm. It is almost totally compensated by the centrifugal barrier, but there remains a shallow attractive pocket (-35 MeV) between 0.56 and 0.63 fm. Under these dynamical conditions there is no room for developing a resonance, especially taking into account the repulsive E-dependent amplitude at positive energies. The examined models are, thus, globally repulsive. 
However, the NPWA requires an overall attractive short-range contribution of \(\approx\) 150 MeV, though matched at r=1.2 fm with a repulsive \(V_{\pi}\). ### \({}^{3}\)P\({}_{1}\) This PW has repulsive scattering volumes in both isospin states. For T=0, the KW and DR potentials are indeed repulsive (once the centrifugal barrier is included), but the NPWA has an attractive pocket and Paris remains strongly attractive (2 GeV at \(r\)=0.5 fm). Figure 16: Real parts of \({}^{3}\)S\({}_{1}\) and \({}^{3}\)D\({}_{1}\) potentials for both isospins (T). Figure 17: \({}^{3}\)S\({}_{1}\rightarrow^{3}\)D\({}_{1}\) transition potentials for both isospins (T). They are real in the DR and KW models. Figure 18: Real parts of \({}^{1}\)P\({}_{1}\) potentials for both isospins (T). ### \({}^{3}\)P\({}_{2}\) The P-wave tensor-coupled state is the one exhibiting the largest differences among the models. The \({}^{3}\)P\({}_{2}\) component is attractive in both isospin states for all models, but large differences in the strength are observed among them. The \({}^{3}\)F\({}_{2}\) is attractive and huge in all models but vanishes in the NPWA. For T=1, the potential is repulsive in Paris 09 but attractive in all the other models: the unique case where the NPWA requires an attraction stronger than in all the other potentials. Concerning the \({}^{3}\)P\({}_{2}\rightarrow^{3}\)F\({}_{2}\) transition potentials, the same remarks as for the \({}^{3}\)SD\({}_{1}\) partial wave apply.
2310.20349
A Low-cost Strategic Monitoring Approach for Scalable and Interpretable Error Detection in Deep Neural Networks
We present a highly compact run-time monitoring approach for deep computer vision networks that extracts selected knowledge from only a few (down to merely two) hidden layers, yet can efficiently detect silent data corruption originating from both hardware memory and input faults. Building on the insight that critical faults typically manifest as peak or bulk shifts in the activation distribution of the affected network layers, we use strategically placed quantile markers to make accurate estimates about the anomaly of the current inference as a whole. Importantly, the detector component itself is kept algorithmically transparent to render the categorization of regular and abnormal behavior interpretable to a human. Our technique achieves up to ~96% precision and ~98% recall of detection. Compared to state-of-the-art anomaly detection techniques, this approach requires minimal compute overhead (as little as 0.3% with respect to non-supervised inference time) and contributes to the explainability of the model.
Florian Geissler, Syed Qutub, Michael Paulitsch, Karthik Pattabiraman
2023-10-31T10:45:55Z
http://arxiv.org/abs/2310.20349v1
A Low-cost Strategic Monitoring Approach for Scalable and Interpretable Error Detection in Deep Neural Networks ###### Abstract We present a highly compact run-time monitoring approach for deep computer vision networks that extracts selected knowledge from only a few (down to merely two) hidden layers, yet can efficiently detect silent data corruption originating from both hardware memory and input faults. Building on the insight that critical faults typically manifest as peak or bulk shifts in the activation distribution of the affected network layers, we use strategically placed quantile markers to make accurate estimates about the anomaly of the current inference as a whole. Importantly, the detector component itself is kept algorithmically transparent to render the categorization of regular and abnormal behavior interpretable to a human. Our technique achieves up to \(\sim\)96% precision and \(\sim\)98% recall of detection. Compared to state-of-the-art anomaly detection techniques, this approach requires minimal compute overhead (as little as 0.3% with respect to non-supervised inference time) and contributes to the explainability of the model. ## 1 Introduction Deep neural networks (DNNs) have reached impressive performance in computer vision problems such as object detection, making them a natural choice for problems like automated driving [1]. However, DNNs are known to be highly vulnerable to faults. For example, even small changes to the input, such as adding a customized noise pattern that remains invisible to the human eye, can stimulate silent prediction errors [8]. Similarly, modifying a single one out of millions of network parameters, in the form of a bit flip, is sufficient to cause severe accuracy drops [14]. Because DNNs are being deployed in safety-critical applications such as autonomous vehicles (AVs), we need efficient mechanisms to detect errors that cause such silent data corruptions (SDC). Beyond the functional part, trust in the safety of the application requires that the error detectors are interpretable by the user, so that he/she can develop an intuitive understanding of the regular and irregular behavior of the network [2]. In an AV, for example, a user who does not trust an automated perception component due to its opaque decision-making will not trust a black-box fault monitor either. Therefore, it is important to build interpretable error detectors for DNNs. The goal of error detection is to supervise a small, yet representative subset of activations - during a given network inference - for comparison with a previously extracted fault-free baseline. This leads to three key challenges: **(1)** How can one compress the relevant information into efficient abstractions? **(2)** How can one efficiently perform the anomaly detection process, for complex patterns? **(3)** Can the anomaly detection decision be understandable to a human, so that insights are gained about the inner workings of the network? Unfortunately, no existing approach satisfactorily addresses all three of the above challenges (Sec. 2). This paper presents a solution using a monitoring architecture that taps into the intermediate activations only at selected strategic points and interprets those signatures in a transparent way, see Fig. 1. Our approach is designed to detect SDC-causing errors due to input corruptions _or_ hardware faults in the underlying platform memory. 
Our main observation that underpins the method is that an SDC occurs when a fault _either_ increases the values of a few activations by a large margin (referred to here as an _activation peak shift_), or the values of many activations each by a small margin (_activation bulk shift_). As Fig. 2 shows, the former is observed typically for platform faults, while the latter is observed for input faults. We then use discrete quantile markers to distill the knowledge about the variation of the activation distribution in a given layer. Conceptually, within a faulty layer, we can expect a large change of only the top quantiles for activation peak shifts, and small changes of the lower and medium quantiles for bulk shifts (Fig. 2). This idea allows us to produce discriminative features for anomaly detection from a small number of monitored elements, with a single detector. Figure 1: Monitoring architecture for quantile shift detection. In summary, we make the following contributions in this paper: * We demonstrate that even for complex object detection networks, we can identify anomalous behavior from quantile shifts in only a few layers. * We identify minimal sets of relevant features and discuss their universality across models. * We efficiently differentiate input and hardware fault classes with a single detector. * We show that the anomaly detection process can be achieved with algorithmically transparent components, such as decision trees. The article is structured as follows: Sec. 2 discusses related work, while Sec. 3 describes our experimental setup. We present our method in Sec. 4, and the results of our evaluation in Sec. 5. Figure 2: **(a)** The feature map appearance is slightly changed with noise and massively affected by the memory FI. **(b)** Noise causes a small shift of multiple quantiles from the affected layer onwards (activation bulk shift). **(c)** The layer with the memory FI shows a large shift of the maximum quantile (activation peak shift), which then propagates to other quantiles. ## 2 Related Work There are three main categories of related work. **Image-level Techniques**: Input faults can be detected from the image itself (i.e., before network inference), in comparison with known fault-free data, resulting for example in specialized blur detectors [15]. However, these techniques do not necessarily relate to SDC in the network, as image-level corruptions may be tolerated by the model. **Activation Patterns**: Methods to extract activation patterns range from activation vectors [5] to feature traces [24, 25]. However, these techniques do not scale well to deeper models as they result in a massive number of monitored features and large overheads. Zhao et al. [26] attempt to reduce the monitoring effort by leveraging only activations from selected layers and compressing them with customized convolution and pooling operations. This leads to a rather complex, non-interpretable detector component, and the selection of monitored layers remains empirical. **Anomaly Detection** techniques establish clusters of regular and anomalous data to efficiently categorize new input. In single-label problems, such as image classification, fault-free clusters are typically formed by samples that belong to the same individual label [12], suggesting that those samples also share common attributes in the space of intermediate activations. 
This technique does not generalize to multi-label problems though, such as object detection, as many objects (in the form of bounding boxes and labels) are represented in the same image. More abstracted clustering rules such as the maximum activation range per layer have been proposed [4, 18]. However, these detectors omit more subtle errors within the activation spectrum, for example resulting from input faults. In other work [24, 25, 26], a secondary neural network is trained to perform the detection process. This comes at the cost that the detector then does not feature algorithmic transparency [2] and hence the anomaly decision is not understandable to a human. The same limitations are found in the context of detector subnetworks that are trained to identify adversarial perturbations [19]. **Summary**: We see that none of the prior techniques satisfactorily address the challenges outlined earlier. We present a new technique to overcome this problem in this paper. ## 3 Experimental Setup and Preliminary Study **Models and Datasets:** We use the three classic object detection networks Yolo(v3), Single Shot Detector (SSD), and RetinaNet from the _open-mmlab_[3] framework, as well as the two standard image classification networks ResNet50 and AlexNet from _torchvision_[21]. Object detection networks are pretrained on Coco [20] and were retrained on Kitti [6], with the following AP50 baseline performances: Yolo+Coco: 55.5%, Yolo+Kitti: 72.8%, SSD+Coco: 52.5%, SSD+Kitti: 66.5%, RetinaNet+Coco: 59.0%, RetinaNet+Kitti: 81.8%. Image classification models were pretrained on ImageNet [17], providing accuracies of 78.0% (ResNet) and 60.0% (AlexNet) for the test setup. The data was split in a ratio of 2:1 for detector training and testing. All models are run in _Pytorch_ with the IEEE-standard FP32 data precision [16]. **Fault Modes:** Input faults are modeled using _torchvision_[21] transform functions and are applied in three different magnitudes to the raw RGB images. We select three perturbation patterns that are popular in computer vision benchmarks such as ImageNet-C [11] for our analysis: i) _Gaussian noise_ due to low lighting conditions or noise in the electronic signal of the sensor device. Low (0.1), medium (1), and high (10) noise is tested. ii) _Gaussian blur_, reflecting for example a camera lens being out of focus. We choose a kernel size of \((5,9)\) and a symmetric, variable standard deviation \((0.3,1,3)\). iii) _Contrast reductions_ simulate poor lighting conditions or foggy weather. We adjust the contrast by a factor between zero (no contrast, gray image) and one (original image). The selected models have different vulnerabilities to input faults, for example, the two image classification models ResNet and AlexNet are highly sensitive to contrast adjustments, but are rather robust to noise and blur faults. For the remaining models, the trend is reversed. Hardware faults are modeled as single bit flips in the underlying memory and injected using _PytorchAlfi_[9]. Such flips can occur randomly either in the buffers holding temporary activation values (_neuron_ fault), or in dedicated memory which holds the parameters of the network (_weight_ faults). We group both neuron and weight faults into a single class _memory_ fault. This approach is in line with previous work [23, 24, 25, 13, 7, 18, 4]. We target all convolutional layers. 
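To make the memory fault model concrete, the following sketch flips a single bit in the IEEE-754 FP32 encoding of a value, as is done for neuron and weight faults. It is our own illustration, not part of the _PytorchAlfi_ API, and the example values are ours:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit (0 = mantissa LSB ... 30 = highest exponent bit, 31 = sign)
    in the IEEE-754 FP32 encoding of x."""
    (i,) = struct.unpack("I", struct.pack("f", x))
    (y,) = struct.unpack("f", struct.pack("I", i ^ (1 << bit)))
    return y

w = 0.25
print(flip_bit(w, 30))  # 8.507059e+37: a high exponent bit causes a drastic change
print(flip_bit(w, 3))   # 0.2500002384: a low mantissa bit is practically benign
```

The contrast between the two printed values illustrates why SDC is dominated by flips in the highest exponential bits, motivating the accelerated fault injection into exactly those bits.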
**Fault Metrics:** First, detectable uncorrectable errors (DUE) can occur when invalid symbols such as _NaN_ or _Inf_ are found among the activations at inference time. During fault injection, we observe DUE events only for memory faults, with rates \(<\)1% across all models. DUEs can be generated also at the detector stage, in the process of adding up feature map sums that contain platform errors. The rates for such events vary between 0.2% and 5.1% with our method. While DUE errors may affect the system's availability, they are considered less critical as they are readily detectable and there is no need for further activation monitoring [7]. In this article, we are concerned therefore only with silent data corruption (SDC), events that lead to a silent alteration of the predicted outcome. For image classification networks, this is represented by a change in the top-1 class prediction. For object detection systems, we use an asymmetric version of the IVMOD metric [23] as SDC criterion, i.e., an image-wise increment in the FP or FN object numbers is counted as SDC. Each experiment was done with a subset of 100 images of the test data set, running 100 random FIs on each image individually. For hardware faults, SDC rates are typically low (\(\sim\)1\(-\)3%) since drastic changes will result only from bit flips in the high exponential bits of the FP32 data type [18, 7]. Therefore, an additional 500 epochs with accelerated FI only into the three highest exponential bits are performed for both flavors of memory faults. Overall, the faulty and fault-free data is found to be balanced at a ratio of about 2:1. ## 4 Model **Notational Remarks:** We use the range index convention, i.e., a vector is given as \(\mathbf{x}=(x_{i})=(x^{i})\), a matrix reads \(\mathbf{A}=(A_{ij})\), and similarly for higher-dimensional tensors. **Monitoring Approach:** Let us denote a four-dimensional activation tensor that represents an intermediate state of a convolutional neural network as \(\mathbf{T}=(T_{n,c,h,w})\in\mathds{R}^{N\times C\times H\times W}\), where \(N\) is the sample number, \(C\) the number of channels, \(H\) the height, and \(W\) the width. We list \(n\) as running global sample index, where samples may be further grouped in batches. An output tensor of a specific layer \(l\in[1,\dots L]\) shall be given as \(\mathbf{T}^{l}\), with \(L\) being the total number of monitored layers of the model. Subsets of a tensor with fixed \(n,c\) are called feature maps. Our monitoring approach first performs the summation of individual feature maps and subsequently calculates quantile values over the remaining kernels, see Fig. 1, \[\left(F_{n,c}\right)^{l} =\sum_{h,w}(T_{n,c,h,w})^{l}, \tag{1}\] \[\left(q_{n}\right)_{p}^{l} =\left(Q_{p}((F_{n,c})^{l})_{n}\right). \tag{2}\] Here \(Q_{p}\) is the quantile function for the percentile \(p\) which acts on the \(n\)-th row of \((F_{n,c})^{l}\). In other words, \(Q_{p}\) reduces the kernel dimensions \(c\) to a set of discrete values where we use the 10-percentiles, i.e., \(p\in[0,10,20,30,\dots,90,100]\). The result is a quantile value set, \(q_{p}\), for a given image index \(n\) and layer \(l\). Note that both the summation and the quantile operations (and hence the detector) are invariant under input perturbations such as image rotations. **Supervised Layers:** We intercept the output activations of all convolutional layers, as those layers provide the vast majority of operations in the selected computer vision DNNs. 
Yet, the same technique can be applied to any neural network layer. **Reference Bound Extraction:** Applied to a separate data set \(D_{\text{bnds}}\), the above technique is used pre-runtime to extract reference bounds which represent the minimum and maximum quantile values of the feature-map sums during fault-free operation: \[\begin{split} q_{p,\min}^{l}&=\min_{n\in D_{\text{bnds}}}\left((q_{n})_{p}^{l}\right),\\ q_{p,\max}^{l}&=\max_{n\in D_{\text{bnds}}}\left((q_{n})_{p}^{l}\right).\end{split} \tag{3}\] For \(D_{\text{bnds}}\), we randomly select 20% of the training data [4]. **Anomaly Feature Extraction:** For a given input during runtime, Eqs. (1) to (2) are used to obtain the quantile markers of the current activation distribution. Those are further processed into a so-called _anomaly feature vector_ which quantifies the similarity of the observed patterns with respect to the baseline references of Eq. 3, \[q_{p}^{l}\rightarrow\frac{1}{2}\left(f_{\text{norm}}(q_{p}^{l},q_{p,\min}^{l},q_{p,\max}^{l})+1\right). \tag{4}\] Here, \(f_{\rm norm}\) normalizes the monitored quantiles to a range of \((-1,1)\) by applying element-wise (\(\epsilon=10^{-8}\) is a regularization offset) \[f_{\rm norm}(a,a_{\rm min},a_{\rm max})=\begin{cases}\tanh\left(\frac{a-a_{\rm max}}{|a_{\rm max}|+\epsilon}\right)&\text{if $a\geq a_{\rm min}$},\\ \tanh\left(\frac{a_{\rm min}-a}{|a_{\rm min}|+\epsilon}\right)&\text{if $a<a_{\rm min}$}.\end{cases} \tag{5}\] Intuitively, the result of Eq. 5 will be positive if \(a\) is outside the defined minimum (\(a_{\rm min}\)) and maximum (\(a_{\rm max}\)) bounds (approaching \(+1\) for very large positive or negative values). The function is negative if \(a\) is within the bounds (lowest when barely above the minimum), and will become zero when \(a\) is of the order of the thresholds. In Eq. 4, a shift brings the features to a range of \((0,1)\) to facilitate the interpretation of feature importance. Finally, all extracted features are unrolled into a single anomaly feature vector \(\mathbf{q}=((q^{l})_{p})=[q^{1}_{0},q^{2}_{0},\ldots q^{L}_{0},q^{1}_{10},\ldots q^{L}_{100}]\), which will be the input to the anomaly detector component. **Anomaly Detector:** We use a decision tree [10] approach to train an interpretable classifier, leveraging the _sklearn_ package [22]. The class weights are inversely proportional to the number of samples in the respective class to compensate for imbalances in the training data. As a measure of the split quality of a decision node, we use the Gini index [10]. To avoid overfitting of the decision tree, we perform cost-complexity pruning [22] with a factor varying between \(1\times 10^{-5}\) and \(2\times 10^{-5}\), which is optimized for the respective model. To investigate fault class identification, we study three different detector modes with varying levels of fault class abstraction and quantify each mode \(x\in\{cls,cat,sdc\}\) by precision, \(\rm P_{x}=TP_{x}/(TP_{x}+FP_{x})\), and recall, \(\rm R_{x}=TP_{x}/(TP_{x}+FN_{x})\). Here we abbreviate true positives (TP), false positives (FP), and false negatives (FN). In the class mode (\(cls\)), we consider only those detections as true positives where the predicted and actual fault modes (see Sec. 3) coincide exactly. Cases where SDC is detected correctly but the fault class does not match will be counted as either FP or FN in this setting. 
In the category mode (\(cat\)), those SDC detections are considered true positives where the predicted and actual fault class fall into the same category of either _memory fault_ or _input fault_\(=\{noise,blur,contrast\}\). That means fault class confusions within a category will not reduce the performance in this mode. The final precision and recall values for the class and category mode are given as the average over all classes or categories, respectively. Finally, in the mode \(sdc\), we consider all cases as true positives where SDC was correctly identified, regardless of the specific fault class. This reflects a situation where one is only interested in the presence of SDC overall, rather than the specific fault class. ## 5 Results ### Detector Performance **Error Detection:** Tab. 1 shows the precision, recall, and decision tree complexity for the studied detectors and models. When all extracted features are leveraged by the decision tree classifier (referred to as the _full_ model), the average class-wise detection precision varies between 93.9% (ResNet) and 97.5% (RetinaNet+Kitti), while the recall is between 97.1% (RetinaNet+Coco) and 99.1% (Yolo+Kitti). If only the fault category needs to be detected correctly, we find \(\text{P}_{\text{cat}}>95\%\) and \(\text{R}_{\text{cat}}>94\%\). Correct decisions about the presence of SDC only are made with \(\text{P}_{\text{sdc}}>96\%\) and \(\text{R}_{\text{sdc}}\geq 98\%\). Across models, we observe (not shown in Tab. 1) that the most common confusions are false positive noise detections, leading to a reduced precision in the individual _noise_ class (worst case 75.8% for ResNet). The recall is most affected by memory faults (lowest individual class recall 90.6% for RetinaNet+Coco). The detection rates of the full model in Tab. 1 outperform the ones reported in the comparable approach of Schorn et al. 
[25] (using feature map tracing) \begin{table} \begin{tabular}{c|c c c|c c c|c} \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{**P(\%)**} & \multicolumn{3}{c|}{**R(\%)**} & \multicolumn{2}{c|}{**DT**} \\ \cline{2-9} & \(\text{P}_{\text{cls}}\) & \(\text{P}_{\text{cat}}\) & \(\text{P}_{\text{sdc}}\) & \(\text{R}_{\text{cls}}\) & \(\text{R}_{\text{cat}}\) & \(\text{R}_{\text{sdc}}\) & \(\text{N}_{\text{ft}}/\text{N}_{\text{l}}\) \\ \hline \hline **Yolo+Coco** & & & & & & & \\ full & 95.8 & 96.4 & 96.1 & 98.2 & 98.6 & 98.4 & 825/75 \\ red (avg) & 93.3 & 94.6 & 93.4 & 97.4 & 96.3 & 96.7 & 2/2 \\ \hline **Yolo+Kitti** & & & & & & & \\ full & 97.3 & 97.5 & 97.4 & **99.1** & 99.3 & 99.2 & 825/75 \\ red (avg) & 92.6 & 92.1 & 92.0 & 97.3 & 96.4 & 96.8 & 3/2 \\ \hline **SSD+Coco** & & & & & & & \\ full & 96.6 & 97.2 & 96.6 & 98.2 & 98.5 & 98.3 & 429/39 \\ red (avg) & 95.2 & 96.3 & 94.9 & 96.5 & 94.5 & 95.9 & 3/3 \\ \hline **SSD+Kitti** & & & & & & & \\ full & 96.0 & 97.1 & 96.2 & 98.4 & 98.7 & 98.6 & 429/39 \\ red (avg) & 92.8 & 94.6 & 92.1 & 98.0 & 97.7 & 98.2 & 2/2 \\ \hline **RetinaNet+Coco** & & & & & & & \\ full & 96.6 & 95.7 & 96.9 & 97.1 & 94.9 & 98.0 & 781/71 \\ red (avg) & **96.6** & 96.6 & 96.5 & 97.0 & 94.6 & 98.2 & 2/2 \\ \hline **RetinaNet+Kitti** & & & & & & & \\ full & **97.5** & 97.3 & 97.5 & 98.6 & 98.2 & 98.7 & 781/71 \\ red (avg) & 96.2 & 96.6 & 95.9 & **98.6** & 97.8 & 98.9 & 2/2 \\ \hline **ResNet+Imagenet** & & & & & & & \\ full & 93.9 & **98.3** & **97.6** & 98.1 & **99.6** & **99.4** & 583/53 \\ red (avg) & 92.1 & **97.6** & **96.7** & 98.3 & **99.6** & **99.5** & 3/3 \\ \hline **AlexNet+Imagenet** & & & & & & & & \\ full & 96.1 & **98.3** & 97.3 & 98.4 & 99.2 & 99.0 & 55/5 \\ red (avg) & 93.2 & 96.8 & 95.0 & 98.0 & 99.0 & 98.8 & 4/3 \\ \hline \end{tabular} \end{table} Table 1: Precision (\(P\)), Recall (\(R\)), and decision tree (DT) complexity - given as the number of used features (\(\text{N}_{\text{ft}}\)) and monitored layers (\(\text{N}_{\text{l}}\)) - for different setups. Every detector was retrained 10 times with different random seeds and the averages across all runs are given. Errors are shown when relevant to the given rounding. We list both the classifiers making use of all extracted quantiles (_full_) and the averaged reduced (_red_) detector models, where guided feature reduction was applied, see Fig. 3. Best-in-class detectors are highlighted in each column. and the blur detection in Huang et al. [15] in terms of precision and recall. When using alternative metrics (not shown in Tab. 1) for comparison with other detector designs, we find that our method achieves class-wise misclassification rates ranging between 0.7% and 2.0%, depending on the model, which is on par with the results for example in Cheng et al. [5]. Similarly, the calculated class-wise true negative rates vary between 99.6% and 99.8%, reaching or exceeding the classifier performance in Zhao et al. [26]. Note that all mentioned references are limited to image classification networks. **Feature Reduction:** The number of monitored features can be drastically reduced without significantly affecting the detection performance. This means that many quantiles represent similar information and further distillation can be applied. For feature reduction, we follow two steps: First, all quantile features of the full model are ranked according to their Gini importance [22] in the decision tree. 
Then, we retrain the classifier with successively more features, starting from the most important one only, then the two most important ones, etc. A reduced model is accepted as efficient if it recovers at least 95% of both the precision and recall performance of the original model with all features. Fig. 3 shows the results of the feature reduction. Inspecting performance trends from larger to smaller feature numbers, we observe that the detection rate stagnates over most of the elimination process, before dropping abruptly when the number of used features is reduced beyond a limit. On average, the number of monitored features and layers that are required to maintain close-to-original performance (as defined above) are as few as 2 to 4 and 2 to 3, respectively. For a model like Yolo, this means that only 2 out of the 75 convolution layers have to be supervised. The average characteristics of the resulting detector models are shown in Tab. 1 as the reduced (_red_) model. ### Minimal Monitoring Features **Minimal Feature Search:** The feature reduction process in Sec. 5.1 demonstrates that only a few strategic monitoring markers are needed to construct an efficient detector model. Figure 3: Precision and recall of class-wise SDC detection when reducing the number of monitored features (average of 10 independent runs). In this section, we elaborate further to what extent the model can be compressed, and which features are the most relevant. We apply the following strategy, starting from a full classifier model using all quantile features: 1) Apply the feature reduction technique described in Sec. 5.1 to identify minimal monitoring features that maintain at least 95% of the original precision and recall. This combination of features is added to a pool of minimal model candidates. 2) A new instance of the full model is initiated and all feature candidates from the pool are eliminated. Return to the first step to find alternative candidates until a certain search depth (we choose 24) is exhausted. **Universal Trends:** The identified minimal feature combinations are shown in Fig. 4. We find that just 2 features from 2 different layers are sufficient to constitute an efficient error detector for all studied models except for AlexNet (4 features from 3 layers). Almost universally, one of the monitored layers needs to be among the very last layers of the deep neural network. Since memory faults are injected randomly across the network, errors in the last layers would go unnoticed otherwise. Only for the SSD models, it turns out that most of the SDC faults occur in earlier layers, so that a supervision of the last layers is less crucial to achieve a similar statistical detection performance. We observe that it is favorable to supervise a higher percentile (e.g., \(q_{100}\)) in the later layers, especially in more shallow networks (AlexNet and SSD). This is because in shallow networks, peak shifts have a shorter propagation path and hence it is more important to intercept faults directly. This can only be achieved by the highest percentiles. In models with ReLU activation functions (all except Yolo here), the minimum quantile does not serve as a meaningful peak shift monitor, as negative activations are clipped automatically. A second monitoring marker should be set in the first half of the network layer stack. This helps to identify input faults (which are interceptable from the very first layer) and discriminate them from memory faults. Either a low or high percentile can be chosen for supervision. 
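For concreteness, a minimal sketch of the monitoring pipeline of Sec. 4, reduced to the quantile markers and the normalization of Eqs. (1)-(5), could look as follows. The function names are ours, and a reduced detector would slice out only the few percentile/layer combinations identified above:

```python
import torch

PERCENTILES = torch.tensor([p / 100 for p in range(0, 101, 10)])  # 10-percentiles

def quantile_markers(T: torch.Tensor) -> torch.Tensor:
    """Eqs. (1)-(2): feature-map sums per image, reduced to quantile markers.
    T: activations of one monitored layer, shape (N, C, H, W) -> (N, 11)."""
    F = T.sum(dim=(2, 3))                            # feature-map sums, (N, C)
    return torch.quantile(F, PERCENTILES, dim=1).T   # quantiles over channels

def anomaly_features(q, q_min, q_max, eps=1e-8):
    """Eqs. (4)-(5): compare markers against fault-free min/max bounds,
    mapping the result to the range (0, 1)."""
    above = torch.tanh((q - q_max) / (q_max.abs() + eps))
    below = torch.tanh((q_min - q) / (q_min.abs() + eps))
    return 0.5 * (torch.where(q >= q_min, above, below) + 1.0)
```

In practice, one would register forward hooks on the selected convolutional layers and concatenate the resulting features into the anomaly feature vector fed to the detector.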
**Explainability:** Given the above generalizable trends and the fully transparent nature of the classifier, we can make statements about the inner workings of the DNN that correlate a given input with an anomalous or regular outcome. Those statements can be interpreted intuitively by a human as a proxy of a decision, and hence qualify as an explanation [2]. ### Overhead We measure the average inference time per image when running the supervised model on random input, using the _Torch profiler_[21]. The profiled overall self compute time in Fig. 5 is shared between CPU and GPU. Compared to the feature map tracing method of Schorn et al. [24, 25], the quantile operation introduces additional compute, but at the same time saves the time of storing large tensors, due to the compression of many feature sums into only a few quantiles. Between these two opposing trends, full quantile monitoring turns out to be _faster_ than feature map tracing for all the studied models except for the shallow AlexNet, as shown in Fig. 5. If only selected layers are monitored to create a reduced classifier, the overhead can be decreased significantly. We find that the impact of minimal quantile monitoring on the overall inference time is between +0.3% and +1.6% for all studied object detection DNNs. For the image classification networks, on the other hand, quantile monitoring imposes a more significant overhead of +10.7% (ResNet) and +53.8% (AlexNet). This is because those networks have a much smaller number of parameters, such that the relative impact of quantile extraction with respect to the total number of operations is higher. Across all models, minimal quantile monitoring is \(>10\%\) faster than feature map tracing. In absolute numbers, the respective saving in inference time can be up to \(\sim\)10 ms, which is a significant improvement for applications operating in real time, for example object detection in a self-driving vehicle. Figure 4: Minimal combinations of features as identified by the search process in Sec. 5.2. All combinations in (a)-(e) constitute a reduced classifier model with at least 95% of the performance of the respective full model. Inset numbers designate the percentile numbers (or combinations thereof if multiple combinations are equally valid). ### Comparison with Other Detector Approaches As an alternative to a decision tree, we can deploy a linear machine learning model for error detection (similar to [24]). We study the feasibility of doing so in this section. For this setup, we select Yolo+Kitti to train a classifier for 1000 epochs using the Adam optimizer and cross entropy loss. A batch size of 100 and learning rates optimized between \(1\times 10^{-4}\) and \(5\times 10^{-3}\) were chosen. In the simplest form, with a multi-layer perceptron, the algorithmic transparency is preserved and we find \(\mathrm{P_{cls}}=86.0\%\) and \(\mathrm{R_{cls}}=95.7\%\). If more hidden linear layers are added, higher detection rates can be achieved at the cost of explainability. For example, including one extra hidden layer with 64 neurons [24], we find a performance of \(\mathrm{P_{cls}}=88.9\%\) and \(\mathrm{R_{cls}}=96.3\%\); with three such extra layers we obtain \(\mathrm{P_{cls}}=91.7\%\) and \(\mathrm{R_{cls}}=95.1\%\). Compared to decision trees, however, this strategy suffers from more complex hyperparameter tuning and large training times. Therefore, decision trees are a better fit for our use case. 
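A corresponding detector-training sketch with _sklearn_, using the decision-tree settings described in Sec. 4 on dummy stand-in data, is shown below; the data shapes and label names are illustrative assumptions, not artifacts of the paper:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((1200, 22))        # e.g. 11 percentiles x 2 monitored layers
y = rng.choice(["none", "noise", "blur", "contrast", "memory"], size=1200)
X_train, X_test, y_train = X[:800], X[800:], y[:800]

detector = DecisionTreeClassifier(
    criterion="gini",             # split quality measure, Sec. 4
    class_weight="balanced",      # inversely proportional to class frequencies
    ccp_alpha=1.5e-5,             # cost-complexity pruning factor, tuned per model
    random_state=0,
)
detector.fit(X_train, y_train)
print(detector.predict(X_test[:5]))
print(detector.feature_importances_.argsort()[::-1][:4])  # Gini ranking, Sec. 5.1
```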
## 6 Summary and Future Work

In this paper, we show that critical silent data corruptions in computer vision DNNs (originating either from hardware memory faults or input corruptions) can be efficiently detected by monitoring the quantile shifts of the activation distributions in specific layers. In most studied cases, it is sufficient to supervise two layers with one quantile marker each to achieve high error detection rates up to \(\sim\)96% precision and \(\sim\)98% recall. We also show that the strategic monitoring location can be associated with the concept of intercepting bulk and peak activation shifts, which gives a _novel, unifying perspective on the dependability of DNNs_. Due to the large degree of information compression in this approach, the compute overhead is in most cases only between 0.3% and 1.6% of the original inference time, and outperforms the comparable state of the art. In addition, we show that the method contributes to the model's explainability, as the error detection decision is interpretable and transparent. For future work, we can further guide the search for optimized minimal feature combinations, for example, by taking into account specifics of the model architecture.

Figure 5: Average inference time per image accumulated over CPU and GPU. We compare the original inference, reduced and full quantile monitoring, and feature map tracing (method of [24, 25]). In the setup, we run 100 random images with a batch size of 10 (with GPU enabled) and repeat 100 independent runs. System specifications: Intel® Core™ i9-12900K, Nvidia GeForce RTX 3090.

**Acknowledgement:** We thank Neslihan Kose Cihangir and Yang Peng for helpful discussions. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 956123. This work was partially funded by the Federal Ministry for Economic Affairs and Climate Action of Germany, as part of the research project Safe-Wahr (Grant Number: 19A21026C), and the Natural Sciences and Engineering Research Council of Canada (NSERC).
2309.05082
New Multivariate Dimension Polynomials of Inversive Difference Field Extensions
We introduce a new type of reduction of inversive difference polynomials that is associated with a partition of the basic set of automorphisms $\sigma$ and uses a generalization of the concept of effective order of a difference polynomial. Then we develop the corresponding method of characteristic sets and apply it to prove the existence and obtain a method of computation of multivariate dimension polynomials of a new type that describe the transcendence degrees of intermediate fields of finitely generated inversive difference field extensions obtained by adjoining transforms of the generators whose orders with respect to the components of the partition of $\sigma$ are bounded by two sequences of natural numbers. We show that such dimension polynomials carry essentially more invariants (that is, characteristics of the extension that do not depend on the set of its difference generators) than standard (univariate) difference dimension polynomials. We also show how the obtained results can be applied to the equivalence problem for systems of algebraic difference equations.
Alexander Levin
2023-09-10T17:04:28Z
http://arxiv.org/abs/2309.05082v1
###### Abstract

We introduce a new type of reduction of inversive difference polynomials that is associated with a partition of the basic set of automorphisms \(\sigma\) and uses a generalization of the concept of effective order of a difference polynomial. Then we develop the corresponding method of characteristic sets and apply it to prove the existence and obtain a method of computation of multivariate dimension polynomials of a new type that describe the transcendence degrees of intermediate fields of finitely generated inversive difference field extensions obtained by adjoining transforms of the generators whose orders with respect to the components of the partition of \(\sigma\) are bounded by two sequences of natural numbers. We show that such dimension polynomials carry essentially more invariants (that is, characteristics of the extension that do not depend on the set of its difference generators) than standard (univariate) difference dimension polynomials. We also show how the obtained results can be applied to the equivalence problem for systems of algebraic difference equations.

**New Multivariate Dimension Polynomials of Inversive Difference Field Extensions**

Alexander Levin
The Catholic University of America
Washington, D. C. 20064, USA
[email protected]
[https://sites.google.com/a/cua.edu/levin](https://sites.google.com/a/cua.edu/levin)

**Key words:** difference polynomial, dimension polynomial, reduction, effective order, characteristic set.

## 1 Introduction

This paper is dedicated to the memory of my dear teacher, Alexander Vasilyevich Mikhalev, who has made profound contributions to several areas of mathematics, especially to various branches of algebra including ring theory, homological algebra, differential and difference algebra, computer algebra, algebraic \(K\)-theory, topological algebra, and coding theory. In his works on differential and difference algebraic structures [5], [6], [28], [18]-[24], [25]-[28] and in some other papers A. V. Mikhalev obtained a number of fundamental results on differential and difference rings and modules, characteristic sets of differential and difference polynomials, and computational analysis of systems of algebraic differential and difference equations. He has also presented excellent expositions of ideas and methods of differential and difference algebra in his books [6], [28] and papers [25] and [27]. Of special note is the paper [26] where A. V. Mikhalev and E. V. Pankratiev discovered a very interesting relationship between E. Kolchin's differential dimension polynomials and A. Einstein's concept of strength of a system of algebraic differential equations. Actually, the authors showed that the strength of such a system in the sense of A. Einstein is expressed by a certain differential dimension polynomial associated with the system. (The concept of a differential dimension polynomial was introduced in [3]; many properties of such polynomials can be found in [4].) Furthermore, they showed how the algebraic technique for computing differential dimension polynomials can be applied to the computation of the strength of fundamental systems of differential equations of mathematical physics. A similar interpretation of difference dimension polynomials and examples of computation of the strength of systems of algebraic difference equations can be found in [6, Section 6.4], [10], and [12, Section 7.7].
In addition to the fact that a difference dimension polynomial associated with a system of algebraic difference equations expresses the strength of such a system in the sense of A. Einstein (the significant role of this characteristic in the theory of equations of mathematical physics is described in [2]), the important role of difference dimension polynomials is determined by at least three more factors. First, a difference dimension polynomial of a finitely generated difference field extension (or of a system of algebraic difference equations that defines such an extension) carries certain invariants, i.e., characteristics of the extension that do not change when we switch to another system of difference generators (with the corresponding change of the defining equations), see, for example, [6, Chapter 6] and [12, Chapter 4]. In this connection, one should mention the results on multivariate difference dimension polynomials associated with partitions of the basic set of translations, see [10], [11], [14], and [12, Chapter 3]. It turned out that they carry more such invariants than their univariate counterparts. (See also [16] where the results on multivariate difference dimension polynomials are generalized to the difference-differential case.) Second, properties of difference dimension polynomials associated with prime difference polynomial ideals provide a powerful tool in the dimension theory of difference algebras, see [6, Chapter 7], [12, Section 4.6], and [15]. Finally, the results on difference dimension polynomials can be naturally extended to algebraic and differential algebraic structures with a finitely generated commutative group action, see [13], [18], and [20]. In this paper we introduce a reduction of inversive difference polynomials associated with a fixed partition of the set of basic translations. This reduction takes into account the effective orders of inversive difference polynomials with respect to the elements of the partition (we generalize the concept of the effective order of an ordinary difference polynomial defined in [1, Chapter 2, Section 4]). Note that the idea of using a generalized effective order for (non-inversive) difference polynomials to obtain bivariate difference dimension polynomials of a new type was first explored in [17]. We consider a new type of characteristic sets that are associated with the introduced reduction and use their properties to prove the existence of a multivariate dimension polynomial of a finitely generated inversive difference field extension that describes the transcendence degrees of intermediate fields obtained by adjoining transforms of the generators whose orders with respect to the elements of the given partition lie between two given natural numbers. This dimension polynomial is a polynomial in \(2p\) variables where \(p\) is the number of subsets in the partition of the basic set of translations. We determine invariants of such polynomials, that is, numerical characteristics of the extension that are carried by any of its dimension polynomials and that do not depend on the system of difference generators the polynomial is associated with. Furthermore, we show that the introduced multivariate dimension polynomials carry essentially more invariants of the corresponding inversive difference field extensions than the univariate dimension polynomials of inversive difference modules and field extensions introduced in [9].
Note that while the study of difference algebraic structures deals with their endomorphisms and power products of basic translations with nonnegative exponents, inversive difference rings, fields and modules are considered together with the free commutative group generated by a set of basic automorphisms. Therefore, while the dimension theory of difference rings and modules is close to its differential counterpart, the study of inversive difference algebraic structures (including the study of dimensional characteristics of such structures) encounters many problems caused by the fact that one has to consider negative powers of basic translations.

## 2 Preliminaries

Throughout the paper, \(\mathbb{N}\), \(\mathbb{Z}\), \(\mathbb{Z}_{\leq 0}\), and \(\mathbb{Q}\) denote the sets of all non-negative integers, integers, non-positive integers, and rational numbers, respectively. If \(S\) is a finite set, then \(\operatorname{Card}S\) denotes the number of elements of \(S\). For any positive integer \(m\), \(\leq_{P}\) will denote the product order on \(\mathbb{N}^{m}\), that is, a partial order such that \((a_{1},\ldots,a_{m})\leq_{P}(a_{1}^{\prime},\ldots,a_{m}^{\prime})\) if and only if \(a_{i}\leq a_{i}^{\prime}\) for \(i=1,\ldots,m\). The lexicographic order will be denoted by \(\leq_{\mathrm{lex}}\).

By a ring we always mean an associative ring with unity. Every ring homomorphism is unitary (maps unity to unity), every subring of a ring contains the unity of the ring, and every algebra over a commutative ring is unitary. Every field considered in this paper is supposed to have zero characteristic. \(\mathbb{Q}[t_{1},\ldots,t_{p}]\) will denote the ring of polynomials in variables \(t_{1},\ldots,t_{p}\) over \(\mathbb{Q}\).

By a _difference ring_ we mean a commutative ring \(R\) considered together with a finite set \(\sigma=\{\alpha_{1},\ldots,\alpha_{m}\}\) of mutually commuting injective endomorphisms of \(R\) called _translations_. The set \(\sigma\) is called the _basic set_ of the difference ring \(R\), which is also called a \(\sigma\)-ring. If \(R\) is a field, it is called a _difference field_ or a \(\sigma\)-field. (We will often use the prefix \(\sigma\)- instead of the adjective "difference".) If all translations of \(R\) are automorphisms, we set \(\sigma^{*}=\{\alpha_{1},\ldots,\alpha_{m},\alpha_{1}^{-1},\ldots,\alpha_{m}^{-1}\}\) and say that \(R\) is an _inversive difference ring_ or a \(\sigma^{*}\)-ring. If a difference (respectively, inversive difference) ring \(R\) is a field, it is called a _difference_ (or \(\sigma\)-) field (respectively, an _inversive difference_ (or \(\sigma^{*}\)-) field).

If \(R\) is an inversive difference ring with a basic set \(\sigma=\{\alpha_{1},\ldots,\alpha_{m}\}\), then \(\Gamma\) will denote the free commutative group of all power products of the form \(\gamma=\alpha_{1}^{k_{1}}\ldots\alpha_{m}^{k_{m}}\) where \(k_{i}\in\mathbb{Z}\) (\(1\leq i\leq m\)). The _order_ of such an element \(\gamma\) is defined as \(\operatorname{ord}\gamma=\sum_{i=1}^{m}|k_{i}|\); furthermore, for every \(r\in\mathbb{N}\), we set \(\Gamma(r)=\{\gamma\in\Gamma\,|\,\operatorname{ord}\gamma\leq r\}\). A subring (ideal) \(R_{0}\) of a \(\sigma\)-ring \(R\) is said to be a difference (or \(\sigma\)-) subring of \(R\) (respectively, a difference (or \(\sigma\)-) ideal of \(R\)) if \(R_{0}\) is closed with respect to the action of any translation \(\alpha_{i}\in\sigma\).
A \(\sigma\)-ideal \(I\) of a \(\sigma\)-ring \(R\) is called _reflexive_ if the inclusion \(\alpha_{i}(a)\in I\) (\(a\in R\), \(\alpha_{i}\in\sigma\)) implies the inclusion \(a\in I\). (If \(R\) is an inversive difference (\(\sigma^{*}\)-) ring, this property means that \(I\) is closed with respect to every automorphism from the set \(\sigma^{*}\).) If a prime ideal \(P\) of \(R\) is closed with respect to the action of any \(\alpha_{i}\in\sigma\), it is called a _prime difference_ (or \(\sigma\)-) _ideal_ of \(R\). If \(R\) is an inversive difference ring and a prime \(\sigma\)-ideal is reflexive, it is referred to as a prime \(\sigma^{*}\)-ideal of \(R\).

If \(R\) is a \(\sigma\)-ring and \(S\subseteq R\), then the intersection \(I\) of all \(\sigma\)-ideals of \(R\) containing the set \(S\) is the smallest \(\sigma\)-ideal of \(R\) containing \(S\); it is denoted by \([S]\). If the set \(S\) is finite, \(S=\{a_{1},\ldots,a_{r}\}\), we say that the \(\sigma\)-ideal \(I\) is finitely generated (we write this as \(I=[a_{1},\ldots,a_{r}]\)) and call \(a_{1},\ldots,a_{r}\) difference (or \(\sigma\)-) generators of \(I\). If the \(\sigma\)-ring \(R\) is inversive, then the smallest \(\sigma^{*}\)-ideal of \(R\) containing a subset \(S\) of \(R\) is denoted by \([S]^{*}\). Elements of the set \(S\) are called \(\sigma^{*}\)-generators of this ideal; if \(S=\{a_{1},\ldots,a_{r}\}\), we write \([a_{1},\ldots,a_{r}]^{*}\), say that the \(\sigma^{*}\)-ideal is finitely generated, and call \(a_{1},\ldots,a_{r}\) its \(\sigma^{*}\)-generators. Clearly, \([S]^{*}\) is generated, as an ideal, by the set \(\{\gamma(a)\,|\,a\in S,\,\gamma\in\Gamma\}\). (In what follows we will often write \(\gamma a\) instead of \(\gamma(a)\).)

If \(R\) is a \(\sigma^{*}\)-ring, then an expression of the form \(\sum_{\gamma\in\Gamma}a_{\gamma}\gamma\), where \(a_{\gamma}\in R\) for any \(\gamma\in\Gamma\) and only finitely many elements \(a_{\gamma}\) are different from \(0\), is called a \(\sigma^{*}\)_-operator_ over \(R\). It is an endomorphism of the additive group of \(R\); if \(C=\sum_{\gamma\in\Gamma}a_{\gamma}\gamma\) and \(f\in R\), then \(C(f)=\sum_{\gamma\in\Gamma}a_{\gamma}\gamma(f)\). Two \(\sigma^{*}\)-operators \(\sum_{\gamma\in\Gamma}a_{\gamma}\gamma\) and \(\sum_{\gamma\in\Gamma}b_{\gamma}\gamma\) are considered to be equal if and only if \(a_{\gamma}=b_{\gamma}\) for any \(\gamma\in\Gamma\). The set of all \(\sigma^{*}\)-operators over \(R\) will be denoted by \({\cal E}_{R}\). This set, which has a natural structure of an \(R\)-module generated by \(\Gamma\), becomes a ring if one sets \(\gamma a=\gamma(a)\gamma\) for any \(a\in R\), \(\gamma\in\Gamma\) and extends this rule to the multiplication of any two \(\sigma^{*}\)-operators by distributivity. The resulting ring \({\cal E}_{R}\) is called the ring of \(\sigma^{*}\)-operators over \(R\). Clearly, if \(I\) is a \(\sigma^{*}\)-ideal of \(R\), \(I=[f_{1},\ldots,f_{k}]^{*}\), then every element of \(I\) is of the form \(\sum_{i=1}^{k}C_{i}(f_{i})\) where \(C_{1},\ldots,C_{k}\in{\cal E}_{R}\).

If \(L\) is a difference (\(\sigma\)-) field and its subfield \(K\) is also a \(\sigma\)-subring of \(L\), then \(K\) is said to be a difference (or \(\sigma\)-) subfield of \(L\); \(L\), in turn, is called a difference (or \(\sigma\)-) field extension or a \(\sigma\)-overfield of \(K\). In this case we also say that we have a \(\sigma\)-field extension \(L/K\).
If the \(\sigma\)-field \(L\) is inversive and \(K\) is a \(\sigma\)-subfield of \(L\) such that \(\alpha(K)\subseteq K\) for any \(\alpha\in\sigma^{*}\), we say that \(K\) is an inversive difference (or \(\sigma^{*}\)-) subfield of \(L\) or that we have a \(\sigma^{*}\)-field extension \(L/K\). In the last case, if \(S\subseteq L\), then the smallest \(\sigma^{*}\)-subfield of \(L\) containing \(K\) and \(S\) is denoted by \(K\langle S\rangle^{*}\). \(S\) is said to be the set of \(\sigma^{*}\)_-generators_ of \(K\langle S\rangle^{*}\) over \(K\). If the set \(S\) is finite, \(S=\{\eta_{1},\ldots,\eta_{n}\}\), we say that \(L/K\) is a finitely generated inversive difference (or \(\sigma^{*}\)-) field extension. As a field, \(K\langle S\rangle^{*}=K(\gamma a\,|\,\gamma\in\Gamma,\,a\in S)\).

Let \(R\) and \(R^{\prime}\) be two difference rings with the same basic set \(\sigma\), so that elements of \(\sigma\) act on each of the rings as pairwise commuting endomorphisms. (More rigorously, we assume that there exist injective mappings of \(\sigma\) into the sets of endomorphisms of the rings \(R\) and \(R^{\prime}\) such that the images of any two elements of \(\sigma\) commute. For convenience we will denote these images by the same symbols.) A ring homomorphism \(\phi:R\longrightarrow R^{\prime}\) is called a _difference_ (or \(\sigma\)-) _homomorphism_ if \(\phi(\alpha a)=\alpha\phi(a)\) for any \(\alpha\in\sigma\), \(a\in R\). It is easy to see that the kernel of such a mapping is a reflexive difference ideal of \(R\).

In what follows we deal with inversive difference (\(\sigma^{*}\)-) rings and fields. If \(R\) is such a ring and \(Y=\{y_{1},\ldots,y_{n}\}\) is a finite set of symbols, we can consider the polynomial ring \(R[\Gamma Y]\), where \(\Gamma Y\) denotes the set of symbols \(\{\gamma y_{j}\,|\,\gamma\in\Gamma,1\leq j\leq n\}\), as an inversive difference ring containing \(R\) as its \(\sigma^{*}\)-subring. The corresponding inversive difference ring extension is defined by setting \(\alpha(\gamma y_{j})=(\alpha\gamma)y_{j}\) for any \(\alpha\in\sigma^{*}\), \(\gamma\in\Gamma\), \(1\leq j\leq n\); it is denoted by \(R\{y_{1},\ldots,y_{n}\}^{*}\) and called the ring of inversive difference (or \(\sigma^{*}\)-) polynomials in \(\sigma\)-indeterminates \(y_{1},\ldots,y_{n}\) over \(R\). A \(\sigma^{*}\)-ideal of \(R\{y_{1},\ldots,y_{n}\}^{*}\) is called _linear_ if it is generated (as a \(\sigma^{*}\)-ideal) by homogeneous linear \(\sigma^{*}\)-polynomials, that is, \(\sigma^{*}\)-polynomials of the form \(\sum_{i=1}^{d}a_{i}\gamma_{i}y_{k_{i}}\) (\(a_{i}\in R\), \(\gamma_{i}\in\Gamma\), \(1\leq k_{i}\leq n\) for \(i=1,\ldots,d\)). It is shown in [12, Proposition 2.4.9] that if \(R\) is a \(\sigma^{*}\)-field, then a linear \(\sigma^{*}\)-ideal of \(R\{y_{1},\ldots,y_{n}\}^{*}\) is prime.

If \(K\) is an inversive difference (\(\sigma^{*}\)-) field, \(f\in K\{y_{1},\ldots,y_{n}\}^{*}\) and \(\eta=(\eta_{1},\ldots,\eta_{n})\) is an \(n\)-dimensional vector with coordinates in some \(\sigma^{*}\)-overfield of \(K\), then \(f(\eta)\) (or \(f(\eta_{1},\ldots,\eta_{n})\)) denotes the result of the replacement of every entry \(\gamma y_{i}\) in \(f\) with \(\gamma\eta_{i}\) (\(\gamma\in\Gamma\), \(1\leq i\leq n\)).
If \(\pi:R=K\{y_{1},\ldots,y_{n}\}^{*}\to L=K\langle\eta_{1},\ldots,\eta_{n}\rangle^{*}\) is a natural \(\sigma\)-homomorphism (\(\pi(a)=a\) for any \(a\in K\) and \(y_{i}\mapsto\eta_{i}\)), then \(P=\operatorname{Ker}\pi\) is a prime \(\sigma^{*}\)-ideal of \(R\) called the _defining ideal_ of the extension \(L/K\). In this case, \(L\) is isomorphic to the \(\sigma\)-field \(\operatorname{qf}(R/P)\), the quotient field of \(R/P\) (\(\eta_{i}\leftrightarrow y_{i}+P\)).

Let \(K\) be a \(\sigma^{*}\)-field and \(\mathcal{U}\) a family of elements of some \(\sigma^{*}\)-overfield of \(K\). We say that the family \(\mathcal{U}\) is \(\sigma\)-_algebraically dependent_ over \(K\) if the family \(\Gamma\mathcal{U}=\{\gamma(u)\,\mid\,\gamma\in\Gamma,\,u\in\mathcal{U}\}\) is algebraically dependent over \(K\) (that is, there exist elements \(u_{1},\ldots,u_{k}\in\Gamma\mathcal{U}\) and a nonzero polynomial \(f\) in \(k\) variables with coefficients in \(K\) such that \(f(u_{1},\ldots,u_{k})=0\)). Otherwise, the family \(\mathcal{U}\) is said to be \(\sigma\)-_algebraically independent_ over \(K\).

If \(L\) is a \(\sigma^{*}\)-overfield of a \(\sigma^{*}\)-field \(K\), then a set \(B\subseteq L\) is said to be a \(\sigma\)-_transcendence basis_ of \(L\) over \(K\) if \(B\) is \(\sigma\)-algebraically independent over \(K\) and every element \(a\in L\) is \(\sigma\)-algebraic over \(K\langle B\rangle^{*}\) (it means that the set \(\{\gamma a\,\mid\,\gamma\in\Gamma\}\) is algebraically dependent over the field \(K\langle B\rangle^{*}\)). If \(L\) is a finitely generated \(\sigma^{*}\)-field extension of \(K\), then all \(\sigma\)-transcendence bases of \(L\) over \(K\) are finite and have the same number of elements (see [12, Proposition 4.1.6]). This number is called the \(\sigma\)-_transcendence degree_ of \(L\) over \(K\) (or the \(\sigma\)-transcendence degree of the extension \(L/K\)); it is denoted by \(\sigma\)-\(\operatorname{tr}.\deg_{K}L\).

The following theorem, whose proof can be found in [6, Section 6.4], introduces the (univariate) dimension polynomial of a finitely generated inversive difference field extension.

**Theorem 2.1**.: _Let \(K\) be an inversive difference field with a basic set \(\sigma=\{\alpha_{1},\ldots,\alpha_{m}\}\) and \(L=K\langle\eta_{1},\ldots,\eta_{n}\rangle^{*}\) be a \(\sigma^{*}\)-field extension of \(K\) generated by a finite set \(\eta=\{\eta_{1},\ldots,\eta_{n}\}\). Then there exists a polynomial \(\phi_{\eta|K}(t)\in\mathbb{Q}[t]\) such that_

(i) \(\phi_{\eta|K}(r)=\operatorname{tr}.\deg_{K}K(\{\gamma\eta_{j}|\gamma\in\Gamma(r),1\leq j\leq n\})\) _for all sufficiently large_ \(r\in\mathbb{N}\)_;_

(ii) \(\deg\phi_{\eta|K}\leq m\) _and_ \(\phi_{\eta|K}(t)\) _can be written as_ \(\phi_{\eta|K}(t)=\sum_{i=0}^{m}a_{i}\binom{t+i}{i}\) _where_ \(a_{0},\ldots,a_{m}\in\mathbb{Z}\) _and_ \(2^{m}|a_{m}\)_;_

(iii) \(d=\deg\phi_{\eta|K}\)_, \(a_{m}\) and \(a_{d}\) do not depend on the set of \(\sigma^{*}\)-generators \(\eta\) of \(L/K\) (\(a_{d}\neq a_{m}\) if and only if \(d<m\)). Moreover, \(\frac{a_{m}}{2^{m}}=\sigma\text{-}\text{\rm tr.}\deg_{K}L\)._

(iv) _If the elements \(\eta_{1},\ldots,\eta_{n}\) are \(\sigma\)-algebraically independent over \(K\), then_

\[\phi_{\eta|K}(t)=n\sum_{k=0}^{m}(-1)^{m-k}2^{k}\binom{m}{k}\binom{t+k}{k}\,.\]

The polynomial \(\phi_{\eta|K}(t)\) is called the \(\sigma^{*}\)_-dimension polynomial_ of the \(\sigma^{*}\)-field extension \(L/K\) associated with the system of \(\sigma^{*}\)-generators \(\eta\).
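The combinatorial count behind part (iv), namely that the number of \(\gamma\in\Gamma\) with \(\operatorname{ord}\gamma\leq r\) equals \(\sum_{k=0}^{m}(-1)^{m-k}2^{k}\binom{m}{k}\binom{r+k}{k}\), is easy to verify numerically. The following minimal Python sketch (illustrative, assuming \(\gamma\) is encoded by its exponent vector; it is not part of the paper) checks the formula against a brute-force enumeration for small \(m\) and \(r\).

```python
from itertools import product
from math import comb

def card_gamma_brute(m, r):
    """Count integer points (k_1,...,k_m) with |k_1| + ... + |k_m| <= r."""
    return sum(1 for k in product(range(-r, r + 1), repeat=m)
               if sum(abs(x) for x in k) <= r)

def card_gamma_formula(m, r):
    """Alternating-sum formula from Theorem 2.1(iv) (with n = 1)."""
    return sum((-1) ** (m - k) * 2 ** k * comb(m, k) * comb(r + k, k)
               for k in range(m + 1))

for m in (1, 2, 3):
    for r in range(6):
        assert card_gamma_brute(m, r) == card_gamma_formula(m, r)
print("Card Gamma(r) formula verified for small m and r")
```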
Methods and algorithms for computation of such polynomials can be found in [6].

**Dimension polynomials of subsets of \(\mathbb{Z}^{m}\).** In what follows we present some results about numerical polynomials associated with subsets of \(\mathbb{Z}^{m}\) (\(m\) is a positive integer). The proofs of the corresponding statements can be found in [5] and [6, Chapter 2].

**Definition 2.2**.: A polynomial in \(p\) variables \(f(t_{1},\ldots,t_{p})\in\mathbb{Q}[t_{1},\ldots,t_{p}]\) is called **numerical** if \(f(r_{1},\ldots,r_{p})\in\mathbb{Z}\) for all sufficiently large \((r_{1},\ldots,r_{p})\in\mathbb{N}^{p}\). (It means that there exist \(s_{1},\ldots,s_{p}\in\mathbb{N}\) such that the membership \(f(r_{1},\ldots,r_{p})\in\mathbb{Z}\) holds for all \((r_{1},\ldots,r_{p})\in\mathbb{N}^{p}\) with \(r_{1}\geq s_{1},\ldots,r_{p}\geq s_{p}\).)

It is clear that every polynomial with integer coefficients is numerical. As an example of a numerical polynomial in \(p\) variables with non-integer coefficients (\(p\in\mathbb{N}\), \(p\geq 1\)) one can consider the polynomial \(\prod_{i=1}^{p}\binom{t_{i}}{m_{i}}\) where \(m_{1},\ldots,m_{p}\in\mathbb{N}\). (As usual, \(\binom{t}{k}\) (\(k\in\mathbb{Z}\), \(k\geq 1\)) denotes the polynomial \(\frac{t(t-1)\ldots(t-k+1)}{k!}\) in one variable \(t\), \(\binom{t}{0}=1\), and \(\binom{t}{k}=0\) if \(k\) is a negative integer.)

The following theorem proved in [6, Chapter 2] gives the "canonical" representation of a numerical polynomial in several variables.

**Theorem 2.3**.: _Let \(f(t_{1},\ldots,t_{p})\) be a numerical polynomial in \(p\) variables \(t_{1},\ldots,t_{p}\), and let \(\deg_{t_{i}}f=m_{i}\) (\(1\leq i\leq p\)) where \(m_{1},\ldots,m_{p}\in\mathbb{N}\). Then the polynomial \(f(t_{1},\ldots,t_{p})\) can be represented in the form_

\[f(t_{1},\ldots,t_{p})=\sum_{i_{1}=0}^{m_{1}}\ldots\sum_{i_{p}=0}^{m_{p}}a_{i_{1}\ldots i_{p}}\binom{t_{1}+i_{1}}{i_{1}}\ldots\binom{t_{p}+i_{p}}{i_{p}} \tag{1}\]

_with integer coefficients \(a_{i_{1}\ldots i_{p}}\) (\(0\leq i_{k}\leq m_{k}\) for \(k=1,\ldots,p\)) that are uniquely defined by the numerical polynomial._

In what follows (until the end of the section), we deal with subsets of the set \(\mathbb{Z}^{m}\) (\(m\) is a positive integer). Furthermore, we fix a partition of the set \(\mathbb{N}_{m}=\{1,\ldots,m\}\) into \(p\) disjoint subsets (\(p\geq 1\)):

\[\mathbb{N}_{m}=\Delta_{1}\cup\Delta_{2}\cup\ldots\cup\Delta_{p} \tag{2}\]

where \(\Delta_{1}=\{1,\ldots,m_{1}\}\), \(\Delta_{2}=\{m_{1}+1,\ldots,m_{1}+m_{2}\},\ldots,\Delta_{p}=\{m_{1}+\cdots+m_{p-1}+1,\ldots,m\}\) (\(m_{i}=\operatorname{Card}\Delta_{i}\) for \(i=1,\ldots,p\); \(m_{1}+\cdots+m_{p}=m\)). If \(a=(a_{1},\ldots,a_{m})\in\mathbb{Z}^{m}\), we denote the numbers \(\sum_{i=1}^{m_{1}}|a_{i}|\), \(\sum_{i=m_{1}+1}^{m_{1}+m_{2}}|a_{i}|,\ldots\), \(\sum_{i=m_{1}+\cdots+m_{p-1}+1}^{m}|a_{i}|\) by \(\operatorname{ord}_{1}a,\ldots,\operatorname{ord}_{p}a\), respectively; \(\operatorname{ord}_{k}a\) (\(1\leq k\leq p\)) is called the _order of \(a\) with respect to \(\Delta_{k}\)_. Furthermore, we consider the set \(\mathbb{Z}^{m}\) as the union

\[\mathbb{Z}^{m}=\bigcup_{1\leq j\leq 2^{m}}\mathbb{Z}^{(m)}_{j} \tag{3}\]

where \(\mathbb{Z}^{(m)}_{1},\ldots,\mathbb{Z}^{(m)}_{2^{m}}\) are all distinct Cartesian products of \(m\) sets each of which is either \(\mathbb{N}\) or \(\mathbb{Z}_{\leq 0}\). We assume that \(\mathbb{Z}^{(m)}_{1}=\mathbb{N}^{m}\) and call \(\mathbb{Z}^{(m)}_{j}\) the _\(j\)th orthant_ of \(\mathbb{Z}^{m}\) (\(1\leq j\leq 2^{m}\)).
The set \(\mathbb{Z}^{m}\) will be considered as a partially ordered set with the order \(\trianglelefteq\) such that \((e_{1},\ldots,e_{m})\trianglelefteq(e_{1}^{\prime},\ldots,e_{m}^{\prime})\) if and only if \((e_{1},\ldots,e_{m})\) and \((e_{1}^{\prime},\ldots,e_{m}^{\prime})\) lie in the same orthant \(\mathbb{Z}^{(m)}_{k}\) and \((|e_{1}|,\ldots,|e_{m}|)\leq_{P}(|e_{1}^{\prime}|,\ldots,|e_{m}^{\prime}|)\). In what follows, for any set \(A\subseteq\mathbb{Z}^{m}\), \(W_{A}\) will denote the set of all elements of \(\mathbb{Z}^{m}\) that do not exceed any element of \(A\) with respect to the order \(\trianglelefteq\). Furthermore, for any \(r_{1},\ldots,r_{p}\in\mathbb{N}\), \(A(r_{1},\ldots,r_{p})\) will denote the set of all elements \(x=(x_{1},\ldots,x_{m})\in A\) such that \(\operatorname{ord}_{i}x\leq r_{i}\) (\(i=1,\ldots,p\)).

The above notation can be naturally applied to subsets of \(\mathbb{N}^{m}\) (treated as a subset of \(\mathbb{Z}^{m}\)). If \(E\subseteq\mathbb{N}^{m}\) and \(s_{1},\ldots,s_{p}\in\mathbb{N}\), then \(E(s_{1},\ldots,s_{p})\) will denote the set of all \(m\)-tuples \(e\in E\) such that \(\operatorname{ord}_{i}e\leq s_{i}\) for \(i=1,\ldots,p\). Furthermore, we shall associate with a set \(E\subseteq\mathbb{N}^{m}\) a set \(V_{E}=\{v\in\mathbb{N}^{m}\,|\,v\) is not greater than or equal to any \(m\)-tuple in \(E\) with respect to \(\leq_{P}\}\). (Thus, \(v=(v_{1},\ldots,v_{m})\in V_{E}\) if and only if for any element \((e_{1},\ldots,e_{m})\in E\), there exists \(i\in\{1,\ldots,m\}\) such that \(e_{i}>v_{i}\).)

The following two theorems proved in [6, Chapter 2] generalize the well-known result of E. Kolchin on the (univariate) numerical polynomials of subsets of \(\mathbb{N}^{m}\) (see [4, Chapter 0, Lemma 16]) and give explicit formulas for multivariate numerical polynomials associated with finite subsets of \(\mathbb{N}^{m}\) and \(\mathbb{Z}^{m}\).

**Theorem 2.4**.: _Let \(E\subseteq\mathbb{N}^{m}\) and let partition (2) of \(\mathbb{N}_{m}\) be fixed. Then there exists a numerical polynomial \(\omega_{E}(t_{1},\ldots,t_{p})\) such that_

(i) \(\omega_{E}(r_{1},\ldots,r_{p})=\operatorname{Card}V_{E}(r_{1},\ldots,r_{p})\) _for all sufficiently large \((r_{1},\ldots,r_{p})\in\mathbb{N}^{p}\)._

(ii) _The total degree \(\deg\omega_{E}\) of the polynomial \(\omega_{E}\) does not exceed \(m\) and \(\deg_{t_{i}}\omega_{E}\leq m_{i}\) (\(1\leq i\leq p\))._

(iii) \(\deg\omega_{E}=m\) _if and only if \(E=\emptyset\); in this case \(\omega_{E}(t_{1},\ldots,t_{p})=\prod_{i=1}^{p}\binom{t_{i}+m_{i}}{m_{i}}\)._

**Definition 2.5**.: The polynomial \(\omega_{E}(t_{1},\ldots,t_{p})\) is called the dimension polynomial of the set \(E\subseteq\mathbb{N}^{m}\) associated with partition (2) of \(\mathbb{N}_{m}\).

**Theorem 2.6**.: _Let \(E=\{e_{1},\ldots,e_{q}\}\) (\(q\geq 1\)) be a finite subset of \(\mathbb{N}^{m}\) and let partition (2) of \(\mathbb{N}_{m}\) be fixed. Let \(e_{i}=(e_{i1},\ldots,e_{im})\) (\(1\leq i\leq q\)) and for any \(l\in\mathbb{N}\), \(0\leq l\leq q\), let \(\Theta(l,q)\) denote the set of all \(l\)-element subsets of \(\mathbb{N}_{q}=\{1,\ldots,q\}\). For \(\theta=\emptyset\), let \(\bar{e}_{\theta j}=0\) (\(1\leq j\leq m\)), and for any \(\theta\in\Theta(l,q)\), \(\theta\neq\emptyset\), let \(\bar{e}_{\theta j}=\max\{e_{ij}\,|\,i\in\theta\}\) (\(1\leq j\leq m\)). Furthermore, let \(b_{\theta k}=\sum_{h\in\Delta_{k}}\bar{e}_{\theta h}\) (\(k=1,\ldots,p\))._
_Then_

\[\omega_{E}(t_{1},\ldots,t_{p})=\sum_{l=0}^{q}(-1)^{l}\sum_{\theta\in\Theta(l,q)}\prod_{j=1}^{p}\binom{t_{j}+m_{j}-b_{\theta j}}{m_{j}}. \tag{4}\]

_Remark 2.7_.: Clearly, if \(E\subseteq\mathbb{N}^{m}\) and \(E^{*}\) is the set of all minimal elements of \(E\) with respect to the product order, then the set \(E^{*}\) is finite and \(\omega_{E}(t_{1},\ldots,t_{p})=\omega_{E^{*}}(t_{1},\ldots,t_{p})\). Thus, the last theorem gives an algorithm that allows one to find the dimension polynomial of any subset of \(\mathbb{N}^{m}\) (with a given partition (2) of \(\mathbb{N}_{m}\)): one should first find the set of all minimal points of the subset and then apply Theorem 2.6.

The following theorem proved in [6, Section 2.5] provides analogs of the results of Theorems 2.4 - 2.6 for subsets of \(\mathbb{Z}^{m}\).

**Theorem 2.8**.: _Let \(A\subseteq\mathbb{Z}^{m}\) and let partition (2) of the set \(\mathbb{N}_{m}\) be fixed. Then there exists a numerical polynomial in \(p\) variables \(\phi_{A}(t_{1},\ldots,t_{p})\) such that_

(i) _\(\phi_{A}(r_{1},\ldots,r_{p})=\operatorname{Card}W_{A}(r_{1},\ldots,r_{p})\) for all sufficiently large \(p\)-tuples \((r_{1},\ldots,r_{p})\in\mathbb{N}^{p}\)._

(ii) _\(\deg\phi_{A}\leq m\) and \(\deg_{t_{i}}\phi_{A}\leq m_{i}\) for \(i=1,\ldots,p\). Furthermore, if the polynomial \(\phi_{A}(t_{1},\ldots,t_{p})\) is written in the form (1), then \(2^{m}|a_{m_{1}\ldots m_{p}}\)._

(iii) _Let us consider the mapping \(\rho:\mathbb{Z}^{m}\longrightarrow\mathbb{N}^{2m}\) such that_

\[\rho(e_{1},\ldots,e_{m})=(\max\{e_{1},0\},\ldots,\max\{e_{m},0\},\max\{-e_{1},0\},\ldots,\max\{-e_{m},0\}).\]

_Let \(B=\rho(A)\bigcup\{\bar{e}_{1},\ldots,\bar{e}_{m}\}\) where \(\bar{e}_{i}\) (\(1\leq i\leq m\)) is a \(2m\)-tuple in \(\mathbb{N}^{2m}\) whose \(i\)th and \((m+i)\)th coordinates are equal to 1 and all other coordinates are equal to 0. Then_

\[\phi_{A}(t_{1},\ldots,t_{p})=\omega_{B}(t_{1},\ldots,t_{p})\]

_where \(\omega_{B}(t_{1},\ldots,t_{p})\) is the dimension polynomial of the set \(B\) (see Definition 2.5) associated with the following partition of the set \(\mathbb{N}_{2m}\): \(\mathbb{N}_{2m}=\Delta^{\prime}_{1}\cup\Delta^{\prime}_{2}\cup\ldots\cup\Delta^{\prime}_{p}\) where \(\Delta^{\prime}_{i}=\Delta_{i}\cup\{m+k\,|\,k\in\Delta_{i}\}\) for \(i=1,\ldots,p\) (see partition (2))._

(iv) _If \(A=\emptyset\), then_

\[\phi_{A}(t_{1},\ldots,t_{p})=\prod_{j=1}^{p}\left[\sum_{i=0}^{m_{j}}(-1)^{m_{j}-i}2^{i}\binom{m_{j}}{i}\binom{t_{j}+i}{i}\right]. \tag{5}\]

**Definition 2.9**.: The polynomial \(\phi_{A}(t_{1},\ldots,t_{p})\) is called the **dimension polynomial** of the set \(A\subseteq\mathbb{Z}^{m}\) associated with partition (2) of \(\mathbb{N}_{m}\).

_Remark 2.10_.: The equality (5) (as well as the last part of Theorem 2.1) expresses the fact that the number of solutions \((x_{1},\ldots,x_{m})\in\mathbb{Z}^{m}\) of the inequality \(|x_{1}|+\cdots+|x_{m}|\leq r\) (\(r\in\mathbb{N}\)) is \(\sum_{k=0}^{m}(-1)^{m-k}2^{k}\binom{m}{k}\binom{r+k}{k}\) (see [6, Proposition 2.1.9]).
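Before this remark is continued below, we note that formula (4) of Theorem 2.6 admits a direct computational check. The following minimal Python sketch (illustrative; the encoding of \(E\) as a list of tuples and the zero-based index blocks are of our choosing, not the paper's) evaluates the inclusion-exclusion sum in exact-counting form (a factor is replaced by \(0\) whenever \(t_{j}-b_{\theta j}<0\), which agrees with the polynomial for all sufficiently large arguments) and compares it with a brute-force count of \(\operatorname{Card}V_{E}(r_{1},\ldots,r_{p})\).

```python
from itertools import combinations, product
from math import comb

def omega(E, deltas, r):
    """Card V_E(r_1,...,r_p) via the inclusion-exclusion sum of Theorem 2.6.

    E: list of q points of N^m; deltas: the blocks Delta_1,...,Delta_p as lists
    of zero-based indices; r: the bounds (r_1,...,r_p)."""
    m = sum(len(d) for d in deltas)
    total = 0
    for l in range(len(E) + 1):
        for theta in combinations(range(len(E)), l):
            ebar = [max((E[i][j] for i in theta), default=0) for j in range(m)]
            term = (-1) ** l
            for dk, rk in zip(deltas, r):
                s = rk - sum(ebar[h] for h in dk)          # t_k - b_{theta k}
                term *= comb(s + len(dk), len(dk)) if s >= 0 else 0
            total += term
    return total

def omega_brute(E, deltas, r):
    """Direct count of points of V_E with ord_i <= r_i (small inputs only)."""
    m = sum(len(d) for d in deltas)
    count = 0
    for v in product(range(max(r) + 1), repeat=m):
        if any(sum(v[h] for h in d) > rk for d, rk in zip(deltas, r)):
            continue                                       # violates ord_i <= r_i
        if any(all(v[j] >= e[j] for j in range(m)) for e in E):
            continue                                       # v >=_P some point of E
        count += 1
    return count

E = [(2, 0, 1), (0, 3, 0)]
deltas = [[0, 1], [2]]                                     # m_1 = 2, m_2 = 1
for r in [(3, 2), (5, 4), (7, 3)]:
    assert omega(E, deltas, r) == omega_brute(E, deltas, r)
print("Theorem 2.6 count verified on a toy example")
```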
It follows that if \(r_{1},\ldots,r_{p},s_{1},\ldots,s_{p}\in\mathbb{N}\), \(s_{i}<r_{i}\) (\(1\leq i\leq p\)), and \(B=\{b=(b_{1},\ldots,b_{m})\in\mathbb{Z}^{m}\,|\,s_{i}\leq\sum_{\nu\in\Delta_{i}}|b_{\nu}|\leq r_{i}\ \text{for}\ i=1,\ldots,p\}\) (with the fixed partition (2) of \(\mathbb{N}_{m}\)), then

\[\operatorname{Card}B=\prod_{i=1}^{p}\left[\sum_{j=0}^{m_{i}}(-1)^{m_{i}-j}2^{j}\binom{m_{i}}{j}\left(\binom{r_{i}+j}{j}-\binom{s_{i}+j-1}{j}\right)\right].\]

We will use this observation in the proof of Theorem 4.1.

## 3 \(E\)-reduction of inversive difference polynomials. \(E\)-characteristic sets

Let \(K\) be an inversive difference field with a basic set \(\sigma=\{\alpha_{1},\ldots,\alpha_{m}\}\). Let us fix a partition of the set \(\sigma\), that is, its representation as a union of \(p\) disjoint subsets (\(p\geq 1\)):

\[\sigma=\sigma_{1}\bigcup\cdots\bigcup\sigma_{p} \tag{6}\]

\[\text{where}\ \sigma_{1}=\{\alpha_{1},\ldots,\alpha_{m_{1}}\},\,\sigma_{2}=\{\alpha_{m_{1}+1},\ldots,\alpha_{m_{1}+m_{2}}\},\,\ldots,\]

\[\sigma_{p}=\{\alpha_{m_{1}+\cdots+m_{p-1}+1},\ldots,\alpha_{m}\}\ \ (m_{1}+\cdots+m_{p}=m).\]

If \(\gamma=\alpha_{1}^{k_{1}}\ldots\alpha_{m}^{k_{m}}\in\Gamma\) (\(k_{i}\in\mathbb{Z}\)), then the order of \(\gamma\) with respect to \(\sigma_{i}\) (\(1\leq i\leq p\)) is defined as \(\sum_{\nu=m_{1}+\cdots+m_{i-1}+1}^{m_{1}+\cdots+m_{i}}|k_{\nu}|\); it is denoted by \(\operatorname{ord}_{i}\gamma\). If \(i=1\), the last sum is replaced by \(\sum_{\nu=1}^{m_{1}}|k_{\nu}|\). Furthermore, for any \(r_{1},\ldots,r_{p}\in\mathbb{N}\), we set \(\Gamma(r_{1},\ldots,r_{p})=\{\gamma\in\Gamma\,|\,\operatorname{ord}_{i}\gamma\leq r_{i}\ (i=1,\ldots,p)\}\).

Let us consider \(p\) total orderings \(<_{1},\ldots,<_{p}\) of the group \(\Gamma\) such that \(\gamma=\alpha_{1}^{k_{1}}\ldots\alpha_{m}^{k_{m}}<_{i}\gamma^{\prime}=\alpha_{1}^{k_{1}^{\prime}}\ldots\alpha_{m}^{k_{m}^{\prime}}\) (\(1\leq i\leq p\)) if and only if the \((2m+p)\)-tuple

\[(\operatorname{ord}_{i}\gamma,\operatorname{ord}_{1}\gamma,\ldots,\operatorname{ord}_{i-1}\gamma,\operatorname{ord}_{i+1}\gamma,\ldots,\operatorname{ord}_{p}\gamma,|k_{m_{1}+\cdots+m_{i-1}+1}|,\ldots,\\ |k_{m_{1}+\cdots+m_{i}}|,k_{m_{1}+\cdots+m_{i-1}+1},\ldots,k_{m_{1}+\cdots+m_{i}},|k_{1}|,\ldots,|k_{m_{1}+\cdots+m_{i-1}}|,\\ |k_{m_{1}+\cdots+m_{i}+1}|,\ldots,|k_{m}|,k_{1},\ldots,k_{m_{1}+\cdots+m_{i-1}},k_{m_{1}+\cdots+m_{i}+1},\ldots,k_{m})\]

is less than the corresponding \((2m+p)\)-tuple for \(\gamma^{\prime}\) with respect to the lexicographic order on \(\mathbb{Z}^{2m+p}\).

Two elements \(\gamma_{1}=\alpha_{1}^{k_{1}}\ldots\alpha_{m}^{k_{m}}\) and \(\gamma_{2}=\alpha_{1}^{l_{1}}\ldots\alpha_{m}^{l_{m}}\) in \(\Gamma\) are called _similar_ if the \(m\)-tuples \((k_{1},\ldots,k_{m})\) and \((l_{1},\ldots,l_{m})\) belong to the same orthant of \(\mathbb{Z}^{m}\) (see (3)). In this case we write \(\gamma_{1}\sim\gamma_{2}\). We say that \(\gamma_{1}\) _divides_ \(\gamma_{2}\) (or \(\gamma_{2}\) is a _multiple_ of \(\gamma_{1}\)) and write \(\gamma_{1}|\gamma_{2}\) if \(\gamma_{1}\sim\gamma_{2}\) and there exists \(\gamma\in\Gamma\) such that \(\gamma\sim\gamma_{1}\) and \(\gamma_{2}=\gamma\gamma_{1}\).

Let \(R=K\{y_{1},\ldots,y_{n}\}^{*}\) be the algebra of \(\sigma^{*}\)-polynomials in \(\sigma^{*}\)-indeterminates \(y_{1},\ldots,y_{n}\) over \(K\). Then \(R\) can be viewed as a polynomial ring in the set of indeterminates \(\Gamma Y=\{\gamma y_{i}\,|\,\gamma\in\Gamma,1\leq i\leq n\}\) whose elements are called _terms_.
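To make these definitions explicit, here is a minimal Python sketch (assuming \(\gamma=\alpha_{1}^{k_{1}}\ldots\alpha_{m}^{k_{m}}\) is encoded by its exponent vector \((k_{1},\ldots,k_{m})\) and the partition by lists of zero-based indices; the names are of our choosing, not the paper's) of the comparison key behind \(<_{i}\), of similarity, and of divisibility in \(\Gamma\).

```python
def key(gamma, deltas, i):
    """The (2m+p)-tuple whose lexicographic comparison realizes <_i."""
    ords = [sum(abs(gamma[h]) for h in d) for d in deltas]   # ord_1,...,ord_p
    own = [gamma[h] for h in deltas[i]]                      # sigma_i exponents
    rest = [gamma[h] for j, d in enumerate(deltas) if j != i for h in d]
    return tuple([ords[i]] + ords[:i] + ords[i + 1:]
                 + [abs(k) for k in own] + own
                 + [abs(k) for k in rest] + rest)

def similar(g1, g2):
    """g1 ~ g2: the exponent vectors lie in a common orthant of Z^m."""
    return all(a * b >= 0 for a, b in zip(g1, g2))

def divides(g1, g2):
    """g1 | g2: similar, and the quotient stays in the same orthant as g1."""
    return similar(g1, g2) and all(abs(a) <= abs(b) for a, b in zip(g1, g2))

deltas = [[0], [1]]                     # sigma_1 = {alpha_1}, sigma_2 = {alpha_2}
gammas = [(3, -2), (0, 3), (0, 1), (1, -1)]
print(sorted(gammas, key=lambda g: key(g, deltas, 0)))  # ascending w.r.t. <_1
print(divides((1, -1), (3, -2)))                        # True: same orthant
```

Sorting by this key realizes \(<_{i}\) exactly because the ordering above is defined as lexicographic comparison of the \((2m+p)\)-tuples.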
For every \(j=1,\ldots,p\), we define the order of a term \(u=\gamma y_{i}\) with respect to \(\sigma_{j}\) (denoted by \(\operatorname{ord}_{j}u\)) as the corresponding order of \(\gamma\). Furthermore, considering representation (3) of \(\mathbb{Z}^{m}\) as the union of \(2^{m}\) orthants \(\mathbb{Z}_{j}^{m}\), we set \(\Gamma_{j}=\{\alpha_{1}^{k_{1}}\ldots\alpha_{m}^{k_{m}}\in\Gamma\,|\,(k_{1},\ldots,k_{m})\in\mathbb{Z}_{j}^{m}\}\) and \(\Gamma_{j}Y=\{\gamma y_{i}\,|\,\gamma\in\Gamma_{j},1\leq i\leq n\}\).

Two terms \(u=\gamma y_{i}\) and \(v=\gamma^{\prime}y_{j}\) are called _similar_ if \(\gamma\) and \(\gamma^{\prime}\) are similar; in this case we write \(u\sim v\). If \(u=\gamma y_{i}\) is a term and \(\gamma^{\prime}\in\Gamma\), we say that \(u\) is similar to \(\gamma^{\prime}\) and write \(u\sim\gamma^{\prime}\) if \(\gamma\sim\gamma^{\prime}\). Clearly, if \(u\in\Gamma Y\), \(\gamma\in\Gamma\) and \(\gamma\sim u\), then \(\operatorname{ord}_{j}(\gamma u)=\operatorname{ord}_{j}\gamma+\operatorname{ord}_{j}u\) for \(j=1,\ldots,p\). Furthermore, if \(u,v\in\Gamma Y\), we say that \(u\) _divides_ \(v\) (or \(v\) is a _transform_ or a _multiple_ of \(u\)) and write \(u\,|\,v\) if \(u=\gamma^{\prime}y_{i}\), \(v=\gamma^{\prime\prime}y_{i}\) for some \(y_{i}\) and \(\gamma^{\prime}|\gamma^{\prime\prime}\). (If \(\gamma^{\prime\prime}=\gamma\gamma^{\prime}\) for some \(\gamma\in\Gamma\), \(\gamma\sim\gamma^{\prime}\), we write \(\frac{v}{u}\) for \(\gamma\).)

We consider \(p\) orders \(<_{1},\ldots,<_{p}\) on the set \(\Gamma Y\) that correspond to the orders on the group \(\Gamma\) (we use the same symbols for the orders on \(\Gamma\) and \(\Gamma Y\)). These orders are defined as follows: \(\gamma y_{j}<_{i}\gamma^{\prime}y_{k}\) if and only if \(\gamma<_{i}\gamma^{\prime}\) in \(\Gamma\) or \(\gamma=\gamma^{\prime}\) and \(j<k\) (\(1\leq i\leq p\), \(1\leq j,k\leq n\)).

**Definition 3.1**.: Let \(f\in K\{y_{1},\ldots,y_{n}\}^{*}\setminus K\) and \(1\leq k\leq p\). Then the greatest with respect to \(<_{k}\) term that appears in \(f\) is called the \(k\)**-leader** of the \(\sigma^{*}\)-polynomial \(f\); it is denoted by \(u_{f}^{(k)}\). The smallest with respect to \(<_{k}\) term in \(f\) is called the \(k\)**-coleader** of \(f\) and is denoted by \(v_{f}^{(k)}\).

**Definition 3.2**.: Let \(f\in K\{y_{1},\ldots,y_{n}\}^{*}\setminus K\) and let \(u_{f}^{(k)}=\alpha_{1}^{k_{1}}\ldots\alpha_{m}^{k_{m}}y_{i}\) and \(v_{f}^{(k)}=\alpha_{1}^{l_{1}}\ldots\alpha_{m}^{l_{m}}y_{j}\) be the \(k\)-leader and \(k\)-coleader of \(f\), respectively (\(1\leq k\leq p\)). Then for every \(k=1,\ldots,p\), the nonnegative integer \(\operatorname{ord}_{k}u_{f}^{(k)}-\operatorname{ord}_{k}v_{f}^{(k)}\) is called the \(k\)**th effective order** of \(f\); it is denoted by \(\operatorname{Eord}_{k}f\).

**Definition 3.3**.: Let \(f\) and \(g\) be two \(\sigma^{*}\)-polynomials in the ring \(K\{y_{1},\ldots,y_{n}\}^{*}\).
We say that \(f\) _has lower rank than_ \(g\) and write \(\operatorname{rk}\,f<\operatorname{rk}\,g\) if either \(f\in K\), \(g\notin K\), or

\[(u_{f}^{(1)},\deg_{u_{f}^{(1)}}f,\operatorname{ord}_{2}u_{f}^{(2)},\ldots,\operatorname{ord}_{p}u_{f}^{(p)},\operatorname{Eord}_{1}f,\ldots,\operatorname{Eord}_{p}f)<_{\operatorname{lex}}\]

\[(u_{g}^{(1)},\deg_{u_{g}^{(1)}}g,\operatorname{ord}_{2}u_{g}^{(2)},\ldots,\operatorname{ord}_{p}u_{g}^{(p)},\operatorname{Eord}_{1}g,\ldots,\operatorname{Eord}_{p}g) \tag{7}\]

(the comparison of \(u_{f}^{(1)}\) and \(u_{g}^{(1)}\) in this lexicographic order is made with respect to the order \(<_{1}\) on the set of terms \(\Gamma Y\)). If the last two \((2p+1)\)-tuples are equal (or \(f,g\in K\)), we say that \(f\) and \(g\) are of the same rank and write \(\operatorname{rk}f=\operatorname{rk}g\).

**Definition 3.4**.: Let \(f,g\in K\{y_{1},\ldots,y_{n}\}^{*}\) and let \(d=\deg_{u_{g}^{(1)}}g\). We say that \(f\) is \(E\)**-reduced** with respect to \(g\) if one of the following two conditions holds:

(i) \(f\) does not contain any \((\gamma u_{g}^{(1)})^{e}\) (\(\gamma\in\Gamma\)) such that \(\gamma\sim u_{g}^{(1)}\) and \(e\geq d\);

(ii) \(f\) contains \((\gamma u_{g}^{(1)})^{e}\) with some \(\gamma\in\Gamma\), \(\gamma\sim u_{g}^{(1)}\) and \(e\geq d\), but in this case either there exists \(k\in\mathbb{N}_{p}\), \(k\geq 2\), such that \(\operatorname{ord}_{k}u_{\gamma g}^{(k)}>\operatorname{ord}_{k}(u_{f}^{(k)})\) or there exists \(j\in\mathbb{N}_{p}\) such that \(\operatorname{ord}_{j}v_{\gamma g}^{(j)}<\operatorname{ord}_{j}(v_{f}^{(j)})\). (The "or" here is inclusive, that is, the case when both conditions hold is included.)

Thus, \(f\) is not \(E\)-reduced with respect to \(g\) if \(f\) contains some \((\gamma u_{g}^{(1)})^{e}\) such that \(\gamma\in\Gamma\), \(\gamma\sim u_{g}^{(1)}\), \(e\geq d=\deg_{u_{g}^{(1)}}g\), \(\operatorname{ord}_{k}u_{\gamma g}^{(k)}\leq\operatorname{ord}_{k}(u_{f}^{(k)})\) for \(k=2,\ldots,p\), and \(\operatorname{ord}_{j}v_{\gamma g}^{(j)}\geq\operatorname{ord}_{j}(v_{f}^{(j)})\) for \(j=1,\ldots,p\).

_Remark 3.5_.: If \(f,g\in K\{y_{1},\ldots,y_{n}\}^{*}\), then \(f\) is reduced with respect to \(g\) in the sense of [6, Definition 3.4.22] with respect to the term ordering \(<_{1}\) if condition (i) of the last definition holds. Clearly, in this case \(f\) is \(E\)-reduced with respect to \(g\).

_Remark 3.6_.: It follows from [29, Lemma 3.3] that for all \(f\in R=K\{y_{1},\ldots,y_{n}\}^{*}\), \(j\in\{1,\ldots,2^{m}\}\) and \(k\in\{1,\ldots,p\}\), there exist terms \(u_{fjk}\) and \(v_{fjk}\) in \(f\) such that for all elements \(\gamma=\alpha_{1}^{k_{1}}\ldots\alpha_{m}^{k_{m}}\in\Gamma_{j}\) with sufficiently large \((|k_{1}|,\ldots,|k_{m}|)\in\mathbb{N}^{m}\) (in the sense of Definition 2.2), one has \(u_{\gamma f}^{(k)}=\gamma u_{fjk}\) and \(v_{\gamma f}^{(k)}=\gamma v_{fjk}\). For example, let \(\sigma=\{\alpha_{1},\alpha_{2},\alpha_{3}\}\) be considered with the partition \(\sigma=\sigma_{1}\cup\sigma_{2}\cup\sigma_{3}\) where \(\sigma_{i}=\{\alpha_{i}\}\) (\(i=1,2,3\)), let \(f=\alpha_{1}^{2}\alpha_{2}^{-1}\alpha_{3}^{-3}y+\alpha_{1}^{-3}\alpha_{2}\alpha_{3}^{-4}y+\alpha_{1}\alpha_{2}^{-2}\alpha_{3}^{2}y+\alpha_{1}^{2}\alpha_{2}^{2}\alpha_{3}y\in K\{y\}^{*}\), and let \(\mathbb{Z}_{j}^{(3)}=\{(k_{1},k_{2},k_{3})\,|\,k_{1}\leq 0,k_{2}\geq 0,k_{3}\leq 0\}\).
Then for any \(\gamma=\alpha_{1}^{-r}\alpha_{2}^{s}\alpha_{3}^{-t}\in\Gamma_{j}\) (\(r,s,t\geq 0\)), we have \(\gamma f=\alpha_{1}^{-r+2}\alpha_{2}^{s-1}\alpha_{3}^{-t-3}y+\alpha_{1}^{-r-3}\alpha_{2}^{s+1}\alpha_{3}^{-t-4}y+\alpha_{1}^{-r+1}\alpha_{2}^{s-2}\alpha_{3}^{-t+2}y+\alpha_{1}^{-r+2}\alpha_{2}^{s+2}\alpha_{3}^{-t+1}y\); hence, for all sufficiently large \(r,s,t\), \(u_{fj1}=u_{fj3}=\alpha_{1}^{-3}\alpha_{2}\alpha_{3}^{-4}y\), \(u_{fj2}=\alpha_{1}^{2}\alpha_{2}^{2}\alpha_{3}y\), \(v_{fj1}=\alpha_{1}^{2}\alpha_{2}^{-1}\alpha_{3}^{-3}y\), and \(v_{fj2}=v_{fj3}=\alpha_{1}\alpha_{2}^{-2}\alpha_{3}^{2}y\). Therefore, if \(f\in R\) and \(u_{f}^{(1)}=\gamma_{1}y_{k}\) where \(\gamma_{1}\in\Gamma_{j}\) (\(1\leq j\leq 2^{m}\)), then there exist \(a_{if},b_{kf}\in\mathbb{Z}\) (\(2\leq i\leq p\), \(1\leq k\leq p\)) such that for any \(\gamma\in\Gamma_{j}\) with sufficiently large exponents, \(\operatorname{ord}_{i}u_{\gamma f}^{(i)}=\operatorname{ord}_{i}\gamma+a_{if}\) and \(\operatorname{ord}_{k}v_{\gamma f}^{(k)}=\operatorname{ord}_{k}\gamma+b_{kf}\).

**Proposition 3.7**.: _If \(f,g\in K\{y_{1},\ldots,y_{n}\}^{*}\) and \(\operatorname{rk}f<\operatorname{rk}g\), then \(f\) is \(E\)-reduced with respect to \(g\)._

Proof.: Suppose that \(f\) is not \(E\)-reduced with respect to \(g\). If \(f\) contains some \((\gamma u_{g}^{(1)})^{e}\) such that \(\gamma\in\Gamma\), \(\gamma\sim u_{g}^{(1)}\), and \(e\geq d=\deg_{u_{g}^{(1)}}g\), then \(\gamma=1\) (if \(\gamma\neq 1\), then \(u_{g}^{(1)}<_{1}\gamma u_{g}^{(1)}\leq_{1}u_{f}^{(1)}\), which contradicts condition (7) for \(\operatorname{rk}f<\operatorname{rk}g\)). Now the fact that \(f\) is not \(E\)-reduced with respect to \(g\) implies that \(\operatorname{ord}_{k}u_{g}^{(k)}\leq\operatorname{ord}_{k}u_{f}^{(k)}\) for \(k=2,\ldots,p\) and \(\operatorname{ord}_{k}v_{g}^{(k)}\geq\operatorname{ord}_{k}v_{f}^{(k)}\) for \(k=1,\ldots,p\). It follows that \(\operatorname{Eord}_{k}g\leq\operatorname{Eord}_{k}f\) (\(1\leq k\leq p\)), so we have arrived at a contradiction with the inequality \(\operatorname{rk}f<\operatorname{rk}g\). Therefore, \(f\) is \(E\)-reduced with respect to \(g\).

**Proposition 3.8**.: _Let \(\mathcal{A}=\{g_{1},\ldots,g_{t}\}\) be a finite set of \(\sigma^{*}\)-polynomials in the ring \(R=K\{y_{1},\ldots,y_{n}\}^{*}\), and let \(u_{k}^{(i)}\) and \(v_{k}^{(i)}\) denote the \(i\)-leader and \(i\)-coleader of \(g_{k}\), respectively (\(1\leq k\leq t\), \(1\leq i\leq p\)). Let \(d_{k}=\deg_{u_{k}^{(1)}}g_{k}\) and let \(I_{k}\) denote the coefficient of \((u_{k}^{(1)})^{d_{k}}\) when \(g_{k}\) is written as a polynomial in \(u_{k}^{(1)}\) (\(1\leq k\leq t\)). Furthermore, let \(I(\mathcal{A})=\{f\in R\,|\,\text{either }f=1\text{ or }f\text{ is a product of finitely many }\sigma^{*}\text{-polynomials of the form }\gamma(I_{k})\ (\gamma\in\Gamma,\ k=1,\ldots,t)\}\). Then for any \(h\in R\), there exist \(J\in I(\mathcal{A})\) and \(\overline{h}\in R\) such that \(\overline{h}\) is \(E\)-reduced with respect to \(\mathcal{A}\) and \(Jh\equiv\overline{h}\,(\mathrm{mod}\,[\mathcal{A}]^{*})\) (that is, \(Jh-\overline{h}\in[\mathcal{A}]^{*}\))._

Proof.: If \(h\) is \(E\)-reduced with respect to \(\mathcal{A}\), the statement is obvious (one can set \(\overline{h}=h\)). Suppose that \(h\) is not \(E\)-reduced with respect to \(\mathcal{A}\).
In what follows, if a \(\sigma\)-polynomial \(f\in R\) is not \(E\)-reduced with respect to \(\mathcal{A}\), then a term \(w_{f}\) that appears in \(f\) will be called the \(\mathcal{A}\)_-leader_ of \(f\) if \(w_{f}\) is the greatest (with respect to \(<_{1}\)) term among all terms of the form \(\gamma u_{g_{k}}^{(1)}\) with \(\gamma\in\Gamma\), \(\gamma\sim u_{g_{k}}^{(1)}\) (\(1\leq k\leq t\)) such that \(f\) contains \((\gamma u_{k}^{(1)})^{e}\) with \(e\geq d_{k}\), \(\operatorname{ord}_{i}u_{\gamma g_{k}}^{(i)}\leq\operatorname{ord}_{i}u_{f}^{(i)}\) for \(i=2,\ldots,p\), and \(\operatorname{ord}_{j}v_{\gamma g_{k}}^{(j)}\geq\operatorname{ord}_{j}v_{f}^{(j)}\) for \(j=1,\ldots,p\).

Let \(w_{h}\) be the \(\mathcal{A}\)-leader of the element \(h\), \(d=\deg_{w_{h}}h\), and \(c_{h}\) the coefficient of \(w_{h}^{d}\) when \(h\) is written as a polynomial in \(w_{h}\). Then \(w_{h}=\gamma u_{k}^{(1)}\) for some \(k\in\{1,\ldots,t\}\) and \(\gamma\in\Gamma\) such that \(\gamma\sim u_{g_{k}}^{(1)}\), \(d\geq d_{k}\), \(\operatorname{ord}_{i}u_{\gamma g_{k}}^{(i)}\leq\operatorname{ord}_{i}u_{h}^{(i)}\) (\(2\leq i\leq p\)), and \(\operatorname{ord}_{j}v_{\gamma g_{k}}^{(j)}\geq\operatorname{ord}_{j}v_{h}^{(j)}\) (\(1\leq j\leq p\)). Let us choose the value of \(k\) that corresponds to the greatest (with respect to \(<_{1}\)) 1-leader among \(u_{i}^{(1)}\) (\(1\leq i\leq t\)) and consider the \(\sigma^{*}\)-polynomial \(h^{\prime}=\gamma(I_{k})h-c_{h}w_{h}^{d-d_{k}}(\gamma g_{k})\). Clearly, \(\deg_{w_{h}}h^{\prime}<\deg_{w_{h}}h\) and \(h^{\prime}\) does not contain any \(\mathcal{A}\)-leader \(\gamma^{\prime}u_{\nu}^{(1)}\) (\(\gamma^{\prime}\in\Gamma\), \(1\leq\nu\leq t\)) that is greater than \(w_{h}\) with respect to \(<_{1}\) (such a term cannot appear in \(\gamma(I_{k})h\) or \(\gamma g_{k}\), since \(u_{\gamma g_{k}}^{(1)}=\gamma u_{g_{k}}^{(1)}=w_{h}\)). Applying the same procedure to \(h^{\prime}\) and continuing in the same way, we will arrive at a \(\sigma\)-polynomial \(\overline{h}\in R\) such that \(\overline{h}\) is \(E\)-reduced with respect to \(\mathcal{A}\) and \(Jh-\overline{h}\in[\mathcal{A}]^{*}\) for some \(J\in I(\mathcal{A})\).

The process of reduction described in the proof of the last proposition can be realized by the following algorithm. (Recall that \(\mathcal{E}_{R}\) denotes the ring of \(\sigma^{*}\)-operators over the \(\sigma^{*}\)-ring \(R=K\{y_{1},\ldots,y_{n}\}^{*}\).)

_Algorithm 1_.: \((h,t,g_{1},\ldots,g_{t};\,\overline{h})\)

**Input:** \(h\in R\), a positive integer \(t\), \(\mathcal{A}=\{g_{1},\ldots,g_{t}\}\subseteq R\) where \(g_{i}\neq 0\) for \(i=1,\ldots,t\)

**Output:** An element \(\overline{h}\in R\), elements \(C_{1},\ldots,C_{t}\in\mathcal{E}_{R}\) and \(J\in I(\mathcal{A})\) such that \(Jh=\sum_{i=1}^{t}C_{i}(g_{i})+\overline{h}\) and \(\overline{h}\) is \(E\)-reduced with respect to \(\mathcal{A}\)

**Begin**

\(J:=1\), \(C_{1}:=0,\ldots,C_{t}:=0\), \(\overline{h}:=h\)

**While** there exist \(k\), \(1\leq k\leq t\), and a term \(w\) that appears in \(\overline{h}\) with a (nonzero) coefficient \(c_{w}\), such that \(u_{g_{k}}^{(1)}\,|\,w\), \(\deg_{u_{g_{k}}^{(1)}}g_{k}\leq\deg_{w}\overline{h}\), \(\operatorname{ord}_{i}(\gamma_{kw}u_{g_{k}}^{(i)})\leq\operatorname{ord}_{i}u_{\overline{h}}^{(i)}\) for \(i=2,\ldots,p\), where \(\gamma_{kw}=\frac{w}{u_{g_{k}}^{(1)}}\), and \(\operatorname{ord}_{j}(\gamma_{kw}v_{g_{k}}^{(j)})\geq\operatorname{ord}_{j}v_{\overline{h}}^{(j)}\) for \(j=1,\ldots,p\), **do**

\(z\):= the greatest of the terms \(w\) that satisfy the above conditions.
\(l\):= the smallest number \(k\) for which \(u_{g_{k}}^{(1)}\) is the greatest (with respect to \(<_{1}\)) 1-leader of an element of \(\mathcal{A}\) such that \(u_{g_{k}}^{(1)}\,|\,z\), \(\deg_{u_{g_{k}}^{(1)}}g_{k}\leq\deg_{z}\overline{h}\), \(\operatorname{ord}_{i}(\gamma_{kz}u_{g_{k}}^{(i)})\leq\operatorname{ord}_{i}u_{\overline{h}}^{(i)}\) for \(i=2,\ldots,p\), where \(\gamma_{kz}=\frac{z}{u_{g_{k}}^{(1)}}\), and \(\operatorname{ord}_{j}(\gamma_{kz}v_{g_{k}}^{(j)})\geq\operatorname{ord}_{j}v_{\overline{h}}^{(j)}\) for \(j=1,\ldots,p\)

\(J:=\gamma_{lz}(I_{l})J\), \(C_{l}:=C_{l}+c_{z}z^{d-d_{l}}\gamma_{lz}\) where \(d=\deg_{z}\overline{h}\), \(d_{l}=\deg_{u_{g_{l}}^{(1)}}g_{l}\), and \(c_{z}\) is the coefficient of \(z^{d}\) when \(\overline{h}\) is written as a polynomial in \(z\)

\(\overline{h}:=\gamma_{lz}(I_{l})\overline{h}-c_{z}z^{d-d_{l}}(\gamma_{lz}g_{l})\)

**End**

**Definition 3.9**.: A set \(\mathcal{A}\subseteq K\{y_{1},\ldots,y_{n}\}^{*}\) is said to be \(E\)**-autoreduced** if either it is empty or \(\mathcal{A}\bigcap K=\emptyset\) and every element of \(\mathcal{A}\) is \(E\)-reduced with respect to all other elements of the set \(\mathcal{A}\).

_Example_.: Let \(K\) be an inversive difference field with a basic set \(\sigma=\{\alpha_{1},\alpha_{2}\}\) considered with a partition \(\sigma=\sigma_{1}\cup\sigma_{2}\) where \(\sigma_{1}=\{\alpha_{1}\}\) and \(\sigma_{2}=\{\alpha_{2}\}\). Let \(\mathcal{A}=\{g,h\}\subseteq K\{y\}^{*}\) (the ring of \(\sigma^{*}\)-polynomials in one \(\sigma^{*}\)-indeterminate \(y\)) where

\[g=\alpha_{1}^{3}\alpha_{2}^{-2}y+\alpha_{2}^{3}y+\alpha_{2}y,\qquad h=\alpha_{1}^{2}\alpha_{2}^{-1}y+\alpha_{1}^{-1}\alpha_{2}^{2}y+\alpha_{1}\alpha_{2}y.\]

Then \(u_{g}^{(1)}=\alpha_{1}^{3}\alpha_{2}^{-2}y\), \(u_{g}^{(2)}=\alpha_{2}^{3}y\), \(v_{g}^{(1)}=v_{g}^{(2)}=\alpha_{2}y\), \(u_{h}^{(1)}=\alpha_{1}^{2}\alpha_{2}^{-1}y\), \(v_{h}^{(1)}=v_{h}^{(2)}=\alpha_{1}\alpha_{2}y\), and \(u_{h}^{(2)}=\alpha_{1}^{-1}\alpha_{2}^{2}y\). We see that \(u_{g}^{(1)}\) is a transform of \(u_{h}^{(1)}\): \(u_{g}^{(1)}=\gamma u_{h}^{(1)}\) where \(\gamma=\alpha_{1}\alpha_{2}^{-1}\sim u_{h}^{(1)}\). Furthermore, \(\gamma h=\alpha_{1}^{3}\alpha_{2}^{-2}y+\alpha_{1}^{2}y+\alpha_{2}y\), so \(u_{\gamma h}^{(1)}=u_{\gamma h}^{(2)}=\alpha_{1}^{3}\alpha_{2}^{-2}y\), \(v_{\gamma h}^{(1)}=\alpha_{2}y\), and \(v_{\gamma h}^{(2)}=\alpha_{1}^{2}y\). Thus, \(\operatorname{ord}_{2}u_{\gamma h}^{(2)}=2<\operatorname{ord}_{2}u_{g}^{(2)}=3\) and \(\operatorname{ord}_{1}v_{\gamma h}^{(1)}=0=\operatorname{ord}_{1}v_{g}^{(1)}\), but \(\operatorname{ord}_{2}v_{\gamma h}^{(2)}=0<\operatorname{ord}_{2}v_{g}^{(2)}=1\). Therefore, \(g\) is \(E\)-reduced with respect to \(h\). Since \(h\) is clearly \(E\)-reduced with respect to \(g\), \(\mathcal{A}=\{g,h\}\) is an \(E\)-autoreduced set. At the same time, this set is not autoreduced in the sense of [14], where an analog of Definition 3.4 does not include the option "there exists \(j\in\mathbb{N}_{p}\) such that \(\operatorname{ord}_{j}v_{\gamma g}^{(j)}<\operatorname{ord}_{j}(v_{f}^{(j)})\)" in the case when \(f\) contains \((\gamma u_{g}^{(1)})^{e}\) with some \(\gamma\in\Gamma\), \(\gamma\sim u_{g}^{(1)}\) and \(e\geq d\) (see Definition 3.4).

We are going to show that every \(E\)-autoreduced set is finite. The proof of the following lemma can be found in [4, Chapter 0, Section 17].

**Lemma 3.10**.: _Let \(A\) be an infinite subset of the set \(\mathbb{N}^{m}\times\mathbb{N}_{n}\) (\(m,n\geq 1\)).
Then there exists an infinite sequence of elements of \(A\), strictly increasing relative to the product order, in which every element has the same projection on \(\mathbb{N}_{n}\)._

Since every infinite sequence of elements of \(\Gamma\) contains an infinite subsequence whose elements are similar to each other (there are only finitely many orthants of \(\mathbb{Z}^{m}\)), the last lemma immediately implies the following statement that will be used below.

**Lemma 3.11**.: _Let \(S\) be any infinite set of terms \(\gamma y_{j}\) (\(\gamma\in\Gamma\), \(1\leq j\leq n\)) in the ring \(K\{y_{1},\ldots,y_{n}\}^{*}\). Then there exists an index \(j\) (\(1\leq j\leq n\)) and an infinite sequence of terms \(\gamma_{1}y_{j},\gamma_{2}y_{j},\ldots,\gamma_{k}y_{j},\ldots\) in \(S\) such that \(\gamma_{k}\,|\,\gamma_{k+1}\) for every \(k=1,2,\ldots\)._

**Proposition 3.12**.: _Every \(E\)-autoreduced set is finite._

Proof.: Suppose that there is an infinite \(E\)-autoreduced set \(\mathcal{A}\). It follows from Lemma 3.11 that \(\mathcal{A}\) contains a sequence of \(\sigma^{*}\)-polynomials \(\{f_{1},f_{2},\ldots\}\) such that \(u_{f_{i}}^{(1)}\,|\,u_{f_{i+1}}^{(1)}\) for \(i=1,2,\ldots\). Since the sequence of non-negative integers \(\{\deg_{u_{f_{i}}^{(1)}}f_{i}\}\) cannot have an infinite decreasing subsequence, without loss of generality we can assume that \(\deg_{u_{f_{i}}^{(1)}}f_{i}\leq\deg_{u_{f_{i+1}}^{(1)}}f_{i+1}\) (\(i=1,2,\ldots\)). Let \(k_{ij}=\operatorname{ord}_{j}u_{f_{i}}^{(1)}\), \(l_{ij}=\operatorname{ord}_{j}u_{f_{i}}^{(j)}\), and \(n_{ij}=\operatorname{ord}_{j}v_{f_{i}}^{(j)}\) (\(1\leq j\leq p\)). Obviously, \(l_{ij}\geq k_{ij}\geq n_{ij}\) (\(i=1,2,\ldots\); \(j=1,\ldots,p\)), so \(\{(l_{i1}-k_{i1},l_{i2}-k_{i2},\ldots,l_{ip}-k_{ip})\,|\,i=1,2,\ldots\}\subseteq\mathbb{N}^{p}\) (note that \(l_{i1}-k_{i1}=0\) for every \(i\)) and \(\{(k_{i1}-n_{i1},k_{i2}-n_{i2},\ldots,k_{ip}-n_{ip})\mid i=1,2,\ldots\}\subseteq\mathbb{N}^{p}\). By Lemma 3.10, there exists an infinite sequence of indices \(i_{1}<i_{2}<\ldots\) such that

\[(l_{i_{1}2}-k_{i_{1}2},\ldots,l_{i_{1}p}-k_{i_{1}p})\leq_{P}(l_{i_{2}2}-k_{i_{2}2},\ldots,l_{i_{2}p}-k_{i_{2}p})\leq_{P}\ldots \tag{8}\]

and

\[(k_{i_{1}1}-n_{i_{1}1},\ldots,k_{i_{1}p}-n_{i_{1}p})\leq_{P}(k_{i_{2}1}-n_{i_{2}1},\ldots,k_{i_{2}p}-n_{i_{2}p})\leq_{P}\ldots. \tag{9}\]

Then for any \(j=2,\ldots,p\) and for \(\gamma_{12}=\dfrac{u_{f_{i_{2}}}^{(1)}}{u_{f_{i_{1}}}^{(1)}}\), we have (using (8)) \(\operatorname{ord}_{j}u_{\gamma_{12}f_{i_{1}}}^{(j)}\leq\operatorname{ord}_{j}\gamma_{12}u_{f_{i_{1}}}^{(j)}=k_{i_{2}j}-k_{i_{1}j}+l_{i_{1}j}\leq k_{i_{2}j}+l_{i_{2}j}-k_{i_{2}j}=l_{i_{2}j}=\operatorname{ord}_{j}u_{f_{i_{2}}}^{(j)}\). Similar arguments with the use of (9) show that \(\operatorname{ord}_{j}(\gamma_{12}v_{f_{i_{1}}}^{(j)})\geq\operatorname{ord}_{j}v_{f_{i_{2}}}^{(j)}\) for \(j=1,\ldots,p\). Thus, the \(\sigma^{*}\)-polynomial \(f_{i_{2}}\) is not \(E\)-reduced with respect to \(f_{i_{1}}\), which contradicts the fact that \(\mathcal{A}\) is an \(E\)-autoreduced set.

In what follows, while considering \(E\)-autoreduced sets we always assume that their elements are arranged in order of increasing rank.

**Definition 3.13**.: Let \(\mathcal{A}=\{g_{1},\ldots,g_{s}\}\) and \(\mathcal{B}=\{h_{1},\ldots,h_{t}\}\) be two \(E\)-autoreduced sets in the ring \(K\{y_{1},\ldots,y_{n}\}^{*}\). Then \(\mathcal{A}\) is said to have lower rank than \(\mathcal{B}\), written as \(\operatorname{rk}\mathcal{A}<\operatorname{rk}\mathcal{B}\), if one of the following two cases holds:
\(\operatorname{rk}g_{1}<\operatorname{rk}h_{1}\) or there exists \(k\in\mathbb{N}\) such that \(1<k\leq\min\{s,t\}\), \(\operatorname{rk}g_{i}=\operatorname{rk}h_{i}\) for \(i=1,\dots,k-1\) and \(\operatorname{rk}g_{k}<\operatorname{rk}h_{k}\).

2. \(s>t\) and \(\operatorname{rk}g_{i}=\operatorname{rk}h_{i}\) for \(i=1,\dots,t\).

If \(s=t\) and \(\operatorname{rk}g_{i}=\operatorname{rk}h_{i}\) for \(i=1,\dots,s\), then \(\mathcal{A}\) is said to have the same rank as \(\mathcal{B}\); in this case we write \(\operatorname{rk}\mathcal{A}=\operatorname{rk}\mathcal{B}\).

**Proposition 3.14**.: _In every nonempty family of \(E\)-autoreduced sets of difference polynomials there exists an \(E\)-autoreduced set of lowest rank._

Proof.: To prove the proposition, we mimic the proof of the corresponding statement for differential polynomials (see [4, Chapter 1, Proposition 3]). Let \(\mathcal{M}\) be a nonempty family of \(E\)-autoreduced sets in the ring \(K\{y_{1},\dots,y_{n}\}^{*}\). Let us inductively define an infinite descending chain of subsets of \(\mathcal{M}\) as follows: \(\mathcal{M}_{0}=\mathcal{M}\), \(\mathcal{M}_{1}=\{\mathcal{A}\in\mathcal{M}_{0}\,|\,\mathcal{A}\) contains at least one element and the first element of \(\mathcal{A}\) is of lowest possible rank\(\},\dots,\mathcal{M}_{k}=\{\mathcal{A}\in\mathcal{M}_{k-1}\,|\,\mathcal{A}\) contains at least \(k\) elements and the \(k\)th element of \(\mathcal{A}\) is of lowest possible rank\(\},\dots\). It is clear that if \(\mathcal{A}\) and \(\mathcal{B}\) are any two \(E\)-autoreduced sets in \(\mathcal{M}_{k}\) and \(f\) and \(g\) are their \(l\)th \(\sigma^{*}\)-polynomials (\(l\leq k\)), then \(\operatorname{rk}f=\operatorname{rk}g\). Therefore, if all sets \(\mathcal{M}_{k}\) are nonempty, then the set \(\{A_{k}\,|\,A_{k}\) is the \(k\)th element of some \(E\)-autoreduced set in \(\mathcal{M}_{k}\}\) would be an infinite \(E\)-autoreduced set, and this would contradict Proposition 3.12. Thus, there is the smallest positive integer \(k\) such that \(\mathcal{M}_{k}=\emptyset\). Clearly, every element of \(\mathcal{M}_{k-1}\) is an \(E\)-autoreduced set of lowest rank in the family \(\mathcal{M}\).

Let \(J\) be any nonzero ideal of the ring \(K\{y_{1},\dots,y_{n}\}^{*}\). Since the set of all \(E\)-autoreduced subsets of \(J\) is not empty (if \(0\neq f\in J\), then \(\{f\}\) is an \(E\)-autoreduced subset of \(J\)), the last statement shows that \(J\) contains an \(E\)-autoreduced subset of lowest rank. Such an \(E\)-autoreduced set is called an \(E\)**-characteristic set** of the ideal \(J\).

**Proposition 3.15**.: _Let \(\mathcal{A}=\{f_{1},\dots,f_{d}\}\) be an \(E\)-characteristic set of a \(\sigma\)-ideal \(J\) of the ring \(K\{y_{1},\dots,y_{n}\}^{*}\). Then an element \(g\in J\) is \(E\)-reduced with respect to the set \(\mathcal{A}\) if and only if \(g=0\)._

Proof.: First of all, note that if \(g\neq 0\) and \(\operatorname{rk}\,g<\operatorname{rk}\,f_{1}\), then \(\operatorname{rk}\,\{g\}<\operatorname{rk}\,\mathcal{A}\), which contradicts the fact that \(\mathcal{A}\) is an \(E\)-characteristic set of the ideal \(J\). Let \(\operatorname{rk}\,g>\operatorname{rk}\,f_{1}\) and let \(f_{1},\dots,f_{j}\) (\(1\leq j\leq d\)) be all elements of \(\mathcal{A}\) whose rank is lower than the rank of \(g\). Then the set \(\mathcal{A}^{\prime}=\{f_{1},\dots,f_{j},g\}\) is \(E\)-autoreduced.
Indeed, by the conditions of the proposition, the \(\sigma^{*}\)-polynomials \(f_{1},\dots,f_{j}\) are \(E\)-reduced with respect to each other and \(g\) is \(E\)-reduced with respect to the set \(\{f_{1},\dots,f_{j}\}\). Furthermore, each \(f_{i}\) (\(1\leq i\leq j\)) is \(E\)-reduced with respect to \(g\) because \(\operatorname{rk}\,f_{i}<\operatorname{rk}\,g\). Since \(\operatorname{rk}\,\mathcal{A}^{\prime}<\operatorname{rk}\,\mathcal{A}\), \(\mathcal{A}\) is not an \(E\)-characteristic set of \(J\), which contradicts the conditions of the proposition. Thus, \(g=0\).

It follows from Remark 3.5 that every autoreduced (respectively, characteristic) set of an ideal \(J\) of \(K\{y_{1},\dots,y_{n}\}^{*}\) in the sense of [6, Definitions 3.4.23 and 3.4.31] with respect to \(<_{1}\) is an \(E\)-autoreduced (respectively, \(E\)-characteristic) set of \(J\). Therefore, one can apply [6, Corollary 6.5.4] to obtain the following statement.

**Proposition 3.16**.: _Let \(\preceq\) be a preorder on \(K\{y_{1},\dots,y_{n}\}^{*}\) such that \(f\preceq g\) if and only if \(u_{g}^{(1)}\) is a transform of \(u_{f}^{(1)}\). Let \(f\) be a linear \(\sigma^{*}\)-polynomial in \(K\{y_{1},\dots,y_{n}\}^{*}\setminus K\). Then the set of all elements of \(\{\gamma f\,|\,\gamma\in\Gamma\}\) that are minimal with respect to \(\preceq\) is an \(E\)-characteristic set of the \(\sigma^{*}\)-ideal \([f]^{*}\)._

## 4 A new type of multivariate dimension polynomials of \(\sigma^{*}\)-field extensions

In this section we use properties of \(E\)-characteristic sets to obtain the following result, which generalizes Theorem 2.1 and introduces a new type of multivariate dimension polynomials of finitely generated inversive difference field extensions that carry more invariants than any previously known difference dimension polynomials. (By an invariant of an inversive difference (\(\sigma^{*}\)-) field extension we mean a numerical characteristic that does not depend on the choice of the finite set of its \(\sigma^{*}\)-generators.)

As before, \(K\) denotes an inversive difference (\(\sigma^{*}\)-) field with a basic set \(\sigma=\{\alpha_{1},\ldots,\alpha_{m}\}\) considered together with its partition (6) into the union of \(p\) disjoint subsets \(\sigma_{i}\), \(\operatorname{Card}\sigma_{i}=m_{i}\) (\(1\leq i\leq p\)). Furthermore, for any two \(p\)-tuples \((r_{1},\ldots,r_{p}),(s_{1},\ldots,s_{p})\in\mathbb{N}^{p}\) with \(s_{i}\leq r_{i}\) for \(i=1,\ldots,p\), we set

\[\Gamma(r_{1},\ldots,r_{p};s_{1},\ldots,s_{p})=\{\gamma\in\Gamma\,|\,s_{i}\leq\operatorname{ord}_{i}\gamma\leq r_{i}\text{ for }i=1,\ldots,p\}.\]

**Theorem 4.1**.: _Let \(L=K\langle\eta_{1},\ldots,\eta_{n}\rangle^{*}\) be a \(\sigma^{*}\)-field extension generated by a set \(\eta=\{\eta_{1},\ldots,\eta_{n}\}\). Then there exists a polynomial \(\Phi_{\eta\,|\,K}(t_{1},\ldots,t_{2p})\) in \(2p\) variables with rational coefficients and numbers \(r_{i}^{(0)},s_{i}^{(0)},s_{i}^{(1)}\in\mathbb{N}\) (\(1\leq i\leq p\)) with \(s_{i}^{(1)}<r_{i}^{(0)}-s_{i}^{(0)}\) such that_

\[\Phi_{\eta\,|\,K}(r_{1},\ldots,r_{p},s_{1},\ldots,s_{p})=\operatorname{tr.deg}_{K}K(\{\gamma\eta_{j}\,|\,\gamma\in\Gamma(r_{1},\ldots,r_{p};s_{1},\ldots,s_{p}),1\leq j\leq n\})\]

_for all \((r_{1},\ldots,r_{p},s_{1},\ldots,s_{p})\in\mathbb{N}^{2p}\) with \(r_{i}\geq r_{i}^{(0)}\), \(s_{i}^{(1)}\leq s_{i}\leq r_{i}-s_{i}^{(0)}\).
Furthermore, \(\deg\Phi_{\eta\,|\,K}\leq m\), \(\deg_{t_{i}}\Phi_{\eta\,|\,K}\leq m_{i}\) for \(i=1,\ldots,p\) and \(\deg_{t_{j}}\Phi_{\eta\,|\,K}\leq m_{j-p}\) for \(j=p+1,\ldots,2p\)._

Proof.: Let \(P\subseteq R=K\{y_{1},\ldots,y_{n}\}^{*}\) be the defining \(\sigma^{*}\)-ideal of the extension \(L/K\) and let \(\mathcal{A}=\{f_{1},\ldots,f_{q}\}\) be an \(E\)-characteristic set of \(P\). For any \(\overline{r}=(r_{1},\ldots,r_{p}),\overline{s}=(s_{1},\ldots,s_{p})\in\mathbb{N}^{p}\) such that \(\overline{s}\leq_{P}\overline{r}\) (that is, \(s_{i}\leq r_{i}\) for \(i=1,\ldots,p\)), let

\[W(\overline{r},\overline{s})=\{w\in\Gamma Y\,|\,s_{i}\leq\operatorname{ord}_{i}w\leq r_{i}\,\text{ for }\,i=1,\ldots,p\},\]
\[W_{\eta}(\overline{r},\overline{s})=\{w(\eta)\,|\,w\in W(\overline{r},\overline{s})\},\]
\[U^{\prime}(\overline{r},\overline{s})=\{u\in\Gamma Y\,|\,s_{i}\leq\operatorname{ord}_{i}u\leq r_{i}\,\text{ for }i=1,\ldots,p\text{ and }u\text{ is not a transform of any }u_{f_{j}}^{(1)}\,(1\leq j\leq q)\},\]
\[U^{\prime}_{\eta}(\overline{r},\overline{s})=\{u(\eta)\,|\,u\in U^{\prime}(\overline{r},\overline{s})\},\]

\(U^{\prime\prime}(\overline{r},\overline{s})=\{u\in\Gamma Y\,|\,s_{i}\leq\operatorname{ord}_{i}u\leq r_{i}\,(1\leq i\leq p)\), \(u\) is a transform of some \(u_{f_{j}}^{(1)}\) (\(1\leq j\leq q\)) and whenever \(u=\gamma u_{f_{j}}^{(1)}\) (\(\gamma\in\Gamma\), \(\gamma\sim u_{f_{j}}^{(1)}\)), either \(\operatorname{ord}_{1}v_{\gamma f_{j}}^{(1)}<s_{1}\) or there exists \(k\in\{2,\ldots,p\}\) such that \(\operatorname{ord}_{k}(u_{\gamma f_{j}}^{(k)})>r_{k}\) or there exists \(i\in\{2,\ldots,p\}\) such that \(\operatorname{ord}_{i}v_{\gamma f_{j}}^{(i)}<s_{i}\) ("or" is inclusive)\(\}\),

\[U^{\prime\prime}_{\eta}(\overline{r},\overline{s})=\{u(\eta)\,|\,u\in U^{\prime\prime}(\overline{r},\overline{s})\}.\]

Furthermore, let

\[U(\overline{r},\overline{s})=U^{\prime}(\overline{r},\overline{s})\cup U^{\prime\prime}(\overline{r},\overline{s})\,\text{ and }\,U_{\eta}(\overline{r},\overline{s})=U^{\prime}_{\eta}(\overline{r},\overline{s})\cup U^{\prime\prime}_{\eta}(\overline{r},\overline{s}).\]

We are going to prove that for every \(\overline{r},\overline{s}\in\mathbb{N}^{p}\) with \(\overline{s}<_{P}\overline{r}\), the set \(U_{\eta}(\overline{r},\overline{s})\) is a transcendence basis of the field \(K(W_{\eta}(\overline{r},\overline{s}))\) over \(K\). First, one can see that this set is algebraically independent over \(K\). Indeed, if \(f(w_{1}(\eta),\ldots,w_{k}(\eta))=0\) for some elements \(w_{1},\ldots,w_{k}\in U(\overline{r},\overline{s})\), then the \(\sigma^{*}\)-polynomial \(f(w_{1},\ldots,w_{k})\) lies in \(P\) and it is \(E\)-reduced with respect to \(\mathcal{A}\). (If \(f\) contains a term \(w=\gamma u_{f_{j}}^{(1)}\), \(1\leq j\leq q\), \(\gamma\in\Gamma\), \(\gamma\sim u_{f_{j}}^{(1)}\), such that \(\deg_{w}f\geq\deg_{u_{f_{j}}^{(1)}}f_{j}\), then \(w\in U^{\prime\prime}(\overline{r},\overline{s})\), so either \(\operatorname{ord}_{1}(v_{\gamma f_{j}}^{(1)})<s_{1}\leq\operatorname{ord}_{1}v_{f}^{(1)}\) or there exists \(k\in\{2,\ldots,p\}\) such that \(\operatorname{ord}_{k}u_{\gamma f_{j}}^{(k)}>r_{k}\geq\operatorname{ord}_{k}u_{f}^{(k)}\) or there exists \(i\in\{2,\ldots,p\}\) such that \(\operatorname{ord}_{i}v_{\gamma f_{j}}^{(i)}<s_{i}\leq\operatorname{ord}_{i}v_{f}^{(i)}\); "or" is inclusive. It follows that \(f\) is \(E\)-reduced with respect to \(\mathcal{A}\).)
By Proposition 3.15, \(f=0\), so the set \(U_{\eta}(\overline{r},\overline{s})\) is algebraically independent over \(K\).

Now let us prove that if \(0\leq s_{i}\leq r_{i}-s_{i}^{(0)}\), where \(s_{i}^{(0)}=\max\{\operatorname{Eord}_{i}f_{j}\,|\,1\leq j\leq q\}\) (\(1\leq i\leq p\)), then every element \(\gamma\eta_{k}\in W_{\eta}(\overline{r},\overline{s})\setminus U_{\eta}(\overline{r},\overline{s})\) (\(\gamma\in\Gamma\), \(1\leq k\leq n\)) is algebraic over the field \(K(U_{\eta}(\overline{r},\overline{s}))\). In this case, since \(\gamma y_{k}\notin U(\overline{r},\overline{s})\), \(\gamma y_{k}\) is equal to some term of the form \(\gamma^{\prime}u_{f_{j}}^{(1)}\) (\(1\leq j\leq q\)) where \(\gamma^{\prime}\in\Gamma\), \(\gamma^{\prime}\sim u_{f_{j}}^{(1)}\), \(\operatorname{ord}_{i}u_{\gamma^{\prime}f_{j}}^{(i)}\leq r_{i}\) for \(i=2,\ldots,p\), and \(\operatorname{ord}_{l}v_{\gamma^{\prime}f_{j}}^{(l)}\geq s_{l}\) for \(l=1,\ldots,p\). Let us represent \(f_{j}\) as a polynomial in \(u_{f_{j}}^{(1)}\):

\[f_{j}=I_{d_{j}}^{(j)}(u_{f_{j}}^{(1)})^{d_{j}}+\cdots+I_{1}^{(j)}u_{f_{j}}^{(1)}+I_{0}^{(j)}\]

where \(I_{0}^{(j)},I_{1}^{(j)},\ldots,I_{d_{j}}^{(j)}\) do not contain \(u_{f_{j}}^{(1)}\) (therefore, all terms in these \(\sigma^{*}\)-polynomials are lower than \(u_{f_{j}}^{(1)}\) with respect to \(<_{1}\)). Since \(f_{j}\in P\), \(f_{j}(\eta)=0\), that is,

\[I_{d_{j}}^{(j)}(\eta)(u_{f_{j}}^{(1)}(\eta))^{d_{j}}+\cdots+I_{1}^{(j)}(\eta)u_{f_{j}}^{(1)}(\eta)+I_{0}^{(j)}(\eta)=0. \tag{10}\]

Note that \(I_{d_{j}}^{(j)}(\eta)\neq 0\). Indeed, since \(\operatorname{rk}I_{d_{j}}^{(j)}<\operatorname{rk}f_{j}\), the equality \(I_{d_{j}}^{(j)}(\eta)=0\) would imply that \(I_{d_{j}}^{(j)}\in P\). In this case, the family of all \(f_{l}\) with \(\operatorname{rk}f_{l}<\operatorname{rk}I_{d_{j}}^{(j)}\) together with \(I_{d_{j}}^{(j)}\) would form an \(E\)-autoreduced set in \(P\) whose rank is lower than the rank of \(\mathcal{A}\). This contradicts the fact that \(\mathcal{A}\) is an \(E\)-characteristic set of \(P\). Similarly, \(I_{\nu}^{(j)}\notin P\) for any \(\nu=0,\ldots,d_{j}\) (and any \(j=1,\ldots,q\)) and, since \(P\) is a \(\sigma^{*}\)-ideal, \(\gamma(I_{\nu}^{(j)})\notin P\) for any \(I_{\nu}^{(j)}\), \(\gamma\in\Gamma\). Therefore, if we apply \(\gamma^{\prime}\) to both sides of (10), the resulting equality will show that the element \(\gamma^{\prime}u_{f_{j}}^{(1)}(\eta)=\gamma\eta_{k}\) is algebraic over the field \(K(\{\tilde{\gamma}\eta_{l}\,|\,s_{i}\leq\operatorname{ord}_{i}\tilde{\gamma}\leq r_{i}\,(1\leq i\leq p),\,\tilde{\gamma}y_{l}<_{1}\gamma^{\prime}u_{f_{j}}^{(1)}\})\). (Note that if \(I=I_{\nu}^{(j)}\) for some \(j\in\{1,\ldots,q\}\) and \(\nu\in\{0,\ldots,d_{j}\}\), then \(\operatorname{ord}_{i}(\gamma^{\prime}u_{I}^{(i)})\leq\operatorname{ord}_{i}u_{\gamma^{\prime}I}^{(i)}\leq r_{i}\) (\(2\leq i\leq p\)) and \(\operatorname{ord}_{k}(\gamma^{\prime}v_{I}^{(k)})\geq\operatorname{ord}_{k}v_{\gamma^{\prime}I}^{(k)}\geq s_{k}\) (\(1\leq k\leq p\)).) Now, the induction on the well-ordered (with respect to \(<_{1}\)) set of terms \(\Gamma Y\) completes the proof of the fact that the set \(U_{\eta}(\overline{r},\overline{s})\) is a transcendence basis of the field \(K(W_{\eta}(\overline{r},\overline{s}))\) over \(K\).
In order to evaluate the size of \(U_{\eta}(\overline{r},\overline{s})\) we are going to evaluate the sizes of the sets \(U_{\eta}^{\prime}(\overline{r},\overline{s})\) and \(U_{\eta}^{\prime\prime}(\overline{r},\overline{s})\), that is, the sizes of the sets \(U^{\prime}(\overline{r},\overline{s})\) and \(U^{\prime\prime}(\overline{r},\overline{s})\). For every \(k=1,\ldots,n\), let

\[A_{k}=\{(i_{1},\ldots,i_{m})\in\mathbb{Z}^{m}\,\mid\,\alpha_{1}^{i_{1}}\ldots\alpha_{m}^{i_{m}}y_{k}\,\,\,\text{is the 1-leader of some element of }\,\mathcal{A}\}.\]

Applying Theorem 2.8, we obtain that there exists a numerical polynomial \(\omega_{k}(t_{1},\ldots,t_{p})\) in \(p\) variables with rational coefficients such that \(\omega_{k}(r_{1},\ldots,r_{p})=\operatorname{Card}W_{A_{k}}(r_{1},\ldots,r_{p})\) for all sufficiently large \((r_{1},\ldots,r_{p})\in\mathbb{N}^{p}\). It follows that if we set \(\psi_{\eta|K}(t_{1},\ldots,t_{p})=\sum_{k=1}^{n}\omega_{k}(t_{1},\ldots,t_{p})\), then there exist \(r_{i}^{(0)},s_{i}^{(0)},s_{i}^{(1)}\in\mathbb{N}\)\((1\leq i\leq p)\) with \(s_{i}^{(1)}<r_{i}^{(0)}-s_{i}^{(0)}\) such that for all \(\overline{r}=(r_{1},\ldots,r_{p}),\overline{s}=(s_{1},\ldots,s_{p})\in\mathbb{N}^{p}\) with \(r_{i}\geq r_{i}^{(0)}\), \(s_{i}^{(1)}\leq s_{i}\leq r_{i}-s_{i}^{(0)}\), one has

\[\operatorname{Card}U_{\eta}^{\prime}(\overline{r},\overline{s})=\psi_{\eta|K}(r_{1},\ldots,r_{p})-\psi_{\eta|K}(s_{1}-1,\ldots,s_{p}-1). \tag{11}\]

Furthermore, \(\deg\psi_{\eta|K}\leq m\), and \(\deg\psi_{\eta|K}=m\) if and only if at least one of the sets \(A_{k}\)\((1\leq k\leq n)\) is empty.

In order to evaluate \(\operatorname{Card}U^{\prime\prime}(\overline{r},\overline{s})\), note that this set consists of all terms \(\gamma u_{f_{j}}^{(1)}\)\((\gamma\in\Gamma\), \(\gamma\sim u_{f_{j}}^{(1)},1\leq j\leq q)\) such that \(s_{i}\leq\operatorname{ord}_{i}u_{\gamma f_{j}}^{(1)}\leq r_{i}\) and either \(\operatorname{ord}_{1}v_{\gamma f_{j}}^{(1)}<s_{1}\) or there exists \(k\in\{2,\ldots,p\}\) such that \(\operatorname{ord}_{k}u_{\gamma f_{j}}^{(k)}>r_{k}\) or there exists \(i\in\{2,\ldots,p\}\) such that \(\operatorname{ord}_{i}v_{\gamma f_{j}}^{(i)}<s_{i}\) ("or" is inclusive).
It follows from Remarks 3.6 and 2.10 that if we fix \(j\), the number of such terms \(\gamma u_{f_{j}}^{(1)}\) satisfying the conditions \(\operatorname{ord}_{i}v_{\gamma f_{j}}^{(i)}=\operatorname{ord}_{i}\gamma+b_{if_{j}}<s_{i}\), \(\operatorname{ord}_{i}(\gamma u_{f_{j}}^{(1)})=\operatorname{ord}_{i}\gamma+a_{1f_{j}}\geq s_{i}\) for \(i\in\{k_{1},\ldots,k_{d}\}\subseteq\{1,\ldots,p\}\), \(\operatorname{ord}_{i}(v_{\gamma f_{j}}^{(i)})=\operatorname{ord}_{i}\gamma+b_{if_{j}}\geq s_{i}\) for \(i\in\{1,\ldots,p\}\), \(i\neq k_{\nu}\)\((1\leq\nu\leq d)\), and \(\operatorname{ord}_{i}u_{\gamma f_{j}}^{(i)}=\operatorname{ord}_{i}\gamma+a_{if_{j}}\leq r_{i}\) for \(i=1,\ldots,p\) is equal to

\[\prod_{\begin{subarray}{c}1\leq i\leq p,\\ i\neq k_{1},\ldots,k_{d}\end{subarray}}\Bigg[\sum_{\mu=0}^{m_{i}}(-1)^{m_{i}-\mu}2^{\mu}\binom{m_{i}}{\mu}\Bigg(\binom{r_{i}-a_{if_{j}}+\mu}{\mu}-\binom{s_{i}-b_{if_{j}}+\mu-1}{\mu}\Bigg)\Bigg]\cdot\prod_{\nu=1}^{d}\Bigg[\sum_{\mu=0}^{m_{k_{\nu}}}(-1)^{m_{k_{\nu}}-\mu}2^{\mu}\binom{m_{k_{\nu}}}{\mu}\cdot\Bigg(\binom{s_{k_{\nu}}-b_{k_{\nu}f_{j}}-1+m_{k_{\nu}}}{m_{k_{\nu}}}-\binom{s_{k_{\nu}}-a_{1f_{j}}-1+m_{k_{\nu}}}{m_{k_{\nu}}}\Bigg)\Bigg] \tag{12}\]

and a similar formula holds for the number of terms satisfying the conditions \(\operatorname{ord}_{i}u_{\gamma f_{j}}^{(i)}>r_{i}\) for \(i\in\{l_{1},\ldots,l_{e}\}\subseteq\{2,\ldots,p\}\) (\(\gamma\in\Gamma\), \(\gamma\sim u_{f_{j}}^{(1)}\)), \(\operatorname{ord}_{i}v_{\gamma f_{j}}^{(i)}\geq s_{i}\) for \(i\in\{1,\ldots,p\}\) and \(\operatorname{ord}_{i}u_{\gamma f_{j}}^{(i)}\leq r_{i}\) for \(i\neq l_{\nu}\)\((1\leq\nu\leq e)\).

Applying the principle of inclusion and exclusion (taking into account terms that are multiples of more than one 1-leader), we obtain that \(\operatorname{Card}U^{\prime\prime}(\overline{r},\overline{s})\) is an alternating sum of polynomials in \(r_{1},\ldots,r_{p},s_{1},\ldots,s_{p}\) that are products of \(k\)\((0\leq k\leq p)\) terms of the form \(\binom{r_{i}-a_{i}+m_{i}}{m_{i}}-\binom{s_{i}-b_{i}+m_{i}}{m_{i}}\) with \(a_{i},b_{i}\in\mathbb{N}\)\((1\leq i\leq p)\) and \(p-k\) terms of the form either \(\binom{s_{i}-c_{i}+m_{i}}{m_{i}}-\binom{s_{i}-d_{i}+m_{i}}{m_{i}}\) or \(\binom{r_{i}-c_{i}+m_{i}}{m_{i}}-\binom{r_{i}-d_{i}+m_{i}}{m_{i}}\) with \(c_{i},d_{i}\in\mathbb{N}\), \(c_{i}<d_{i}\). Since each such polynomial has total degree at most \(m-1\) and its degree with respect to \(r_{i}\) or \(s_{i}\) (\(1\leq i\leq p\)) does not exceed \(m_{i}\), we obtain that \(\operatorname{Card}U^{\prime\prime}(\overline{r},\overline{s})=\lambda(r_{1},\ldots,r_{p},s_{1},\ldots,s_{p})\) where \(\lambda(t_{1},\ldots,t_{2p})\) is a numerical polynomial in \(2p\) variables such that \(\deg\lambda<m\) and \(\deg_{t_{i}}\lambda\leq m_{i}\), \(\deg_{t_{j}}\lambda\leq m_{j-p}\) for \(i=1,\ldots,p\), \(j=p+1,\ldots,2p\). It follows that the numerical polynomial

\[\Phi_{\eta\,|\,K}(t_{1},\ldots,t_{2p})=\psi_{\eta\,|\,K}(t_{1},\ldots,t_{p})-\psi_{\eta\,|\,K}(t_{p+1}-1,\ldots,t_{2p}-1)+\lambda(t_{1},\ldots,t_{2p})\]

satisfies the conditions of our theorem.

**Definition 4.2**.: The numerical polynomial \(\Phi_{\eta|K}(t_{1},\ldots,t_{2p})\) whose existence is established by Theorem 4.1 is called the _\(2p\)-variate \(\sigma^{*}\)-dimension polynomial_ of the \(\sigma^{*}\)-field extension \(L/K\) associated with the system of \(\sigma^{*}\)-generators \(\eta\) and partition (6) of the set \(\sigma\).
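The building block of these counting formulas is the number of elements \(\gamma\) of a single block of \(m\) translations with \(s\leq\operatorname{ord}\gamma\leq r\), i.e., the number of lattice points \(z\in\mathbb{Z}^{m}\) with \(s\leq|z_{1}|+\cdots+|z_{m}|\leq r\) (the shifts \(a_{if_{j}},b_{if_{j}}\) in (12) merely translate this window). The following short script is our addition, not part of the text; it brute-force checks the corresponding closed form for \(s\geq 1\) (the case \(s=0\) depends on the convention for binomial coefficients as numerical polynomials, so it is excluded here).

```python
# Sanity check: the alternating binomial sum counts lattice points z in Z^m
# with s <= |z_1| + ... + |z_m| <= r, for s >= 1.
from itertools import product
from math import comb

def count_brute_force(m, r, s):
    # Enumerate all z in {-r,...,r}^m and keep those in the norm window.
    return sum(1 for z in product(range(-r, r + 1), repeat=m)
               if s <= sum(abs(c) for c in z) <= r)

def count_closed_form(m, r, s):
    # The per-block factor appearing in (12) (with zero shifts).
    return sum((-1) ** (m - mu) * 2 ** mu * comb(m, mu)
               * (comb(r + mu, mu) - comb(s + mu - 1, mu))
               for mu in range(m + 1))

for m in (1, 2, 3):
    for r in range(1, 7):
        for s in range(1, r + 1):
            assert count_brute_force(m, r, s) == count_closed_form(m, r, s)
print("closed form agrees with brute force for all tested (m, r, s)")
```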
The following theorem describes some invariants of a \(2p\)-variate \(\sigma^{*}\)-dimension polynomial of a finitely generated \(\sigma^{*}\)-field extension \(L/K\) with partition (6) of \(\sigma\), that is, characteristics of the extension that do not depend on the set of \(\sigma^{*}\)-generators of \(L\) over \(K\). In what follows we use the following notation. For any permutation \((j_{1},\ldots,j_{2p})\) of the set \(\{1,\ldots,2p\}\), let \(<_{j_{1},\ldots,j_{2p}}\) denote the lexicographic order on \(\mathbb{N}^{2p}\) such that \((k_{1},\ldots,k_{2p})<_{j_{1},\ldots,j_{2p}}(l_{1},\ldots,l_{2p})\) if and only if either \(k_{j_{1}}<l_{j_{1}}\) or there exists \(q\in\mathbb{N}\), \(2\leq q\leq 2p\), such that \(k_{j_{\nu}}=l_{j_{\nu}}\) for \(\nu<q\) and \(k_{j_{q}}<l_{j_{q}}\).

**Theorem 4.3**.: _With the notation of Theorem 4.1, let \(\Phi_{\eta\,|\,K}(t_{1},\ldots,t_{2p})\) be the \(2p\)-variate \(\sigma^{*}\)-dimension polynomial of the \(\sigma^{*}\)-field extension \(L=K\langle\eta_{1},\ldots,\eta_{n}\rangle^{*}\). Since the degrees of \(\Phi_{\eta\,|\,K}\) with respect to \(t_{i}\) and \(t_{p+i}\) (\(1\leq i\leq p\)) do not exceed \(m_{i}=\operatorname{Card}\sigma_{i}\) (see partition (6)), Theorem 2.3 shows that this polynomial can be written as_

\[\Phi_{\eta\,|\,K}=\sum_{i_{1}=0}^{m_{1}}\ldots\sum_{i_{p}=0}^{m_{p}}\sum_{i_{p+1}=0}^{m_{1}}\ldots\sum_{i_{2p}=0}^{m_{p}}a_{i_{1}\ldots i_{2p}}\binom{t_{1}+i_{1}}{i_{1}}\ldots\binom{t_{2p}+i_{2p}}{i_{2p}}.\]

_Let \(E_{\eta}=\{(i_{1},\ldots,i_{2p})\in\mathbb{N}^{2p}\,\mid\,0\leq i_{k},i_{p+k}\leq m_{k}\) (\(k=1,\ldots,p\)) and \(a_{i_{1}\ldots i_{2p}}\neq 0\}\). Then the total degree \(d\) of \(\Phi_{\eta\,|\,K}\) with respect to \(t_{1},\ldots,t_{p}\) and the coefficients of the terms of total degree \(d\) in \(\Phi_{\eta\,|\,K}\) do not depend on the choice of the set of \(\sigma^{*}\)-generators \(\eta\). Furthermore, if \((\mu_{1},\ldots,\mu_{p})\) is any permutation of \(\{1,\ldots,p\}\) and \((\nu_{1},\ldots,\nu_{p})\) is any permutation of \(\{p+1,\ldots,2p\}\), then the maximal element \((k_{1},\ldots,k_{2p})\) of \(E_{\eta}\) with respect to the lexicographic order \(<_{\mu_{1},\ldots,\mu_{p},\nu_{1},\ldots,\nu_{p}}\) and the corresponding coefficient \(a_{k_{1}\ldots k_{2p}}\) do not depend on the choice of a finite set of \(\sigma^{*}\)-generators of \(L/K\) either. Finally, \(a_{m_{1}\ldots m_{p}0\ldots 0}=a_{0\ldots 0m_{1}\ldots m_{p}}=\sigma\mbox{-tr.}\deg_{K}L\)._

Proof.: Suppose that \(\zeta=\{\zeta_{1},\ldots,\zeta_{l}\}\) is another set of \(\sigma^{*}\)-generators of \(L/K\), that is, \(L=K\langle\eta_{1},\ldots,\eta_{n}\rangle^{*}=K\langle\zeta_{1},\ldots,\zeta_{l}\rangle^{*}\). Let

\[\Phi_{\zeta\,|\,K}(t_{1},\ldots,t_{2p})=\sum_{i_{1}=0}^{m_{1}}\ldots\sum_{i_{p}=0}^{m_{p}}\sum_{i_{p+1}=0}^{m_{1}}\ldots\sum_{i_{2p}=0}^{m_{p}}b_{i_{1}\ldots i_{2p}}\binom{t_{1}+i_{1}}{i_{1}}\ldots\binom{t_{2p}+i_{2p}}{i_{2p}}\]

be the \(2p\)-variate dimension polynomial of the extension \(L/K\) associated with the system of \(\sigma^{*}\)-generators \(\zeta\). Then there exist \(h_{1},\ldots,h_{p}\in\mathbb{N}\) such that \(\eta_{i}\in K(\bigcup_{j=1}^{l}\Gamma(h_{1},\ldots,h_{p})\zeta_{j})\) and \(\zeta_{k}\in K(\bigcup_{j=1}^{n}\Gamma(h_{1},\ldots,h_{p})\eta_{j})\) for any \(i=1,\ldots,n\) and \(k=1,\ldots,l\). (If \(\Gamma^{\prime}\subseteq\Gamma\), then \(\Gamma^{\prime}\zeta_{j}\) denotes the set \(\{\gamma\zeta_{j}\,|\,\gamma\in\Gamma^{\prime}\}\).)
It follows that there exist \(r_{i}^{(0)},s_{i}^{(0)},s_{i}^{(1)}\in\mathbb{N}\) (\(1\leq i\leq p\)) with \(s_{i}^{(1)}<r_{i}^{(0)}-s_{i}^{(0)}\) such that whenever \(r_{i}\geq r_{i}^{(0)}\), \(s_{i}^{(1)}\leq s_{i}\leq r_{i}-s_{i}^{(0)}\) (\(1\leq i\leq p\)), one has

\[\Phi_{\eta\,|\,K}(r_{1},\ldots,r_{2p})\leq\Phi_{\zeta\,|\,K}(r_{1}+h_{1},\ldots,r_{p}+h_{p},r_{p+1}-h_{1},\ldots,r_{2p}-h_{p})\]

and

\[\Phi_{\zeta\,|\,K}(r_{1},\ldots,r_{2p})\leq\Phi_{\eta\,|\,K}(r_{1}+h_{1},\ldots,r_{p}+h_{p},r_{p+1}-h_{1},\ldots,r_{2p}-h_{p}).\]

Now the statement of the theorem about the maximal elements of \(E_{\eta}\) with respect to the lexicographic orders \(<_{\mu_{1},\ldots,\mu_{p},\nu_{1},\ldots,\nu_{p}}\) and the corresponding coefficients follows from the fact that for any element \((k_{1},\ldots,k_{2p})\in E^{\prime}_{\eta}\), the term \(\binom{t_{1}+k_{1}}{k_{1}}\ldots\binom{t_{2p}+k_{2p}}{k_{2p}}\) appears in \(\Phi_{\eta|K}(t_{1},\ldots,t_{2p})\) and \(\Phi_{\zeta|K}(t_{1},\ldots,t_{2p})\) with the same coefficient \(a_{k_{1}\ldots k_{2p}}\). The equality of the coefficients of the corresponding terms of total degree \(d=\deg\Phi_{\eta\,|\,K}=\deg\Phi_{\zeta\,|\,K}\) in \(\Phi_{\eta\,|\,K}\) and \(\Phi_{\zeta\,|\,K}\) can be shown as in the proof of [12, Theorem 3.3.21].

In order to prove the last part of the theorem, note that the expression (12) and a similar expression corresponding to the conditions with \(\operatorname{ord}_{i}u_{\gamma f_{j}}^{(i)}>r_{i}\) for \(i\in\{l_{1},\ldots,l_{e}\}\subseteq\{2,\ldots,p\}\) (\(\gamma\in\Gamma\), \(\gamma\sim u_{f_{j}}^{(1)}\)), \(\operatorname{ord}_{i}v_{\gamma f_{j}}^{(i)}\geq s_{i}\) for \(i\in\{1,\ldots,p\}\) and \(\operatorname{ord}_{i}u_{\gamma f_{j}}^{(i)}\leq r_{i}\) for \(i\neq l_{\nu}\), \(\nu=1,\ldots,e\) (see the proof of Theorem 4.1) have the property that their total degrees with respect to \(r_{1},\ldots,r_{p}\) and \(s_{1},\ldots,s_{p}\) are less than \(m\). It follows that the coefficients of the terms of total degree \(m\) in \(t_{1},\ldots,t_{p}\) and terms of total degree \(m\) in \(t_{p+1},\ldots,t_{2p}\) in the polynomial \(\Phi_{\eta\,|\,K}\) are equal to the corresponding coefficients in the polynomials \(\psi_{\eta\,|\,K}(t_{1},\ldots,t_{p})\) and \(\psi_{\eta\,|\,K}(t_{p+1},\ldots,t_{2p})\), respectively (see (11)). Now, using the fact that if elements \(\eta_{i_{1}},\ldots,\eta_{i_{k}}\) (\(i_{1},\ldots,i_{k}\in\{1,\ldots,n\}\)) are \(\sigma\)-algebraically independent over \(K\), then \(\operatorname{tr.deg}_{K}K(\{\gamma\eta_{i_{j}}\,|\,\gamma\in\Gamma(r_{1},\ldots,r_{p};s_{1},\ldots,s_{p}),1\leq j\leq k\})=k\prod_{i=1}^{p}\left[\sum_{j=0}^{m_{i}}(-1)^{m_{i}-j}2^{j}\binom{m_{i}}{j}\left(\binom{r_{i}+j}{j}-\binom{s_{i}+j-1}{j}\right)\right]\) for any \(r_{i},s_{i}\in\mathbb{N}\) with \(s_{i}\leq r_{i}\) (\(1\leq i\leq p\)), one can mimic the proof of [6, Theorem 6.4.8] to obtain that \(a_{m_{1}\ldots m_{p}0\ldots 0}=a_{0\ldots 0m_{1}\ldots m_{p}}=\sigma\)-\(\operatorname{tr.deg}_{K}L\).

_Example_.: Let \(K\) be an inversive difference (\(\sigma^{*}\)-) field with a basic set \(\sigma=\{\alpha_{1},\alpha_{2},\alpha_{3}\}\) considered together with its partition \(\sigma=\{\alpha_{1}\}\cup\{\alpha_{2}\}\cup\{\alpha_{3}\}\). Let \(L=K\langle\eta\rangle^{*}\) be a \(\sigma^{*}\)-field extension with the defining equation

\[\alpha_{1}^{a}\eta+\alpha_{1}^{-a}\eta+\alpha_{2}^{b}\eta+\alpha_{3}^{c}\eta=0 \tag{13}\]

where \(a,b,c\in\mathbb{N}\), \(a>b>c>0\).
It means that the defining \(\sigma^{*}\)-ideal \(P\) of the extension \(L/K\) is a linear \(\sigma^{*}\)-ideal of the ring of \(\sigma^{*}\)-polynomials \(K\{y\}^{*}\) generated by the linear \(\sigma^{*}\)-polynomial \(f=\alpha_{1}^{a}y+\alpha_{1}^{-a}y+\alpha_{2}^{b}y+\alpha_{3}^{c}y\). By Proposition 3.16, the \(\sigma^{*}\)-polynomials \(f\) and \(\alpha_{1}^{-1}f=\alpha_{1}^{-(a+1)}y+\alpha_{1}^{a-1}y+\alpha_{1}^{-1}\alpha_{2}^{b}y+\alpha_{1}^{-1}\alpha_{3}^{c}y\) form an \(E\)-characteristic set of \(P\).

Setting \(\overline{r}=(r_{1},r_{2},r_{3})\), \(\overline{s}=(s_{1},s_{2},s_{3})\) and using the notation of the proof of Theorem 4.1, we obtain (applying Theorems 2.6 and 2.8) that for all sufficiently large \((r_{1},r_{2},r_{3},s_{1},s_{2},s_{3})\in\mathbb{N}^{6}\),

\[\operatorname{Card}U_{\eta}^{\prime}(\overline{r},\overline{s})=\phi_{\{(a,0,0),(-a-1,0,0)\}}(r_{1},r_{2},r_{3},s_{1},s_{2},s_{3})=2a(2r_{2}-2s_{2}+2)(2r_{3}-2s_{3}+2).\]

Furthermore, using the method of inclusion and exclusion (as indicated in the proof of Theorem 4.1), we get

\[\operatorname{Card}U_{\eta}^{\prime\prime}(\overline{r},\overline{s})=(2a+1)(2r_{2}-2s_{2}+2)(2r_{3}-2s_{3}+2)+4b(r_{1}-s_{1}+1)(2r_{3}-2s_{3}+2)+\]
\[4c(r_{1}-s_{1}+1)(2r_{2}-2s_{2}+2)-2b(2a+1)(2r_{3}-2s_{3}+2)-2c(2a+1)(2r_{2}-2s_{2}+2)-\]
\[8bc(r_{1}-s_{1}+1)+8abc+4bc.\]

Since the \(6\)-variate \(\sigma^{*}\)-dimension polynomial \(\Phi_{\eta\,|\,K}\) expresses the number of elements of the set \(U_{\eta}^{\prime}(\overline{r},\overline{s})\cup U_{\eta}^{\prime\prime}(\overline{r},\overline{s})\) for all sufficiently large values of its arguments, we obtain

\[\Phi_{\eta\,|\,K}(t_{1},\dots,t_{6})=8ct_{1}t_{2}+8bt_{1}t_{3}-8ct_{1}t_{5}-8bt_{1}t_{6}+4(4a+1)t_{2}t_{3}-8ct_{2}t_{4}-\]
\[4(4a+1)t_{2}t_{6}-8bt_{3}t_{4}-4(4a+1)t_{3}t_{5}+8ct_{4}t_{5}+8bt_{4}t_{6}+4(4a+1)t_{5}t_{6}+\cdots, \tag{14}\]

where the dots denote a linear combination of monomials of total degree at most \(1\).

The univariate \(\sigma^{*}\)-dimension polynomial \(\phi_{\eta\,|\,K}(t)\) (see Theorem 2.1) is as follows (by [6, Theorem 6.4.8], it coincides with the dimension polynomial of the set \(A=\{(a,0,0),(-a-1,0,0)\}\subset\mathbb{Z}^{3}\), so it can be computed using Theorems 2.8 and 2.6 with \(p=1\)):

\[\phi_{\eta\,|\,K}(t)=4at^{2}+\text{a linear combination of monomials of degree at most }1.\]

By Theorem 4.3, \(\deg\,\Phi_{\eta\,|\,K}=2\) and the coefficients of the terms \(t_{i}t_{j}\) (\(1\leq i<j\leq 6\)) are invariants of the extension \(L/K\), that is, they do not depend on the set of \(\sigma^{*}\)-generators of this extension. Therefore, the polynomial \(\Phi_{\eta\,|\,K}(t_{1},\dots,t_{6})\) carries all three parameters \(a\), \(b\), and \(c\) of the defining equation (13). At the same time, the univariate polynomial \(\phi_{\eta\,|\,K}(t)\) carries only the parameter \(a\).

The fact that the \(2p\)-variate \(\sigma^{*}\)-dimension polynomial carries more invariants than its univariate counterpart can be applied to the equivalence problem for algebraic difference equations. Suppose that we have two systems of algebraic difference (\(\sigma\)-) equations over a \(\sigma^{*}\)-field \(K\) (i.
e., equations of the form \(f_{i}=0\) (\(i\in I\)) where all \(f_{i}\) lie in some ring of \(\sigma^{*}\)-polynomials \(K\{y_{1},\dots,y_{n}\}^{*}\)) that are defining equations of finitely generated \(\sigma^{*}\)-field extensions \(L/K\) and \(L^{\prime}/K\) (that is, the left-hand sides of the systems generate prime \(\sigma^{*}\)-ideals \(P\) and \(P^{\prime}\) in the corresponding rings of \(\sigma^{*}\)-polynomials \(R\) and \(R^{\prime}\) (possibly with different numbers of \(\sigma^{*}\)-indeterminates) such that \(L\) and \(L^{\prime}\) are \(\sigma\)-isomorphic to \(\operatorname{qf}(R/P)\) and \(\operatorname{qf}(R^{\prime}/P^{\prime})\), respectively). These systems are said to be _equivalent_ if there is a \(\sigma\)-isomorphism between \(L\) and \(L^{\prime}\) which is the identity on \(K\).

The \(2p\)-variate \(\sigma^{*}\)-dimension polynomial introduced by Theorem 4.1 allows one to establish that two systems of partial algebraic \(\sigma\)-equations are not equivalent even in the case when the corresponding \(\sigma^{*}\)-field extensions have the same univariate \(\sigma^{*}\)-dimension polynomials. As an example, consider the difference equations

\[\alpha_{1}^{a}\eta+\alpha_{1}^{-a}\eta+\alpha_{2}^{b}\eta+\alpha_{3}^{c}\eta=0 \tag{15}\]

and

\[\alpha_{1}^{a}\eta+\alpha_{1}^{-a}\eta+\alpha_{2}^{d}\eta+\alpha_{3}^{e}\eta=0 \tag{16}\]

where \(a,b,c,d,e\in\mathbb{N}\), \(a>b>c>0\) and \(a>d>e>0\). The invariants carried by the univariate \(\sigma^{*}\)-dimension polynomials associated with these equations (the equation (15) is considered in the last Example) are the same: each polynomial has degree \(2\) and leading coefficient \(4a\), so it carries only the parameter \(a\). At the same time, the \(6\)-variate dimension polynomials for these equations carry the invariants \(a,b,c\) and \(a,d,e\), respectively (these \(6\)-variate dimension polynomials are of the form (14)). Thus, if \((b,c)\neq(d,e)\), the difference equations (15) and (16) are not equivalent, even though the corresponding \(\sigma^{*}\)-field extensions have the same invariants carried by the univariate \(\sigma^{*}\)-dimension polynomials.

## 5 Acknowledgments

This research was supported by the NSF grant CCF-2139462.
2304.01218
POLAR-Express: Efficient and Precise Formal Reachability Analysis of Neural-Network Controlled Systems
Neural networks (NNs) playing the role of controllers have demonstrated impressive empirical performances on challenging control problems. However, the potential adoption of NN controllers in real-life applications also gives rise to a growing concern over the safety of these neural-network controlled systems (NNCSs), especially when used in safety-critical applications. In this work, we present POLAR-Express, an efficient and precise formal reachability analysis tool for verifying the safety of NNCSs. POLAR-Express uses Taylor model arithmetic to propagate Taylor models (TMs) across a neural network layer-by-layer to compute an overapproximation of the neural-network function. It can be applied to analyze any feed-forward neural network with continuous activation functions. We also present a novel approach to propagate TMs more efficiently and precisely across ReLU activation functions. In addition, POLAR-Express provides parallel computation support for the layer-by-layer propagation of TMs, thus significantly improving the efficiency and scalability over its earlier prototype POLAR. Across the comparison with six other state-of-the-art tools on a diverse set of benchmarks, POLAR-Express achieves the best verification efficiency and tightness in the reachable set analysis.
Yixuan Wang, Weichao Zhou, Jiameng Fan, Zhilu Wang, Jiajun Li, Xin Chen, Chao Huang, Wenchao Li, Qi Zhu
2023-03-31T06:51:36Z
http://arxiv.org/abs/2304.01218v3
# POLAR-Express: Efficient and Precise Formal Reachability Analysis of Neural-Network Controlled Systems

###### Abstract

Neural networks (NNs) playing the role of controllers have demonstrated impressive empirical performances on challenging control problems. However, the potential adoption of NN controllers in real-life applications also gives rise to a growing concern over the safety of these neural-network controlled systems (NNCSs), especially when used in safety-critical applications. In this work, we present POLAR-Express, an efficient and precise formal reachability analysis tool for verifying the safety of NNCSs. POLAR-Express uses Taylor model arithmetic to propagate Taylor models (TMs) across a neural network layer-by-layer to compute an overapproximation of the neural-network function. It can be applied to analyze any feed-forward neural network with continuous activation functions. We also present a novel approach to propagate TMs more efficiently and precisely across ReLU activation functions. In addition, POLAR-Express provides parallel computation support for the layer-by-layer propagation of TMs, thus significantly improving the efficiency and scalability over its earlier prototype POLAR. Across the comparison with six other state-of-the-art tools on a diverse set of benchmarks, POLAR-Express achieves the best verification efficiency and tightness in the reachable set analysis.

Neural-Network Controlled Systems; Reachability Analysis; Safety Verification; Formal Methods

## I Introduction

Neural networks have been successfully used as the central decision-makers in a variety of tasks such as autonomous vehicles [1, 2], aircraft collision avoidance systems [3], robotics [4], HVAC control [5, 6], and other autonomous cyber-physical systems (CPS) [7]. Neural-network controllers can be trained using machine learning techniques such as reinforcement learning [8, 9] and imitation learning from human demonstrations [10, 11] or trajectories generated by model-predictive controllers [12]. However, the use of neural-network controllers also gives rise to new challenges in verifying the safety of these systems due to the nonlinear and highly parameterized nature of neural networks and their closed-loop formations with dynamical systems [13, 14, 15, 16]. In this work, we consider the following reachability verification problem of neural-network controlled systems (NNCSs).

**Definition 1** (Reachability Problem of NNCSs).: _The reachability problem of an NNCS is to verify whether a given state is reachable from an initial state of the system, whereas the bounded-time version of this problem is to verify whether a given state is reachable within a given bounded time horizon._

Uncertainties around the initial state, such as those inherent in state measurement or localization systems, or scenarios where the system can start from anywhere in an initial space, require the consideration of an _initial state set_ as opposed to a single initial state in the reachability problem. It is worth noting that simulation-based testing [17], which samples initial states from the initial state set, cannot provide formal safety guarantees such as "no system trajectory from the initial state set will lead to an obstacle collision." In this paper, we consider reachability analysis as the class of techniques that aim at tightly over-approximating the set of all reachable states of the system starting from an initial state set. We illustrate an example of this reachability analysis of a closed-loop system in Figure 1.
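To make the limitation of simulation-based testing concrete, the following sketch (ours, not from the paper; the two-dimensional dynamics, the linear stand-in controller, and all boxes are illustrative placeholders) samples initial states from \(X_{0}\) and checks a reach-avoid property along each simulated trajectory. Every run only certifies the single initial state it started from, which is exactly why sampling cannot prove the property for the whole set \(X_{0}\).

```python
import random

def controller(x):                       # stand-in for a trained NN controller
    return -0.5 * x[0] - 0.8 * x[1]

def step(x, dt=0.05):                    # one Euler step of x1' = x2, x2' = u
    return (x[0] + dt * x[1], x[1] + dt * controller(x))

def in_box(x, box):
    return all(lo <= v <= hi for v, (lo, hi) in zip(x, box))

X0     = [(0.8, 1.0), (0.4, 0.6)]        # initial state set (a box)
AVOID  = [(-0.2, 0.0), (-1.2, -0.8)]     # avoid set X_A
TARGET = [(-0.1, 0.1), (-0.1, 0.1)]      # target set X_f

reached, violated = 0, 0
for _ in range(1000):                    # each sampled run certifies only itself
    x = tuple(random.uniform(lo, hi) for lo, hi in X0)
    for _ in range(400):
        if in_box(x, AVOID):
            violated += 1
            break
        if in_box(x, TARGET):
            reached += 1
            break
        x = step(x)
print(f"{reached}/1000 runs reached X_f, {violated} hit X_A "
      "(no guarantee for the unsampled states of X_0)")
```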
Reachability analysis of general NNCSs is notoriously difficult due to nonlinearities that exist in both the neural-network controller and the plant. The closed-loop coupling of the neural-network controller with the plant adds another layer of complexity. To obtain a tight overapproximation of the reachable sets, reachability analysis needs to _track state dependencies across the closed-loop system and across multiple time steps_. While this problem has been well studied in traditional closed-loop systems without neural-network controllers [18, 19, 20, 21, 22, 23], it is less clear whether it is important to track the state dependency in NNCSs and how to track the dependency efficiently given the complexity of neural networks. This paper aims to bring clarity to these questions by comparing different approaches for solving the NNCS reachability problem.

Existing reachability analysis techniques for NNCSs typically use reachability analysis methods for dynamical systems as subroutines. For general nonlinear dynamical systems, the problem of _exact reachability is undecidable_ [24]. Thus, methods for reachability analysis of nonlinear dynamical systems aim at computing a _tight over-approximation of the reachable sets_ [25, 26, 27, 28, 29, 20, 22, 21]. On the other hand, there is also rich literature on verifying neural networks. Most of these verification techniques boil down to the problem of estimating or over-approximating the _output ranges_ of the network [30, 31, 32, 33, 34, 35]. The existence of these two bodies of work gives rise to a straightforward combination of NN output range analysis with reachability analysis of dynamical systems for solving the NNCS reachability problem. However, early works have shown that this naive combination with a non-symbolic interval-arithmetic-based [36] output range analysis suffers from large over-approximation errors when computing the reachable sets of the overall system [14, 13]. The primary reason for its poor performance is the lack of consideration of the interactions between the neural-network controller and the plant dynamics. Recent advances in the field of NN verification feature more sophisticated techniques that can yield tighter output range bounds and track the input-output dependency of an NN via symbolic bound propagation [33, 37, 35]. This opens up the possibility of improvement for the aforementioned combination strategy by substituting the non-symbolic interval-arithmetic-based technique with these new symbolic bound estimation techniques.

New techniques have also been developed to directly address the verification challenge of NNCSs. Early works mainly use _direct end-to-end over-approximation_ [14, 13, 38] of the neural-network function, i.e., computing a function approximation of the neural network with guaranteed error bounds. While this approach can better capture the input-output dependency of a neural network compared to output ranges, it suffers from efficiency problems due to the need to sample from the input space. As a result, this type of technique cannot handle systems with more than a few dimensions. This approach is superseded by more recent techniques that leverage _layer-by-layer propagation_ in the neural network [15, 39, 40, 41]. Layer-by-layer propagation techniques have the advantage of being able to exploit the structure of the neural network.
They are primarily based on propagating _Taylor models_ (TMs) layer by layer via Taylor model arithmetic to more efficiently obtain a function over-approximation of the neural network.

**Scope and Contributions.** We present POLAR-Express, a significantly enhanced version of our earlier prototype tool POLAR [41]. Inheriting its core approach from POLAR [41], POLAR-Express uses layer-by-layer propagation of TMs to compute function over-approximations of neural-network controllers. Our technique is applicable to general feed-forward neural networks with continuous (but not necessarily differentiable) activation functions. Compared with POLAR, POLAR-Express has the following new features.

* A more efficient and precise method for propagating TMs across non-differentiable ReLU activation functions.
* Multi-threading support to parallelize the computation in the layer-by-layer propagation of Taylor models for neural-network controllers, which significantly improves the efficiency and scalability of our approach for complex systems.
* Comprehensive experimental evaluation that includes new comparisons with recent state-of-the-art tools such as RINO [42], CORA [43], and Juliareach [44].

Across a diverse set of benchmarks and tools, POLAR-Express achieves state-of-the-art verification efficiency and tightness of over-approximation in the reachable set analysis, outperforming all existing tools. More specifically, compared with the existing literature [35, 16, 43, 44, 42, 41, 39], we provide the most comprehensive experimental evaluation across a wide variety of NNCS benchmarks, including neural-network controllers with different activation functions and dynamical systems with up to 12 states. In terms of the approach to over-approximating a neural-network controller, existing tools can be categorized into two classes. The first class shares the common idea of integrating NN output range analysis techniques with reachability analysis tools for dynamical systems, such as \(\alpha,\beta\)-CROWN [35], NNV [16], CORA [43], JuliaReach [44], and RINO [42]. The second class focuses on passing symbolic dependencies across the NNCS and across multiple control steps during reachability analysis, such as POLAR-Express and Verisig 2.0 [39]. Through the comparisons, we hope this paper can also serve as an accessible introduction to these analysis techniques for those who wish to apply them to verify NNCSs in their own applications and for those who wish to dive more deeply into the theory of NNCS verification.

## II Background

We first introduce the technical preliminaries and review existing techniques for the safety verification of NNCSs.

Fig. 1: An illustrative example of reachability (for a reach-avoid specification). The system shown in the figure starts from a state in the initial state set \(X_{0}\). Each red dot is the system state at the end of a control time-step, and the red curve is the system trajectory, where each solid part indicates the system trajectory between two consecutive red-dot states. In this example, the system safely reaches the target set \(X_{f}\) without hitting the avoid set \(X_{A}\). The reachability verification problem is to check whether this is the case for all initial states in \(X_{0}\) (for some given upper bound on the number of control time-steps).

An NNCS is often defined by an ODE whose control input is computed by a feed-forward neural network at discrete times.
Although it is undecidable whether a given state is reachable for an NNCS from a given initial state, we can still compute an over-approximated set of reachable states. The safety of an NNCS can be proven by showing that the reachable set of this NNCS does not contain any unsafe state. Although more general safety or robustness of an NNCS can be proved by computing an invariant for the system's reachable states [45, 46], it is still hard to handle a large number of system variables and general nonlinear dynamics in this way. Hence, most of the existing methods for reachability analysis use the _set propagation_ scheme [47]. That is, an over-approximation of the reachable set in a bounded time horizon can be obtained by iteratively computing a superset of the reachable set in a time step and propagating it to the set computation for the next step. More precisely, starting from a given initial set \(X_{0}\), a set propagation method computes an over-approximation of the reachable set \(\{\Phi(\mathbf{x}_{0},t)\,|\,\mathbf{x}_{0}\in X_{0},\,t\in[0,\delta]\}\), where \(\delta\) is the time step and \(\Phi\) denotes the system's evolution function (flowmap), which is often unknown. It then repeats the above work from the obtained reachable set over-approximation at the end of the previous step and computes a new over-approximation for the current one. The over-approximation segments are called _flowpipes_. Such a scheme has been proven effective in handling various system dynamics and efficient in handling large numbers of state variables [14, 16, 40, 44, 41, 42].

Algorithm 1 shows the main framework of set propagation for NNCSs. Starting with a given initial set \(X_{0}\), the main algorithm repeatedly performs the following two main steps to compute the flowpipes in the \((i+1)\)-th control step for \(i=0,1,\ldots,K-1\): (a) _Computing the range \(\mathcal{U}_{i}\) of the control input_. This task is to compute the output range of the NN controller w.r.t. the current system state. Since the current system state is a subset of the latest flowpipe, \(\mathcal{U}_{i}\) is computed as an over-approximate set. (b) _Flowpipe construction for the continuous dynamics_. According to the obtained range \(\mathcal{U}_{i}\) of the constant control inputs, the reachable set in the current control step can be obtained using a flowpipe computation method for ODEs.

```
Input: Definition of the system modules, number of control steps \(K\), the initial state set \(X_{0}\).
Output: Over-approximation of the reachable set in \(K\) control steps.
1: \(\mathcal{R}\leftarrow\emptyset\);   # the resulting over-approximation set
2: \(\mathcal{X}_{0}\gets X_{0}\);
3: for \(i=0\) to \(K-1\) do
4:   Computing an over-approximation \(\mathcal{U}_{i}\) for the NN output w.r.t. the input set \(\mathcal{X}_{i}\);
5:   Computing a set \(\mathcal{F}\) of flowpipes for the plant dynamics from the initial set \(\mathcal{X}_{i}\) in a control step;
6:   \(\mathcal{R}\leftarrow\mathcal{R}\cup\mathcal{F}\);
7:   Evaluating an over-approximation for the reachable set at the end of the control step and assigning it to \(\mathcal{X}_{i+1}\);
8: end for
9: return \(\mathcal{R}\).
```

**Algorithm 1** Reachable set computation for NNCSs based on set-propagation.
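To make the control flow of Algorithm 1 concrete, the sketch below instantiates it with the crudest possible primitives: boxes as the set representation, naive interval bound propagation through a ReLU network standing in for line 4, and an interval forward-Euler update standing in for lines 5-7. This is our illustrative sketch, not POLAR-Express itself; `f_bounds` is an assumed, user-supplied interval extension of the plant dynamics, and the Euler update ignores the truncation error, so it is not a sound enclosure.

```python
import numpy as np

def nn_output_range(layers, lo, hi):
    """Naive interval bound propagation through a ReLU network (line 4)."""
    for k, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if k < len(layers) - 1:          # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

def euler_box(f_bounds, x_lo, x_hi, u_lo, u_hi, delta, n_sub=20):
    """Crude box propagation of the plant over one control step (lines 5-7).
    Plain forward Euler on interval bounds; rigorous tools additionally
    enclose the truncation error, which is omitted here."""
    h = delta / n_sub
    for _ in range(n_sub):
        d_lo, d_hi = f_bounds(x_lo, x_hi, u_lo, u_hi)
        x_lo, x_hi = x_lo + h * d_lo, x_hi + h * d_hi
    return x_lo, x_hi

def reach_boxes(layers, f_bounds, x_lo, x_hi, delta, K):
    """The loop of Algorithm 1 with boxes as the set representation."""
    flowpipes = []
    for _ in range(K):
        u_lo, u_hi = nn_output_range(layers, x_lo, x_hi)                 # line 4
        x_lo, x_hi = euler_box(f_bounds, x_lo, x_hi, u_lo, u_hi, delta)  # lines 5-7
        flowpipes.append((x_lo, x_hi))
    return flowpipes
```

Because box bounds discard all dependencies between the state variables, the widths of such flowpipes typically blow up after a few control steps, which motivates the dependency-tracking representations discussed next.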
The existing methods can be mainly classified into the following two groups based on their over-approximation purposes.

**(I) Pure range over-approximations for reachable sets.** The techniques in this group aim at directly over-approximating the range of the reachable set using geometric or algebraic representations such as intervals [48], zonotopes [49], or other sets represented by constraints. Such an approach can often be developed by designing the over-approximation methods for the plant and controller individually and then using a higher-level algorithm to make the two methods work cooperatively for the closed-loop system. Many existing tools for computing the reachable set over-approximations under the continuous dynamics defined by linear or nonlinear ODEs can be used to handle the plant, such as VNODE-LP [19], SpaceEx [20], CORA [21], and Flow* [22]. On the other hand, the task of computing the output range of a neural network can be handled by the output range analysis techniques, most of which were developed in recent years [50, 51, 52, 53, 30, 31, 54, 55, 34, 33, 16, 37, 35]. The main advantage of the techniques in this group is twofold. Firstly, there is no need to develop a new technique from scratch, and the correctness of the composed approach can be proved easily based on the correctness of the existing methods for the subtasks. Secondly, the performance of the approach is often good on simple case studies since it can use well-engineered tools as primitives. However, since those methods focus purely on over-approximating ranges and only loosely track the dependencies among the state variables under the system dynamics, they may accumulate large over-approximation errors when the plant dynamics is nonlinear or the initial set is large, making the resulting bounds less useful in proving properties of interest.

**(II) Functional over-approximations for system evolution.** The reachable set over-approximation methods in this category focus on a more challenging task than only over-approximating the reachable set ranges. They seek to compute an over-approximate function for the flowmap \(\Phi\) of an NNCS. As we pointed out in the previous section, \(\Phi\) is a function only in the variables representing the initial state and the time, and it often does not have a closed-form expression. However, it can be over-approximated by a Taylor model (TM) over a bounded time interval. Figure 2 gives an illustration in which the TM \(p(\mathbf{x}_{0},t_{0}+\tau)+[a,b]\) is guaranteed to contain the range of the function \(\Phi(\mathbf{x}_{0},t_{0}+\tau)\) for any initial state \(\mathbf{x}_{0}\) and \(\tau\in[0,\delta]\). In practice, we usually require \(\mathbf{x}_{0}\) to be in a bounded set. Such a TM provides a functional over-approximation rather than a pure range over-approximation, which allows tracking the dependency from a reachable state to the initial state approximately. Functional over-approximations often can handle more challenging reachability analysis tasks, in which larger initial sets, nonlinear dynamics, or longer time horizons are specified. Recent work has applied interval, polynomial, and TM arithmetic to obtain over-approximations for NNCS evolution [14, 13, 56, 41]. Those techniques are often able to compute more accurate flowpipes than the methods in the other group. On the other hand, the functional over-approximation methods are often computationally expensive due to the computation of nonlinear multivariate polynomials for tracking the dependencies.

Fig. 2: Taylor model over-approximation of a flowmap function.
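As a minimal illustration of what a functional over-approximation looks like in code (our sketch, not the implementation of any of the tools compared in this paper), a univariate TM of fixed order over the domain \([-1,1]\) can be stored as a coefficient list plus a remainder interval; multiplication truncates the product and flushes both the degree overflow and the remainder cross terms into the new remainder. Outward floating-point rounding, which a fully sound implementation also needs, is omitted here.

```python
ORDER, DOM = 3, (-1.0, 1.0)              # fixed TM order k and domain D = [-1, 1]

def imul(a, b):                          # interval product
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return min(ps), max(ps)

def iadd(a, b):                          # interval sum
    return a[0] + b[0], a[1] + b[1]

def bound_poly(coeffs):                  # crude range bound of a polynomial on [-1, 1]
    r = sum(abs(c) for c in coeffs[1:])
    return coeffs[0] - r, coeffs[0] + r

def tm_add(tm1, tm2):
    (p1, i1), (p2, i2) = tm1, tm2
    return [a + b for a, b in zip(p1, p2)], iadd(i1, i2)

def tm_mul(tm1, tm2):
    (p1, i1), (p2, i2) = tm1, tm2
    prod = [0.0] * (2 * ORDER + 1)
    for i, a in enumerate(p1):
        for j, b in enumerate(p2):
            prod[i + j] += a * b
    keep, spill = prod[:ORDER + 1], prod[ORDER + 1:]
    s = sum(abs(c) for c in spill)       # degree > ORDER terms, bounded on [-1, 1]
    rem = iadd((-s, s), iadd(imul(bound_poly(p1), i2),
               iadd(imul(bound_poly(p2), i1), imul(i1, i2))))
    return keep, rem

# The identity function x as a TM, and its square:
x = ([0.0, 1.0, 0.0, 0.0], (0.0, 0.0))
print(tm_mul(x, x))                      # ([0.0, 0.0, 1.0, 0.0], (0.0, 0.0))
```

Because the polynomial part is carried symbolically, composing such TMs through a computation preserves how the output depends on the input; only the truncated part widens the remainder interval.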
**Existing tools.** We consider the following tools in the experimental evaluation: NNV [16], Verisig 2.0 [39], CORA [43], JuliaReach [44], and RINO [42]. Additionally, we combine \(\alpha,\beta\)-CROWN [35] with Flow* [22] to provide a baseline for the performance of pure range over-approximation. We summarize the key aspects of the tools in Table I. Basically, NNV, CORA, JuliaReach, and RINO compute range over-approximations, while Verisig 2.0 and POLAR-Express compute functional over-approximations.

**Taylor models.** Taylor models were originally proposed to compute higher-order over-approximations for the ranges of continuous functions (see [57]). They can be viewed as a higher-order extension of intervals. A _Taylor model (TM)_ is a pair \((p,I)\) wherein \(p\) is a polynomial of degree \(k\) over a finite group of variables \(x_{1},\ldots,x_{n}\) ranging in an interval domain \(D\subset\mathbb{R}^{n}\), and \(I\) is the remainder interval. Given a smooth function \(f(\mathbf{x})\) with \(\mathbf{x}\in D\) for some interval domain \(D\), its TM can be obtained as \((p(\mathbf{x}),I)\) such that \(p\) is the Taylor expansion of \(f\) at some \(\mathbf{x}_{0}\in D\), and \(I\) is an interval remainder such that \(\forall\mathbf{x}\in D.(f(\mathbf{x})\in p(\mathbf{x})+I)\), i.e., \(p+I\) is an over-approximation of \(f\) at any point in \(D\). When the order of \(p\) is sufficiently high, the main dependency of the mapping \(f\) can be captured in \(p\). Basically, the polynomial \(p\) can be any polynomial approximation of the function \(f\), and it is unnecessary to only use Taylor approximations.

When a function \(f(\mathbf{x})\) is over-approximated by a TM \((p(\mathbf{x}),I)\) w.r.t. a bounded domain \(D\), the approximation quality, i.e., the size of the overestimation, is directly reflected by the width of \(I\), since \(f(\mathbf{x})=p(\mathbf{x})\) for all \(\mathbf{x}\in D\) when \(I\) is zero by the TM definition. Given two order-\(k\) TMs \((p_{1}(\mathbf{x}),I_{1})\) and \((p_{2}(\mathbf{x}),I_{2})\) which are over-approximations of the same function \(f(\mathbf{x})\) w.r.t. a bounded domain \(D\subset\mathbb{R}^{n}\), we use \((p_{1}(\mathbf{x}),I_{1})\prec_{k}(p_{2}(\mathbf{x}),I_{2})\) to denote that the width of \(I_{1}\) is smaller than the width of \(I_{2}\) in all dimensions, i.e., \((p_{1}(\mathbf{x}),I_{1})\) is a more accurate over-approximation of \(f(\mathbf{x})\) than \((p_{2}(\mathbf{x}),I_{2})\).

TMs have been proven to be powerful over-approximate representations for the flowmap of nonlinear continuous and hybrid systems [58, 59, 60]. Although polynomial zonotopes [61] are also polynomial representations, they are not expressed in the same variables as the system flowmap functions and are therefore not functional over-approximations. Interval Taylor Series (ITS) are univariate polynomials in the time variable \(t\) where the coefficients are intervals. ITS are often used as nonlinear range over-approximations for ODE solutions [19].

## III Problem Formulation

We use the formal model presented in Figure 3 to describe the behavior of an NNCS. It is a composition of four modules, each of which models the evolution or input-output mapping of the corresponding component in an NNCS.
The top three modules form the controller of the system: it retrieves the sensor data \(\mathbf{y}\), computes the control input \(\mathbf{u}\), and applies it to the plant at discrete times \(t=0,\delta_{c},\ldots,k\delta_{c},\ldots\) for a control step size \(\delta_{c}>0\). The roles of the modules are described below.

\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline Tool & Category & Plant Dynamics & Activation Function & Set Representation \\ \hline \hline \(\alpha,\beta\)-CROWN + Flow* & (I) & nonlinear & continuous & Interval and Taylor model \\ \hline NNV & (I) & discrete linear, continuous (CORA) & ReLU, tanh, sigmoid & ImageStar \\ \hline JuliaReach & (I) & nonlinear & continuous & Zonotope + Taylor model \\ \hline CORA & (I) & nonlinear & continuous & Polynomial zonotope \\ \hline RINO & (I) & nonlinear & differentiable & Interval + interval Taylor series \\ \hline Verisig 2.0 & (II) & nonlinear & differentiable & Taylor model \\ \hline POLAR-Express & (II) & nonlinear & continuous & Taylor model \\ \hline \hline \end{tabular}
\end{table}
TABLE I: Summary of the tools evaluated in this paper.

Fig. 3: Formal model of NNCS.

**Plant.** This is a model of the physical process. We use an ODE over the \(n\) state variables \(\mathbf{x}\) and \(m\) control inputs \(\mathbf{u}\) to model the evolution of the physical process, such as the movement of a vehicle, the rotation of a DC motor, or the pitch angle change of an aircraft. In the paper, we collectively represent a set of ordered variables \(x_{1},\ldots,x_{n}\) by \(\mathbf{x}\). We only consider ODEs that are at least locally Lipschitz continuous, so that the solution w.r.t. an initial condition \(\mathbf{x}(0)=\mathbf{x}_{0}\in\mathbb{R}^{n}\) is unique [62].

**Preprocessing Module.** This module transforms the sampled data. It serves as the gate of the controller. At the time \(t=k\delta_{c}\), for every \(k=0,1,\ldots\), it retrieves the sensor data \(\mathbf{y}\), which can be viewed as the image under a mapping from the actual system state \(\mathbf{x}(t)\), and further transforms it to an appropriate format \(\mathbf{z}\) for the controller's NN component. A typical preprocessing task in a collision avoidance control could be computing the relative distances of the moving objects.

**Neural Network.** This is the core computation module for the control inputs. It maps the input data \(\mathbf{z}\) to the output value according to the layer-by-layer propagation rule defined in it. In the paper, we only consider _feed-forward neural networks_. Since the paper focuses on the formal verification of NNCSs, the neural network is explicitly defined as a part of the NNCS.

**Postprocessing Module.** This module transforms the NN output value to the control input. Typically, it is used to keep the final control input in the actual actuating range or to filter out inappropriate input values. We assume that the preprocessing and postprocessing modules can only be defined by a conjunction of _guarded transitions_, each of which is in the form of

\[\gamma(\mathbf{x})\ \rightarrow\ \mathbf{x}^{\prime}=\Pi(\mathbf{x}) \tag{1}\]

such that the guard \(\gamma(\mathbf{x})\) is a conjunction of inequalities in \(\mathbf{x}\), and \(\Pi\) is a transformation from \(\mathbf{x}\). Here, we assume that all guards are disjoint, and allow (1) to have polynomial arithmetic and the elementary functions \(\sin(\cdot)\), \(\cos(\cdot)\), \(\exp(\cdot)\), \(\log(\cdot)\), \(\sqrt{\cdot}\).
**Example 1** (2D Spacecraft Docking).: _We consider the docking of a spacecraft in a 2D plane. The benchmark is described in [63]. As shown in Figure 4, the control goal is to steer the spacecraft to the position at the origin while the velocity is kept in a safe range. The whole benchmark can be modeled by an NNCS with 5 variables: \(\mathbf{x}=(x,y,v_{x},v_{y},v_{\text{safe}})^{T}\), wherein \((x,y)\) denotes the position of the spacecraft, \((v_{x},v_{y})\) denotes the velocity, and \(v_{\text{safe}}=0.2+0.002054\sqrt{x^{2}+y^{2}}\) is a particular variable that indicates a position-dependent safe limit on the speed. The dynamics are defined in Eq. (2), where \(f_{x}\) and \(f_{y}\) constitute the control input \(\mathbf{u}=(f_{x},f_{y})\) which is obtained by a neural network controller \(\kappa\). The input \(\mathbf{z}=(\frac{x}{1000},\frac{y}{1000},2v_{x},2v_{y},\sqrt{v_{x}^{2}+v_{y}^{2}},v_{safe})\) of the neural network is preprocessed from \(\mathbf{x}\). The output \(\mathbf{v}=(u_{x},u_{y})\) of the neural network is postprocessed to \(f_{x}=\tanh(u_{x}),f_{y}=\tanh(u_{y})\)._ \[\begin{split}\dot{x}=v_{x},\quad\dot{y}=v_{y},\quad\dot{v}_{safe}=\frac{0.002054(x\cdot v_{x}+y\cdot v_{y})}{v_{safe}}\\ \dot{v}_{x}=0.002054v_{y}+3\times(0.001027)^{2}x+\frac{f_{x}}{12},\\ \dot{v}_{y}=-0.002054v_{x}+\frac{f_{y}}{12},\end{split} \tag{2}\] **Executions of NNCSs.** Starting from an initial state \(\mathbf{x}_{0}\in\mathbb{R}^{n}\), for all \(i=0,1,\dots\), the system state \(\mathbf{x}(t)\) in the \((i+1)\)-st control step \(t\in[i\delta_{c},(i+1)\delta_{c}]\) is defined by the solution of the ODE \(\dot{\mathbf{x}}=f(\mathbf{x},\mathbf{u}_{i})\) w.r.t. the initial state \(\mathbf{x}(i\delta_{c})\) and the control input \(\mathbf{u}_{i}\), which is obtained as \(\mathbf{u}_{i}=h\circ\kappa\circ g\circ\pi(\mathbf{x}(i\delta_{c}))\), where \(h,\kappa,g,\pi\) are shown in Fig. 3. If we denote the solution of the ODE w.r.t. an initial state \(\mathbf{x}_{0}\) and a particular control input \(\mathbf{u}^{\prime}\) by \(\mathbf{x}(t)=\Phi_{f}(\mathbf{x}_{0},t,\mathbf{u}^{\prime})\), the system state at a time \(t\in[i\delta_{c},(i+1)\delta_{c}]\) for any \(i\geq 0\) from the initial state \(\mathbf{x}_{0}\) can be expressed recursively by \[\begin{split}&\mathbf{x}(t)=\Phi_{f}(\mathbf{x}(i\delta_{c}),t-i\delta_{c},\mathbf{u}_{i})\\ &\text{where }\mathbf{u}_{i}=h\circ\kappa\circ g\circ\pi(\mathbf{x}(i\delta_{c}))\end{split} \tag{3}\] such that \(\mathbf{x}(0)=\mathbf{x}_{0}\). We also call this state a _reachable state_. Without noises or disturbances from the environment, an NNCS has deterministic behavior, and its evolution can be defined by a _flowmap_ function of the form \(\Phi(\mathbf{x}_{0},t)\): the reachable state from an initial state \(\mathbf{x}_{0}\) at a time \(t\) is \(\mathbf{x}(t)=\Phi(\mathbf{x}_{0},t)\), and it is _uniquely determined by the initial state and the time_. Unfortunately, \(\Phi\) usually does not have a closed-form expression. The reachability analysis task with respect to two given system states \(\mathbf{x},\mathbf{x}^{\prime}\) and an NNCS asks whether \(\mathbf{x}^{\prime}\) is reachable from \(\mathbf{x}\) under the evolution of the system. Reachability analysis plays a key role in the safety verification of dynamical systems.
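A single execution per Eq. (3) can be simulated directly, which is how the simulated traces shown later are obtained. The following is a minimal sketch with an assumed toy plant and controller (POLAR-Express itself computes reachable sets over all initial states, not single traces).

```python
# A minimal closed-loop simulation sketch of Eq. (3): the control input u_i is
# recomputed at each control step and held constant while the plant evolves.
def simulate(f, controller, x0, delta_c, n_steps, n_sub=100):
    """Forward-Euler rollout of x' = f(x, u_i) over n_steps control steps."""
    x = list(x0)
    trace = [tuple(x)]
    dt = delta_c / n_sub
    for _ in range(n_steps):
        u = controller(x)                      # u_i = h(kappa(g(pi(x(i*delta_c)))))
        for _ in range(n_sub):                 # integrate within the control step
            dx = f(x, u)
            x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        trace.append(tuple(x))
    return trace

# Toy example: damped scalar plant with a proportional "controller".
f = lambda x, u: [-x[0] + u[0]]
controller = lambda x: [-0.5 * x[0]]
print(simulate(f, controller, [1.0], delta_c=0.1, n_steps=5)[-1])
```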
Reachability analysis is, however, a notoriously difficult task, due to the undecidability of the reachability problem even for systems defined by nonlinear difference equations [64]. In order to prove the safety of a system, most reachability techniques seek to compute an over-approximation of the reachable set. If no unsafe state is contained in the over-approximated reachable set, then the system is guaranteed to be safe.

Fig. 4: The 2D spacecraft docking environment. (a) The goal is to start from some initial position and move towards the origin. (b) The safe speed limit \(v_{safe}\) decreases as the position of the spacecraft approaches the origin.

```
Input: Plant dynamics \(\dot{\mathbf{x}}=f(\mathbf{x},\mathbf{u})\), preprocessing \(g(\cdot)\), postprocessing \(h(\cdot)\), NN controller \(\kappa(\cdot)\), number of control steps \(K\), initial set \(X_{0}\).
Output: Over-approximation of the reachable set over the time interval \([0,K\delta_{c}]\), where \(\delta_{c}\) is the control step size.
1: \(\mathcal{R}\leftarrow\emptyset\);
2: \(\mathcal{X}_{0}\gets X_{0}\);
3: for \(i=0\) to \(K-1\) do
4:   Computing a superset \(\mathcal{Y}_{i}\) for the range of \(\pi(\mathcal{X}_{i})\);
5:   Computing a superset \(\mathcal{Z}_{i}\) for the range of \(g(\mathcal{Y}_{i})\);
6:   Computing a superset \(\mathcal{V}_{i}\) for the range of \(\kappa(\mathcal{Z}_{i})\), with multi-threading support;
7:   Computing a superset \(\mathcal{U}_{i}\) for the range of \(h(\mathcal{V}_{i})\);
8:   Computing a set \(\mathcal{F}\) of flowpipes for the continuous dynamics \(\dot{\mathbf{x}}=f(\mathbf{x},\mathbf{u})\) with \(\mathbf{u}\in\mathcal{U}_{i}\) from the initial set \(\mathbf{x}(0)\in\mathcal{X}_{i}\) over the time interval \([i\delta_{c},(i+1)\delta_{c}]\);
9:   \(\mathcal{R}\leftarrow\mathcal{R}\cup\mathcal{F}\);
10:  Evaluating an over-approximation for the reachable set at the time \(t=(i+1)\delta_{c}\) based on \(\mathcal{F}\) and assigning it to \(\mathcal{X}_{i+1}\);
11: end for
12: return \(\mathcal{R}\).
```
**Algorithm 2** Framework of POLAR-Express.

## IV Framework of POLAR-Express

We present the POLAR-Express framework in Algorithm 2 to compute flowpipes for NNCSs. It uses the standard set-propagation framework of Algorithm 1 but has the following novel elements: _(1) Polynomial over-approximation for activation functions using Bézier curves. (2) Symbolic representation of TM remainders in layer-by-layer propagation. (3) A seamless integration of the above techniques to compute accurate flowpipes for NNCSs. (4) A more precise and efficient method for propagating TMs across the ReLU activation function. (5) Multi-threading support to parallelize the computation in the layer-by-layer propagation of TMs for the NN._ The details are explained below.

**Computing \(\mathcal{Y}_{i}\) and \(\mathcal{U}_{i}\).** Given a preprocessing or postprocessing module and its input set represented as a TM, an output TM can be obtained by computing the reachable sets of the guarded transitions. Given a guarded transition of the form (1) along with a TM \(S\) for the range of \(\mathbf{x}\), the reachable set \(S^{\prime}\), i.e., the range of \(\mathbf{x}^{\prime}\), can be computed by first computing the intersection \(S_{I}=S\cap\{\mathbf{x}\,|\,\gamma(\mathbf{x})\}\) and then evaluating \(\Pi(S_{I})\) using TM arithmetic [65]. Although TMs are not closed under intersection with a semi-algebraic set, we may use the domain contraction method proposed in [58] to derive an over-approximate TM for the intersection.
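To fix ideas before the TM details, the following is a deliberately crude, runnable analogue of the loop in Algorithm 2 for an assumed toy one-dimensional system: intervals stand in for flowpipes, and every module's range is over-approximated with interval endpoint evaluation rather than Taylor models.

```python
# A crude analogue of Algorithm 2's loop (toy 1-D system, illustrative only).
import math

def controller_range(x_lo, x_hi):
    """Superset of h(kappa(g(pi(X)))) for the toy controller u = tanh(-0.5*x);
    the map is monotone, so endpoint evaluation is exact."""
    u1, u2 = math.tanh(-0.5 * x_hi), math.tanh(-0.5 * x_lo)
    return min(u1, u2), max(u1, u2)

def reach(x_lo, x_hi, K, delta_c=0.1):
    """Interval 'flowpipes' for x' = -x + u over K control steps, using the
    exact solution x(t) = x0*e^{-t} + u*(1 - e^{-t}) of the linear ODE."""
    R = []
    for _ in range(K):
        u_lo, u_hi = controller_range(x_lo, x_hi)
        e = math.exp(-delta_c)
        # the solution is monotone in both x0 and u, so endpoints enclose it
        x_lo, x_hi = x_lo * e + u_lo * (1 - e), x_hi * e + u_hi * (1 - e)
        R.append((x_lo, x_hi))
    return R

print(reach(0.9, 1.0, K=5))
```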
### _Layer-by-Layer Propagation using TMs_

POLAR-Express uses the layer-by-layer propagation scheme to compute a TM output for the NN, and features the following key novelties: (a) a method to selectively compute Taylor or Bernstein polynomials for activation functions, whose purpose is to _derive a smaller error according to the approximated function and its domain_; the Bernstein polynomials are represented in their _Bézier forms_; (b) a technique to symbolically represent the intermediate linear transformations of TM interval remainders during the layer-by-layer propagation; the purpose of using Symbolic Remainders **(SR)** is to _reduce the accumulation of overestimation produced in composing a sequence of TMs_. The approach is described as follows.

**Layer-by-Layer Propagation with Multi-Threading Support.** The framework of layer-by-layer propagation has been widely used to compute NN output ranges. Most of the existing methods use range over-approximations such as intervals with constant bounds [50, 66], linear polynomial bounds [33], zonotopes [55, 34], and star sets [16]. Some TM-based methods have also been proposed [14, 13, 40] to obtain functional over-approximations for the input-output mapping of an NN. However, the use of functional over-approximations in the reachability analysis of an NNCS as a whole has not been well investigated. Hence, we propose the following approach to better over-approximate the input-output dependency than the existing state-of-the-art.

```
Input: Input TM \((p_{1}(\mathbf{x}_{0}),I_{1})\) with \(\mathbf{x}_{0}\in X_{0}\), the \(M+1\) matrices \(W_{1},\ldots,W_{M+1}\) of the weights on the incoming edges of the hidden and the output layers, the \(M+1\) vectors \(B_{1},\ldots,B_{M+1}\) of the neurons' biases in the hidden and the output layers.
Output: A TM \((p_{r}(\mathbf{x}_{0}),I_{r})\) that contains the set \(\kappa((p_{1}(\mathbf{x}_{0}),I_{1}))\).
1: \((p_{r},I_{r})\leftarrow(p_{1},I_{1})\);
2: for \(i=1\) to \(M+1\) do
3:   \((p_{t},I_{t})\gets W_{i}\cdot(p_{r},I_{r})+B_{i}\);
4:   Computing a polynomial approximation \(p_{\sigma,i}\) for the vector of the current layer's activation functions \(\sigma\) w.r.t. the domain \((p_{t},I_{t})\);
5:   Evaluating a conservative remainder \(I_{\sigma,i}\) for \(p_{\sigma,i}\) w.r.t. the domain \((p_{t},I_{t})\);
6:   \((p_{r},I_{r})\gets p_{\sigma,i}(p_{t}+I_{t})+I_{\sigma,i}\);
7: end for
8: return \((p_{r},I_{r})\).
```
**Algorithm 3** Layer-by-layer propagation using polynomial arithmetic and TMs

Algorithm 3 presents the main framework of our approach without using SR, and focuses on the novelty coming from the tighter TM over-approximation for the activation functions (Lines 4 and 5). Before introducing our selective over-approximation method, we describe how a TM output is computed from a given TM input for a single layer. The idea is illustrated in Fig. 5. The circles in the right column denote the neurons in the current layer, which is the \((i+1)\)-th layer, and those in the left column denote the neurons in the previous layer. The weights on the incoming edges to the current layer are organized as a matrix \(W_{i}\), while we use \(B_{i}\) to denote the vector organization of the biases in the current layer. Assume the output range of the neurons in the previous layer is represented as a TM (vector) \((p_{i}(\mathbf{x}_{0}),I_{i})\), where \(\mathbf{x}_{0}\) denotes the variables ranging in the NNCS initial set. Then the output TM \((p_{i+1}(\mathbf{x}_{0}),I_{i+1})\) of the current layer can be obtained as follows.
First, we compute the polynomial approximations \(p_{\sigma_{1},i},\ldots,p_{\sigma_{l},i}\) for the activation functions \(\sigma_{1},\ldots,\sigma_{l}\) of the neurons in the current layer. Second, interval remainders \(I_{\sigma_{1},i},\ldots,I_{\sigma_{l},i}\) are evaluated for those polynomials to ensure that for each \(j=1,\ldots,l\), \((p_{\sigma_{j},i},I_{\sigma_{j},i})\) is a TM of the activation function \(\sigma_{j}\) w.r.t. \(z_{j}\) ranging in the \(j\)-th dimension of the set \(W_{i}(p_{i}(\mathbf{x}_{0})+I_{i})\). Third, \((p_{i+1}(\mathbf{x}_{0}),I_{i+1})\) is computed as the TM composition \(p_{\sigma,i}(W_{i}(p_{i}(\mathbf{x}_{0})+I_{i}))+I_{\sigma,i}\), where \(p_{\sigma,i}(\mathbf{z})=(p_{\sigma_{1},i}(z_{1}),\ldots,p_{\sigma_{l},i}(z_{l}))^{T}\) and \(I_{\sigma,i}=(I_{\sigma_{1},i},\ldots,I_{\sigma_{l},i})^{T}\). Hence, when there are multiple layers, starting from the first layer, the output TM of a layer is treated as the input TM of the next layer, and the final output TM is computed by composing TMs layer-by-layer. Besides, we use \((p_{j,i},I_{j,i})\) for \(j=1,\ldots,l\) to represent the TMs associated with the \(l\) neurons in a linear layer. The computation of those TMs can be conducted in parallel, and so can the propagation through the activation functions in a layer. POLAR-Express realizes such parallelism via multi-threading to retain time efficiency when the dimension of the NN layers is large.

**Polynomial Approximations to TMs.** Note that a TM only defines an over-approximate mapping and is independent of the approximation method used for the polynomial part. Thus, we consider using both Taylor and Bernstein approximations when propagating through an activation function, and choose the one that produces less overestimation after the TM composition. The following example shows that the selection cannot be determined based on the approximation error alone. Consider the TMs \((p_{1},I_{1})\) and \((p_{2},I_{2})\), which are both TM over-approximations of the sigmoid function \(f(x)=\frac{1}{1+e^{-x}}\) w.r.t. a TM domain \(x\in q(y)+J\):

\[(p_{1},I_{1}) =(0.5+0.25x-0.02083x^{3},\text{[-7.93e-5, 1.92e-4]})\]
\[(p_{2},I_{2}) =(0.5+0.24855x-0.004583x^{3},\text{[-2.42e-4, 2.42e-4]})\]
\[(q,J) =(0.1y-0.1y^{2},\text{[-0.1,0.1]}),\]

where \(y\in[-1,1]\). Although \(I_{1}\) is narrower, the composition \(p_{1}(q(y)+J)+I_{1}\) produces a TM with the remainder \([-0.0466,0.0477]\), while the remainder produced by \(p_{2}(q(y)+J)+I_{2}\) is \([-0.0253,0.0253]\), which is smaller. In other words, a smaller polynomial approximation error does not always lead to a smaller error after composition; this motivates us to select the approximation after the composition. We generalize this phenomenon by defining the _accuracy preservation problem_; evidently, the answer is no when TM arithmetic is used.

**Definition 2** (Accuracy preservation problem).: _Let \((p_{1}(\mathbf{x}),I_{1})\) and \((p_{2}(\mathbf{x}),I_{2})\) both be over-approximations of \(f(\mathbf{x})\) with \(\mathbf{x}\in D\), such that \((p_{1}(\mathbf{x}),I_{1})\prec_{k}(p_{2}(\mathbf{x}),I_{2})\), and let \(g(\mathbf{y})\) be another function that is already over-approximated by a TM \((q(\mathbf{y}),J)\) whose range is contained in \(D\). Does \(p_{1}(q(\mathbf{y})+J)+I_{1}\prec_{k}p_{2}(q(\mathbf{y})+J)+I_{2}\) still hold using order-\(k\) TM arithmetic?_
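The sketch below is a simplified, order-1 instance of the single-layer propagation steps above: the input TM is affine in \(x_0\in[-1,1]\), the activation is tanh approximated by its secant line, and the remainder is estimated by sampling plus a Lipschitz padding term. All names and numbers are illustrative assumptions; the actual tool composes higher-order polynomials.

```python
# Simplified order-1 single-layer TM propagation (illustrative only).
import math

def affine_layer_tm(c, a, I, w, b, n_samples=1000):
    """Propagate the 1-D TM (c + a*x0, I), x0 in [-1,1], through z = w*(p+I) + b
    followed by tanh(z); returns an output TM (c' + a'*x0, I')."""
    c1, a1 = w * c + b, w * a                 # pre-activation polynomial part
    I1 = sorted((w * I[0], w * I[1]))         # remainder scaled by w
    zlo = c1 - abs(a1) + I1[0]                # interval bound of the pre-activation
    zhi = c1 + abs(a1) + I1[1]
    # secant-line approximation of tanh on [zlo, zhi]; its slope is positive
    slope = (math.tanh(zhi) - math.tanh(zlo)) / (zhi - zlo)
    intercept = math.tanh(zlo) - slope * zlo
    # sampled remainder with Lipschitz padding (L = 1 for tanh)
    step = (zhi - zlo) / n_samples
    err = max(abs(math.tanh(z) - (slope * z + intercept))
              for z in (zlo + (s + 0.5) * step for s in range(n_samples))) + step
    # compose: tanh(z) is contained in slope*z + intercept + [-err, err]
    c2, a2 = slope * c1 + intercept, slope * a1
    I2 = (slope * I1[0] - err, slope * I1[1] + err)
    return c2, a2, I2

print(affine_layer_tm(c=0.0, a=0.5, I=(-0.01, 0.01), w=2.0, b=0.1))
```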
### _Bernstein Over-approximation for Activation Functions_

Now we turn to our Bernstein over-approximation method for activation functions. It first computes a Bernstein polynomial for the function and then evaluates a remainder interval to ensure the over-approximation. The polynomials are in the Bézier form.

**Definition 3** (Bernstein polynomial).: _Given a continuous function \(f(x)\) with \(x\in[a,b]\), its order-\(k\) Bernstein polynomial \(p_{k}(x)\) is defined by_ \[\sum_{i=0}^{k}\left(f\left(a+\frac{i}{k}(b-a)\right)\binom{k}{i}\left(\frac{x-a}{b-a}\right)^{i}\left(\frac{b-x}{b-a}\right)^{k-i}\right) \tag{4}\]

**Bernstein approximation in Bézier form.** The Bernstein approximation only requires the activation function to be continuous on \((p_{t},I_{t})\); it can therefore be used not only in more general situations, but also to obtain better polynomial approximations than Taylor expansions (see [67]). We first give a general method to obtain a Bernstein over-approximation for an arbitrary continuous function, and then present _a more accurate approach specifically for ReLU functions._ Assume that the activation functions in a layer are collectively represented as a vector \(\sigma(\mathbf{z})\), and that \(\mathbf{z}\) ranges in a TM \((p_{t},I_{t})\). The order-\(k\) Bernstein polynomial \(p_{\sigma_{j},i}(z_{j})\) for the activation function \(\sigma_{j}\) of the \(j\)-th neuron can then be computed as (4), where \(f\) is \(\sigma_{j}\), and \(a,b\) are respectively the lower and upper bounds of the range in the \(j\)-th dimension of \((p_{t},I_{t})\), which can be obtained from an interval evaluation of the TM.

**Remainder Evaluation.** The remainder \(I_{\sigma_{j},i}\) for the polynomial \(p_{\sigma_{j},i}\) can be obtained as a symmetric interval \([-\epsilon_{j},\epsilon_{j}]\) with \[\epsilon_{j}=\max_{s=1,\ldots,m}\left(\left|p_{\sigma_{j},i}\!\left(\frac{\overline{Z}_{j}-\underline{Z}_{j}}{m}\Big(s-\frac{1}{2}\Big)+\underline{Z}_{j}\right)-\sigma_{j}\!\left(\frac{\overline{Z}_{j}-\underline{Z}_{j}}{m}\Big(s-\frac{1}{2}\Big)+\underline{Z}_{j}\right)\right|+L_{j}\,\frac{\overline{Z}_{j}-\underline{Z}_{j}}{m}\right),\] wherein \(L_{j}\) is a Lipschitz constant of \(\sigma_{j}\) on the domain \((p_{t},I_{t})\), \([\underline{Z}_{j},\overline{Z}_{j}]\) is the range in the \(j\)-th dimension of that domain, and \(m\) is the number of samples that are uniformly selected to estimate the remainder. The soundness of the above error bound has been proved in [13] for multivariate Bernstein polynomials. Since the univariate Bernstein polynomials used in this paper are a special case of multivariate Bernstein polynomials, our approach is also sound.

Fig. 5: Single layer propagation.

**More Precise and Efficient Bernstein Over-approximation for ReLU.** The above Bernstein over-approximation method works for all continuous activation functions; however, if a function is convex or concave on the domain of interest, a more accurate Bernstein over-approximation, represented as a TM, can be obtained as follows. If a continuous function \(f(x)\) with \(x\in D\) is convex on the domain, then the Bernstein polynomials of \(f\) are no smaller than \(f\) at any point in \(D\). Thus, a tight upper bound for \(f\) can be computed as one of its Bernstein polynomials \(p\), while a tight lower bound can be obtained by shifting \(p\) straight down by the maximum difference between \(p\) and \(f\) over \(x\in D\). When \(f\) is ReLU and \(0\in D\), it is convex on \(D\), and its maximum difference to any Bernstein polynomial \(p\) is \(p(0)\). We give the Bernstein over-approximation method specific to ReLU functions as Algorithm 4.
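The following is a minimal sketch of Eq. (4) and the sampled remainder bound above; the helper names, and the small choices of \(k\) and \(m\), are our own illustration.

```python
# Bernstein polynomial of Eq. (4) and the sampled remainder bound (sketch).
from math import comb

def bernstein(f, a, b, k):
    """Order-k Bernstein polynomial of f on [a, b], returned as a callable."""
    def p(x):
        t = (x - a) / (b - a)
        return sum(f(a + i / k * (b - a)) * comb(k, i) * t**i * (1 - t)**(k - i)
                   for i in range(k + 1))
    return p

def remainder(f, p, a, b, L, m=100):
    """Symmetric remainder eps: max sampled |p - f| at cell midpoints, plus
    the Lipschitz padding L*(b-a)/m from the formula above."""
    h = (b - a) / m
    return max(abs(p(a + (s - 0.5) * h) - f(a + (s - 0.5) * h))
               for s in range(1, m + 1)) + L * h

relu = lambda x: max(x, 0.0)
p = bernstein(relu, -1.0, 1.0, k=4)
eps = remainder(relu, p, -1.0, 1.0, L=1.0)   # ReLU is 1-Lipschitz
print(p(0.0), eps)   # p(0) also bounds max(p - ReLU) when 0 is in the domain
```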
An example is illustrated in Fig. 7.

**Lemma 1**.: _Let \(p_{k}(x)\) be the order-\(k\) (\(k\geq 1\)) Bernstein polynomial of a convex function \(f(x)\) with \(x\in[a,b]\). For all \(x\in[a,b]\), we have that (i) \(f(x)\leq p_{k}(x)\) and (ii) \(p_{k+1}(x)\leq p_{k}(x)\)._

Proof.: The lemma is proved in [68] for the domain \(x\in[0,1]\). However, it also holds on an arbitrary domain \(x\in[a,b]\) after we replace the lower and upper bounds in the Bernstein polynomials by \(a\) and \(b\).

**Corollary 1**.: _If \(p(x)\) is the order-\(k\) (\(k\geq 1\)) Bernstein polynomial of ReLU\((x)\) with \(x\in[a,b]\), then \(0\leq\text{ReLU}(x)\leq p(x)\) for all \(x\in[a,b]\)._

**Lemma 2**.: _Let \(p(x)\) be the order-\(k\) (\(k\geq 1\)) Bernstein polynomial of ReLU\((x)\) with \(x\in[a,b]\) such that \(a<0<b\). Then \(p(x)-\text{ReLU}(x)\leq p(0)\) for all \(x\in[a,b]\)._

Proof.: Since ReLU\((x)\) is convex over the domain, by [68], so is \(p(x)\). Therefore, the second derivative of \(p\) w.r.t. \(x\) is non-negative. By evaluating the first derivatives of \(p\) at \(x=a\) and \(x=b\), we have that \(\frac{dp}{dx}|_{x=a}\geq 0\) and \(\frac{dp}{dx}|_{x=b}\leq 1\). Since the first derivatives of ReLU\((x)\) are \(0\) and \(1\) when \(x\in[a,0)\) and \(x\in(0,b]\) respectively, \(p(a)=\text{ReLU}(a)\), and \(p(b)=\text{ReLU}(b)\), the function \(p(x)-\text{ReLU}(x)\) is monotonically increasing when \(x\in[a,0]\) and decreasing when \(x\in[0,b]\); hence its maximum value is given by \(p(0)\).

**Example 2**.: _In Fig. 6, an NNCS with ReLU as the activation functions starts from an initial set near \((0.4,0.45)\) and moves towards a target set enclosed by the yellow rectangle (see details in Benchmark 5 [13, 14]). POLAR-Express with this new precise and efficient over-approximation approach for the ReLU function generates tighter flowpipes than POLAR. The runtime of POLAR-Express is \(1.8\)s, while the runtime of POLAR is \(7.1\)s. This result serves as evidence that the novel Bernstein over-approximation for ReLU achieves better verification efficiency and accuracy._

Fig. 6: POLAR-Express (blue) generates tighter over-approximations than POLAR (green) under the same hyper-parameters; the red curves are the simulated traces; the yellow rectangle is the target set.

### _Using Symbolic Remainders_

A main source of overestimation in interval arithmetic is the computation of linear mappings. Given a box (a Cartesian product of intervals) \(I\), its image under a linear mapping \(\mathbf{x}\mapsto A\mathbf{x}\) is often not a box and has to be over-approximated by a box in interval arithmetic. For a sequence of linear mappings, the resulting box is often unnecessarily large due to the overestimation accumulated in each mapping; this is also known as the _wrapping effect_ [48]. To avoid this class of overestimation, we may represent the intermediate boxes symbolically, and only perform an interval evaluation at the end. For example, if we need to compute the image of the box \(I\) through the linear mappings \(\mathbf{x}\mapsto A_{1}\mathbf{x},\ldots,\mathbf{x}\mapsto A_{m}\mathbf{x}\), the box \(I\) is kept symbolically and the composite mapping is computed as \(A^{\prime}=A_{m}\cdots A_{1}\); a tight interval enclosure for the image can then be obtained by evaluating \(A^{\prime}I\) using interval arithmetic.
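The wrapping effect is easy to reproduce numerically. In the sketch below (toy data of our own choosing), wrapping a box around each of eight successive 45-degree rotations inflates the enclosure, while composing the matrices first and evaluating once keeps it tight.

```python
# A small numerical illustration of the wrapping effect.
import math

def interval_image(A, box):
    """Tight interval hull of {A x : x in box} for a 2x2 matrix A."""
    out = []
    for row in A:
        lo = sum(min(a * l, a * u) for a, (l, u) in zip(row, box))
        hi = sum(max(a * l, a * u) for a, (l, u) in zip(row, box))
        out.append((lo, hi))
    return out

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
R = [[c, -s], [s, c]]                       # 45-degree rotation
box = [(-1.0, 1.0), (-1.0, 1.0)]

stepwise = box
for _ in range(8):                           # wrap a box around each rotated box
    stepwise = interval_image(R, stepwise)

composite = R
for _ in range(7):                           # compose the 8 rotations first
    composite = matmul(composite, R)
symbolic = interval_image(composite, box)

print(stepwise)   # inflated: each wrap scales widths by sqrt(2), i.e. 16x overall
print(symbolic)   # tight: 8 x 45 degrees = identity (up to float error)
```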
Although TM arithmetic uses polynomials to symbolically represent the variable dependencies, it is not free from the wrapping effect, since the remainders are always computed using interval arithmetic. Consider the TM composition for computing the output TM of a single layer in Fig. 5: the output TM \(p_{\sigma,i}(W_{i}(p_{i}(\mathbf{x}_{0})+I_{i})+B_{i})+I_{\sigma,i}\) equals \(Q_{i}W_{i}p_{i}(\mathbf{x}_{0})+Q_{i}W_{i}I_{i}+Q_{i}B_{i}+p_{\sigma,i}^{R}(W_{i}(p_{i}(\mathbf{x}_{0})+I_{i})+B_{i})+I_{\sigma,i}\), where \(Q_{i}\) is the matrix of the linear coefficients in \(p_{\sigma,i}\), and \(p_{\sigma,i}^{R}\) consists of the terms in \(p_{\sigma,i}\) of degree \(\neq 1\). Therefore, the remainder \(I_{i}\) in the second term can be kept symbolic: we do not evaluate \(Q_{i}W_{i}I_{i}\) as an interval, but carry its transformation matrix \(Q_{i}W_{i}\) forward to the subsequent layers. Given the image \(S\) of an interval under a linear mapping, we use \(\underline{S}\) to denote that it is kept symbolically, i.e., we keep the interval along with the transformation matrix, and \(\overline{S}\) to denote that the image is evaluated as an interval. Next, we present the use of SR in layer-by-layer propagation. Starting from the NN input TM \((p_{1}(\mathbf{x}_{0}),I_{1})\), the output TM of the first layer is computed as

\[\underbrace{Q_{1}W_{1}p_{1}(\mathbf{x}_{0})+Q_{1}B_{1}+p_{\sigma,1}^{R}(W_{1}(p_{1}(\mathbf{x}_{0})+I_{1})+B_{1})+I_{\sigma,1}}_{q_{1}(\mathbf{x}_{0})+J_{1}}+\underline{Q_{1}W_{1}I_{1}},\]

which can be kept in the form \(q_{1}(\mathbf{x}_{0})+J_{1}+\underline{Q_{1}W_{1}I_{1}}\). Using it as the input TM of the second layer, we have the following TM for the output range of the second layer:

\[\begin{split} p_{\sigma,2}&(W_{2}(q_{1}(\mathbf{x}_{0})+J_{1}+\underline{Q_{1}W_{1}I_{1}})+B_{2})+I_{\sigma,2}\\ &=\underbrace{Q_{2}W_{2}q_{1}(\mathbf{x}_{0})+Q_{2}B_{2}+p_{\sigma,2}^{R}(W_{2}(q_{1}(\mathbf{x}_{0})+J_{1}+\overline{Q_{1}W_{1}I_{1}})+B_{2})+I_{\sigma,2}}_{q_{2}(\mathbf{x}_{0})+J_{2}}\\ &\quad+\underline{Q_{2}W_{2}J_{1}}+\underline{Q_{2}W_{2}Q_{1}W_{1}I_{1}}.\end{split}\]

Therefore the output TM of the \(i\)-th layer can be obtained as \(q_{i}(\mathbf{x}_{0})+\mathbb{J}_{i}+\underline{Q_{i}W_{i}\cdots Q_{1}W_{1}I_{1}}\) such that \(\mathbb{J}_{i}=J_{i}+\underline{Q_{i}W_{i}J_{i-1}}+\underline{Q_{i}W_{i}Q_{i-1}W_{i-1}J_{i-2}}+\cdots+\underline{Q_{i}W_{i}\cdots Q_{2}W_{2}J_{1}}\). We present the SR method in Algorithm 5, where we use two lists, \(\mathcal{Q}[j]\) for \(Q_{i}W_{i}\cdots Q_{j}W_{j}\) and \(\mathcal{J}[j]\) for \(J_{j}\), to keep the intervals and their linear transformations. The symbolic remainder representation is replaced by its interval enclosure \(I_{r}\) at the end of the algorithm.

**Time and space complexity.** Although Algorithm 5 produces TMs with tighter remainders than Algorithm 3, because of the symbolic interval representations under linear mappings, it requires (1) two extra arrays to keep the intermediate matrices and remainder intervals, and (2) two extra inner loops which perform \(i-1\) and \(i-2\) iterations in the \(i\)-th outer iteration. The size of \(Q_{i}W_{i}\cdots Q_{j}W_{j}\) is determined by the number of rows in \(Q_{i}\) and the number of columns in \(W_{j}\); hence the maximum number of neurons in a layer determines the maximum size of the matrices in \(\mathcal{Q}\). Similarly, the maximum dimension of \(J_{i}\) is also bounded by the maximum number of neurons in a layer.
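The bookkeeping of the two lists can be sketched as follows for the affine case, with remainders as interval vectors and the activation TMs reduced to their linear parts \(Q\) plus new remainders. This is a schematic analogue of Algorithm 5 on assumed toy data, not the tool's C++ implementation.

```python
# Schematic symbolic-remainder propagation across layers (affine case).
import numpy as np

def interval_of(A, J):
    """Tight interval enclosure of {A r : r in J} for interval vector J (n x 2)."""
    lo = A.clip(max=0) @ J[:, 1] + A.clip(min=0) @ J[:, 0]
    hi = A.clip(max=0) @ J[:, 0] + A.clip(min=0) @ J[:, 1]
    return np.stack([lo, hi], axis=1)

def propagate_sr(layers, I1):
    """layers: list of (W, Q, J_new) per layer, where Q is the linear part of the
    activation TM and J_new its remainder; I1 is the input TM remainder."""
    Qlist, Jlist = [], []                   # Q[j] ~ Q_i W_i ... Q_j W_j ; J[j] ~ J_j
    for W, Q, J_new in layers:
        Phi = Q @ W
        Qlist = [Phi @ M for M in Qlist]    # first inner loop: update products
        Qlist.append(Phi)
        Jsum = J_new.copy()
        for M, J in zip(Qlist[1:], Jlist):  # second inner loop: J_i + sum Q..W J_j
            Jsum += interval_of(M, J)
        Jlist.append(J_new)
        Jlast = Jsum
    return Jlast + interval_of(Qlist[0], I1)  # enclose the symbolic part at the end

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(3, 3)), np.diag(rng.uniform(0.1, 1, 3)),
           np.tile([-1e-3, 1e-3], (3, 1))) for _ in range(4)]
print(propagate_sr(layers, np.tile([-0.01, 0.01], (3, 1))))
```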
Because of the two inner loops, the time complexity of Algorithm 5 is quadratic in \(M\), whereas that of Algorithm 3 is linear in \(M\).

**Theorem 1**.: _In Algorithm 2, if \((p(\mathbf{x}_{0},\tau),I)\) is the \(i\)-th TM flowpipe computed in the \(j\)-th control step, then for any initial state \(\mathbf{c}\in X_{0}\), the box \(p(\mathbf{c},\tau)+I=p^{\prime}(\tau)+I\) contains the actual reachable state \(\varphi_{\mathcal{N}}(\mathbf{c},(j-1)\delta_{c}+(i-1)\delta+\tau)\) for all \(\tau\in[0,\delta]\)._

```
1: Setting \(\mathcal{Q}\) as an empty array which can keep \(M+1\) matrices;
2: Setting \(\mathcal{J}\) as an empty array which can keep \(M+1\) multidimensional intervals;
3: \(\mathbb{J}\gets 0\);
4: for \(i=1\) to \(M+1\) do
5:   Computing the composite function \(p_{\sigma,i}\) and the remainder interval \(I_{\sigma,i}\) using the BP technique;
6:   Evaluating \(q_{i}(\mathbf{x}_{0})+J_{i}\) based on \(\mathbb{J}\) and \(\mathcal{Q}[1]I_{1}\);
7:   \(\mathbb{J}\gets J_{i}\);
8:   \(\Phi_{i}=Q_{i}W_{i}\);
9:   for \(j=1\) to \(i-1\) do
10:    \(\mathcal{Q}[j]\leftarrow\Phi_{i}\cdot\mathcal{Q}[j]\);
11:  end for
12:  Adding \(\Phi_{i}\) to \(\mathcal{Q}\) as the last element;
13:  for \(j=2\) to \(i\) do
14:    \(\mathbb{J}\leftarrow\mathbb{J}+\mathcal{Q}[j]\cdot\mathcal{J}[j-1]\);
15:  end for
16:  Adding \(J_{i}\) to \(\mathcal{J}\) as the last element;
17: end for
18: Computing an interval enclosure \(I_{r}\) for \(\mathbb{J}+\mathcal{Q}[1]I_{1}\);
19: return \(q_{M+1}(\mathbf{x}_{0})+I_{r}\).
```
**Algorithm 5** TM output computation using symbolic remainders; the input and output are the same as those in Algorithm 3.

## V Benchmark Evaluations

We conduct a comprehensive comparison with state-of-the-art tools across a diverse set of benchmarks. In addition, we discuss in detail the applicability and comparative advantages of the different techniques. The experiments were performed on a machine with a 6-core 2.20 GHz Intel i7 CPU and 16 GB of RAM. For tools that can leverage GPU acceleration, such as \(\alpha,\beta\)-CROWN, the experiments were run with the aid of an Nvidia GeForce GTX 1050Ti GPU. The multi-threading support is realized using the C++ Standard Library. Considering the overhead introduced by multi-threading, we also measure the performance of the application under a single thread to identify any bottleneck caused by multi-threading.

Fig. 7: The Taylor model (TM) over-approximation \(p(x)+I\) of ReLU\((x)\) is given by \(p(x)=p_{B,k}(x)-\frac{p_{B,k}(0)}{2}\) and \(I=[-\frac{p_{B,k}(0)}{2},\frac{p_{B,k}(0)}{2}]\), where \(p_{B,k}(0)\) is the Bernstein polynomial \(p_{B,k}(x)\) evaluated at \(x=0\). It can be shown that for \(x\in[a,b]\) with \(a<0<b\), the bounds of the interval remainder \(I\) are tight for any order-\(k\) Bernstein polynomial approximation with \(k\geq 1\).

**Benchmarks.** Our NNCS benchmark suite consists of Benchmarks 1-6 from [14, 13], Discrete Mountain Car (MC) from [15], Adaptive Cruise Control (ACC) from [16], 2D Spacecraft Docking from [63], Attitude Control from [41], Quadrotor-MPC from [15] and QUAD20 from [41]. The performance of a tool is evaluated based on proving the reachability tasks defined for the benchmarks, which are as follows.

_Benchmarks 1-6._ The reachability verification task asks whether the NNCS can reach the target set from any initial state in \(N\) control steps. We give the definitions of the initial and target sets in Table II.

_Discrete-Time Mountain Car (MC)._ This is a 2-dimensional NNCS describing an under-powered car driving up a steep hill.
We consider the initial condition defined by \(x_{0}\in[-0.53,-0.5]\) and \(x_{1}=0\). The target is \(x_{0}\geq 0.2\) and \(x_{1}\geq 0\), where the car reaches the top of the hill and is moving forward. The total number of control steps is \(N=150\).

_Adaptive Cruise Control (ACC)._ The benchmark models the motion of a lead vehicle and an ego vehicle. The NN controller tries to maintain a safe distance between them. We use the definition of the initial and target sets given in [40], and the number of control steps is \(N=50\).

_2D Spacecraft Docking (Example 1)._ The initial set is defined by \(x,y\in[24,26]\), \(v_{x}=v_{y}=-0.1378\), and \(v_{safe}\in[0.2697,0.2755]\), which is directly computed based on the ranges of \(x,y\). The total number of control steps is \(N=120\). In this benchmark, we only verify the safety property, i.e., the NN controller should maintain \(\sqrt{v_{x}^{2}+v_{y}^{2}}\leq v_{safe}\) at all times.

_Attitude Control & QUAD20._ The reachability problems for the two benchmarks are the same as the ones given in [41].

_Quadrotor-MPC._ This benchmark is originally given in [40]. It consists of a quadrotor and a planner. The position of the quadrotor is indicated by the state variables \((p_{x},p_{y},p_{z})\), while the velocity in the 3 dimensions is given by \((v_{x},v_{y},v_{z})\). The velocity of the planner is \((b_{x},b_{y},b_{z})\), which has a piecewise-constant definition: \((b_{x},b_{y},b_{z})=(-0.25,0.25,0)\) for \(t\in[0,2]\) (first 10 steps), \((b_{x},b_{y},b_{z})=(-0.25,-0.25,0)\) for \(t\in[2,4]\), \((b_{x},b_{y},b_{z})=(0,0.25,0)\) for \(t\in[4,5]\), and \((b_{x},b_{y},b_{z})=(0.25,-0.25,0)\) for \(t\in[5,6]\). The control input \((\theta,\phi,\tau)\) is determined by an NN "bang-bang" controller, which is a classifier mapping system states to a finite set of control actions. The initial set is defined as \(p_{x}-q_{x}\in[-0.05,-0.025]\), \(p_{y}-q_{y}\in[-0.025,0]\), and \(p_{z}-q_{z}=v_{x}=v_{y}=v_{z}=0\). The verification task asks to prove that all reachable states in 30 control steps are in the safe set \(-0.32\leq p_{x}-q_{x},p_{y}-q_{y},p_{z}-q_{z}\leq 0.32\).

**Evaluation Metrics.** The tools are compared based on their performance on all of the benchmarks. Since the tools use different hyper-parameters, we tune the settings for each benchmark so that the tools produce reachable-set over-approximations of similar size, and we then compare only the time costs. For a tool that is not able to handle a benchmark, we present its result with the best setting that we could find.

**Stopping Criteria.** We stop the run of a tool when the reachability problem is proved, or when the tool raises an error or is terminated by the operating system due to a runtime system error such as being out of memory. Hence, every test produces one of the following four results: **(Yes)** the reachability property is proved, **(No)** the reachability property is disproved, **(U)** the computed over-approximation is too large to prove or disprove the property with the best tool setting we could find, **(DNF)** a tool or system error is reported and the reachability computation fails.

**Experimental Results.** The results are summarized in Table III and Fig. 8. Because Quadrotor-MPC is a hybrid system, only POLAR-Express, Verisig 2.0, and \(\alpha,\beta\)-CROWN + Flow* are able to handle it. It is verified to be safe by POLAR-Express in 13.1 seconds and by Verisig 2.0 in 961.4 seconds. The result is Unknown for \(\alpha,\beta\)-CROWN + Flow* after 10854 seconds.
### _Challenges with Running Other Tools_

We found soundness issues when running the CORA tool on Benchmark 5 and the QUAD20 example. In Benchmark 5, both the simulation traces and the reachable sets of CORA deviate from those of the other tools. In QUAD20, the reachable sets computed by CORA do not cover the simulation traces (i.e., they are not over-approximations), as shown in Fig. 8(l). The default setup in JuliaReach reports the runtime of running the _same_ reachable set computation a second time after a "warm-up" run (whose runtime is not included), likely to take advantage of cache effects or saved computations. For a fair comparison with the other tools, we report the runtime of the first run. RINO has three "DNF"s caused by division-by-zero errors. Both RINO and JuliaReach have their own plot functions, while the other tools plot reachable sets in MATLAB. This is an issue in several examples; for instance, the plot function in RINO takes too long to plot the reachable sets. For the most complicated QUAD20 example, Verisig 2.0 failed after 17939 seconds during the 1-step reachable set computation, \(\alpha,\beta\)-CROWN + Flow* failed after 3 control steps, NNV failed after 1 step, CORA has the soundness issue mentioned earlier, the reachable set of JuliaReach explodes after 25 control steps, and RINO has the division-by-zero error.

\begin{table} \begin{tabular}{c|c|c|c} Benchmark & Initial set & Target set & \(N\) \\ \hline \hline 1 & \([0.8,0.9]\times[0.5,0.6]\) & \([0,0.2]\times[0.05,0.3]\) & 35 \\ \hline 2 & \([0.7,0.9]\times[0.7,0.9]\) & \([-0.3,0.1]\times[-0.35,0.5]\) & 10 \\ \hline 3 & \([0.8,0.9]\times[0.4,0.5]\) & \([0.2,0.3]\times[-0.3,-0.05]\) & 60 \\ \hline 4 & \([0.25,0.27]\times[0.08,0.1]\times[0.25,0.27]\) & \([-0.2,-0.1]\times[0,0.05]\) & 10 \\ \hline 5 & \([0.38,0.4]\times[0.45,0.47]\) & \([-0.43,-0.15]\times[0.05,0.22]\) & 10 \\ \hline 6 & \([-0.77,-0.75]\times[-0.45,-0.43]\times[0.51,0.54]\times[-0.3,-0.28]\) & \([-0.1,0.2]\times[-0.9,-0.6]\) & 10 \\ \hline \end{tabular} \end{table} TABLE II: Reachability verification tasks for Benchmarks 1-6.
K., "The 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 2d-dimensional 
### _Experimental Comparison and Discussion_

The evaluation above compares POLAR-Express with existing tools and shows that POLAR-Express can achieve state-of-the-art efficiency and tightness in reachable set computations.
On the other hand, current techniques still do not scale well to high-dimensional cases. In our experiments, the performance of Verisig 2.0 degrades significantly on the 6-dimensional examples, and POLAR-Express is also less efficient on the QUAD20 example. We believe state dimensions, control step sizes, and the total number of control steps are the key factors in scalability. As TMs are parameterized by the state variables, higher state dimensions lead to larger polynomial expressions in the TMs. Meanwhile, a large control step or a large number of total control steps can make it more difficult to propagate the state dependencies across the plant dynamics and across multiple control steps. We believe that addressing these scalability issues will be the main subject of future work in NNCS reachability analysis.

**Acknowledgement.** We gratefully acknowledge the support from the National Science Foundation awards CCF-1646497, CCF-1834324, CNS-1834701, CNS-1839511, IIS-1724341, CNS-2038853, ONR grant N00014-19-1-2496, and the US Air Force Research Laboratory (AFRL) under contract number FA8650-16-C-2642.
2309.07695
System Effects in Identifying Risk-Optimal Data Requirements for Digital Twins of Structures
Structural Health Monitoring (SHM) technologies offer much promise to the risk management of the built environment, and they are therefore an active area of research. However, information regarding material properties, such as toughness and strength, is instead measured in destructive lab tests. Similarly, the presence of geometrical anomalies is more commonly detected and sized by inspection. Therefore, a risk-optimal combination should be sought, acknowledging that different scenarios will be associated with different data requirements. Value of Information (VoI) analysis is an established statistical framework for quantifying the expected benefit of a prospective data collection activity. In this paper the expected value of various combinations of inspection, SHM and testing is quantified, in the context of supporting risk management of a location of stress concentration in a railway bridge. The Julia code for this analysis (probabilistic models and influence diagrams) is made available. The system-level results differ from a simple linear sum of marginal VoI estimates, i.e. the expected value of collecting data from SHM and inspection together is not equal to the expected value of SHM data plus the expected value of inspection data. In summary, system-level decision making requires system-level models.
Domenic Di Francesco, Max Langtry, Andrew B. Duncan, Chris Dent
2023-09-14T13:15:40Z
http://arxiv.org/abs/2309.07695v1
# System Effects in Identifying Risk-Optimal Data Requirements for Digital Twins of Structures

###### Abstract

Structural Health Monitoring (SHM) technologies offer much promise to the risk management of the built environment, and they are therefore an active area of research. However, information regarding material properties, such as toughness and strength, is instead measured in destructive lab tests. Similarly, the presence of geometrical anomalies is more commonly detected and sized by inspection. Therefore, a risk-optimal combination should be sought, acknowledging that different scenarios will be associated with different data requirements. Value of Information (VoI) analysis is an established statistical framework for quantifying the expected benefit of a prospective data collection activity. In this paper the expected value of various combinations of inspection, SHM and testing is quantified, in the context of supporting risk management of a location of stress concentration in a railway bridge. The Julia code for this analysis (probabilistic models and influence diagrams) is made available. The system-level results differ from a simple linear sum of marginal VoI estimates, i.e. the expected value of collecting data from SHM and inspection together is not equal to the expected value of SHM data plus the expected value of inspection data. In summary, **system-level decision making requires system-level models**.

keywords: Data-centric Engineering, Decision Analysis, Risk Management
\begin{table}
\begin{tabular}{c|c|c} Symbol & Meaning & Units \\ \hline \(a\) & risk mitigation action & \(-\) \\ \(a^{*}\) & risk mitigation action associated with expected optimal utility & \(-\) \\ \(A\) & set of available risk mitigation action(s) & \(-\) \\ \(C_{fail}\) & cost of failure & \(-\) \\ \(C_{rm}\) & cost of risk mitigation & \(-\) \\ \(e\) & data collection activity & \(-\) \\ \(e^{*}\) & data collection activity associated with expected optimal utility & \(-\) \\ \(E\) & set of available data collection activities & \(-\) \\ \(m\) & mean & \(-\) \\ \(\Pr(fail)\) & probability of failure & \(-\) \\ \(sd\) & standard deviation & \(-\) \\ \(u\) & utility & \(-\) \\ \(Var\) & variance & \(-\) \\ \(z\) & measurement data & \(-\) \\ \(f_{X}(x)\) & probability mass or density function for uncertain parameter, \(x\) & \(-\) \\ \(\alpha\) & shape parameter of Gamma distribution & \(-\) \\ \(\beta_{R}\) & reliability index & \(-\) \\ \(\gamma\) & scale parameter of Gamma distribution & \(-\) \\ \(\delta\) & misalignment & \(mm\) \\ \(\epsilon\) & measurement uncertainty parameter & \(-\) \\ \(\theta\) & uncertain and unobserved model parameter(s) & \(-\) \\ \(\rho\) & correlation coefficient & \(-\) \\ \(\sigma_{L}\) & applied stress & \(MPa\) \\ \(\sigma_{L-meas}\) & stress inferred from sensor data & \(MPa\) \\ \(\sigma_{Y}\) & yield strength & \(MPa\) \\ \(\Sigma\) & covariance matrix & \(-\) \\ \(\Phi_{G}\) & multivariate distribution defined by a Gaussian copula, associated covariance matrix and marginal distributions & \(-\) \\ \end{tabular}
\end{table} Table 1: Nomenclature

## 1 Introduction

### Risk Based Structural Integrity Management

Risk is defined as the expected consequences of uncertain outcomes [1], and can therefore be
used to rank decision alternatives consistently with statistical decision theory [2]. For instance, it is expected to be worthwhile investing in a maintenance activity if it is believed that the corresponding net reduction in failure costs is greater than the costs required to complete the work. By the same approach, the optimal activity (where multiple may be available, including the option to take no action) is that for which this net benefit is the greatest.

In industrial Risk Based Inspection (RBI) schemes, calculation complexity is often cited as the reason that simplified heuristics are used, rather than the above-mentioned principled statistical decision analysis [3; 4]. Due to the emergence of novel data collection technologies and methods for scalable probabilistic inference, it can be argued that these justifications for avoiding quantitative statistical methods are now less valid [5]. The principle of RBI is that resource allocation should be directed by risk, i.e. as components in a system become increasingly likely to fail, or the associated failures become increasingly costly (or both), their priority in maintenance budgeting should also increase. Without an absolute scale of risk (which can only be obtained from a fully quantitative risk analysis), it cannot be determined for which components investment is expected to be worthwhile; only their relative priorities can be identified, and not always consistently. Simple rules will always have pragmatic and implementation benefits, but can lead to sub-optimal resource allocation and excessive (unquantified) risk. The barriers associated with the introduction of methods from computational statistics will decrease as the cost of formulating and performing calculations decreases and training in data-centric engineering becomes widespread. The associated benefits of transparent, quantitative and auditable decision-making are then expected to emerge.

### System Effects

Recent technological developments in data collection for engineering structures have largely focused on Structural Health Monitoring (SHM). This can broadly be described as the process of fitting sensors to structural components (either during construction of new builds, or as retrofits to existing structures) so that local conditions can be measured in operation [6]. A typical application is that of installing strain gauges to better understand the applied loads at particular locations. Alongside developments in data analysis, this may have a transformative impact on the risk management of much of the built environment across many industries. The expected value of this data, which can be quantified in the context of a decision analysis (see Section 2), may be especially high when there is little other information available regarding the local environmental conditions. However, in instances where these are considered to be adequately characterised in prior models, it may not be worthwhile installing SHM systems. Rather, one of the other features of a structural integrity assessment may benefit from measurement. It is therefore important to recognise that the value of SHM data may be influenced by the present understanding of the wider structural system, which can be modelled by integrating multiple sources of information. A simple example is shown in Figure 1, commonly referred to as the Engineering Critical Assessment (ECA) or fracture mechanics triangle [7], which describes the key sources of information in an assessment of structural condition.
As an example, consider the diagram in Figure 2 of two plates (base metal) joined by a weld consisting of weld metal and a Heat Affected Zone (HAZ), which contains anomalies. Here, data from SHM may provide an indirect and imperfect measurement of the applied stress, \(\sigma_{L}\). Inspection, often referred to as Non-Destructive Testing/Evaluation (NDT/NDE), can provide similarly imperfect detection and sizing of the presence/extent of damage. Finally, material testing can provide a better understanding of the properties of the weld, HAZ, and base metal. All of these remain important inputs in empirical fracture mechanics models [9]. As SHM technologies develop, it may be possible to reliably infer material and geometrical properties from the data they provide. The methods and calculations discussed in this paper are, in principle, agnostic to the source of the data. Rather, they are concerned with a mathematical characterisation of the quality of data required by a model, in the context of the underlying decision problem for which the model is required; see Section 2. By considering the inter-dependencies between multiple sources of information, engineers can better assess the overall health of a structure and make more principled risk management decisions.

Figure 1: Features of an assessment of structural condition

Figure 2: Geometric features of welded steel connection, including plate misalignment, \(\delta\)

### Probabilistic Digital Twins

There are various, and often contradictory, definitions of the term _digital twin_. A UK government report [10] proposes the following definition, which is considered suitably broad: "Digital twins are realistic digital representations of physical things. They unlock value by enabling improved insights that support better decisions, leading to better outcomes in the physical world." The calculations and discussions in this paper concern how mathematical representations of decision problems can be used to obtain transparent and replicable (auditable) risk management strategies for structures. Such analysis is conditional on an underlying model, which may be considered to be a digital twin of the structure. Note that differing requirements have been specified for what constitutes a _digital twin_, such as the requirement for a 3-dimensional model, or for at least one source of data to be in the form of a (near) real-time stream. In practice, the form and constituent components of a digital twin (or any engineering model) should be based on the requirements of the underlying decision problem that the model is intended to support. The decision to invest in a sensing system should be based on the extent to which the data is expected to facilitate improved risk management (see Section 2), and not so that the model can be categorised as a digital twin. Engineers have a legal [11] and ethical [12] responsibility to develop and maintain safe systems, and this link between underlying risk management decisions and structural calculations/analysis should always be clearly defined, as it will be scrutinised following a failure [13]. Digital twins should be developed to effectively support decisions subject to uncertainty, such as what quality, and quantity, of data is required for a structure.

## 2 Decision Analysis

### Conceptual Introduction

The need to support high consequence decisions is what dictates requirements for various engineering analyses.
For instance, material selection during a design may be informed by stress analysis, or maintenance investments can be informed by fracture mechanics assessments. However, unless the purpose is made explicit, the results of such calculations may require some intermediary interpretation before they can be used to support a decision. This means that there is an opportunity to introduce unquantified and undocumented bias into the analysis. Influence diagrams [14] offer a concise graphical representation of the components of a decision problem, namely uncertainties, costs and possible actions. Domain knowledge is needed to design influence diagrams, as the causal model used affects the results of any calculations performed, with the addition or removal of arrows potentially leading to entirely different outcomes [15].

As an example, consider the case of an inspection for geometric anomalies on a structure. Components predicted to be associated with sufficiently large stress concentration effects (characterised by a Stress Concentration Factor, SCF) were replaced, and tensile testing specimens were machined from them. When compared with pre-commissioning test data, the yield strength, \(\sigma_{Y}\), was found to be relatively high at these locations of high SCF. This association can be described in various ways. In Figure 3, diagram (a) proposes that a high strength causes a high SCF, and diagram (b) proposes that a high SCF causes a high strength. A more suitable causal model identifies that the proof load that the structure was exposed to is a confounding variable, as shown in Figure 4. The reason that this representation matters is that it will impact risk management decisions. If a modelling team erroneously considered diagram (b) in Figure 3, then any interventions that impact the SCF would also be assumed to change the material strength. Domain expertise is required to design the structure of influence diagrams and prior models. However, they can then be used to find associated expected optimal decision strategies. Implications of such features in DAGs are discussed in more detail in [16] from a causal inference perspective, and in [17] with reference to engineering applications.

As suggested in Section 1.3, deciding whether or not to invest resources in collecting and analysing data is an on-going challenge in engineering. Fundamentally, this requires an evaluation of the question: _how_ and _to what extent_ is the proposed data expected to facilitate improved risk management? Value of Information (VoI) analysis provides a replicable and quantitative framework for addressing this question, as demonstrated by the example calculations in this paper, and in recent scientific literature [18; 19; 20; 5]. It achieves this by considering the engineering models and the underlying decision problem jointly. This helps target analysis in various ways, including identifying when and where purchasing data of a specified quality is expected to be worthwhile. Applying VoI to structural integrity management raises the question: what is it about data that provides value? The mechanism is that data reduces statistical (epistemic) uncertainty; for example, as more tensile tests are completed, engineers can better understand the strength of a material.
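As a minimal, self-contained illustration of this mechanism, the following Julia sketch performs a conjugate Normal update of a yield-strength estimate as tensile test results arrive. It is not the released calculation [35]; the prior, measurement scatter, and test results are all invented for illustration only.

```julia
# Sketch: each additional tensile test shrinks the posterior standard
# deviation of the yield strength (known-variance Normal-Normal update).
using Distributions, Statistics

prior   = Normal(400.0, 20.0)   # assumed prior on yield strength, MPa
test_sd = 10.0                  # assumed scatter of a single tensile test, MPa
tests   = [385.2, 392.7, 388.1] # hypothetical tensile test results, MPa

function update_strength(prior::Normal, data::Vector{Float64}, sd::Float64)
    prec = 1 / var(prior) + length(data) / sd^2          # posterior precision
    m    = (mean(prior) / var(prior) + sum(data) / sd^2) / prec
    return Normal(m, sqrt(1 / prec))
end

post = update_strength(prior, tests, test_sd)
println("sd(prior) = ", std(prior), "   sd(posterior) = ", round(std(post), digits = 2))
```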
As the uncertainty reduces, decision problems become simpler and decision makers may identify new expected optimal strategies, or simply expose themselves to less risk as a consequence of more precisely quantifying characteristics of their system.

Figure 3: DAGs (a) and (b) conveying alternative causal explanations for an association between \(\sigma_{Y}\) and SCF

Figure 4: Influence diagram proposing non-causal association between \(\sigma_{Y}\) and SCF

A key challenge of VoI analysis is that the value that data will eventually provide to a decision maker will depend on the result of the measurement(s) obtained. However, this needs to be evaluated before the measurement is taken. This challenge is common to Bayesian experimental design [21], which is also concerned with optimising data collection. The solution is to use a prior model that describes the uncertainty in the quantity of interest. The calculation compares the risk if a decision maker were to act based on this prior model, with the expected risk when some plausible data is incorporated. This comparison answers the initial question of how and to what extent data is expected to improve risk management decision making. When this difference is quantified on a monetary scale (as is typically the case in VoI analysis), it can be interpreted as how much an engineering team should be willing to pay for the data. Some important features of VoI analysis are summarised below:

* **Challenge in generalising results**. The expected value of data in the context of solving a decision problem is case-specific. Changing a single element of the problem may or may not change the results, and this outcome may not be intuitive.
* **Quantity vs. Quality trade-off**. For instance, for a given bridge, it may be more worthwhile to collect small amounts of load data than large amounts of high-resolution displacement data. Similarly, the value of load measurements may vary with location over the span.
* **Data is imperfect**. Considering perfect information provides a useful upper bound and simplifies calculations. Accounting for the imperfect features (precision, bias, reliability, missingness) of data in the likelihood function produces more realistic results at greater computational expense.

### Formal Definitions

Consider the influence diagram in Figure 5. Here, the structural reliability of a system, \(\beta_{R}\), is defined based on some set of uncertain parameters, \(\theta\). It determines the expected consequence of failure (along with a cost of failure). A set of risk mitigation options, \(\mathcal{A}\), which includes the option to take no action, is available to the operator. The selection of this action is required to find the cost of risk mitigation. The optimal action, \(a^{*}\), is defined as that which maximises the expected utility (or, equivalently, minimises the expected cost), as defined in Equation 1 [22; 1].

\[a^{*}=\operatorname*{arg\,max}_{a\in\mathcal{A}}\mathbb{E}_{\theta\sim f_{\Theta}(\theta)}\big[u(a,\theta)\big] \tag{1}\]

VoI analysis, originally proposed in [23], considers the opportunity to collect data and investigates how this would impact the decision analysis. The basic problem in Figure 5 is extended in Figure 6 to introduce data collection opportunities, \(\mathcal{E}\), including the option to proceed with no additional data.
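To make Equation 1 concrete before data enters the picture, here is a toy Monte Carlo sketch in Julia. The paper's actual analysis solves a mixed-integer program over an influence diagram [25; 26; 27]; the prior, capacity, and two-action set below are invented, loosely echoing the repair option that appears later in Tables 2 and 3.

```julia
# Toy approximation of Equation 1: choose the action with the largest
# Monte Carlo expected utility under the prior. All numbers are invented.
using Distributions, Statistics, Random

Random.seed!(1)
θ = rand(Normal(50.0, 10.0), 100_000)   # prior samples of applied stress, MPa
capacity = 80.0                          # assumed deterministic capacity, MPa

actions = Dict(
    "no action" => (mult = 1.00, cost = 0.000),
    "repair"    => (mult = 0.75, cost = 0.035),  # stress x 0.75; cost as a fraction of C_fail
)

function expected_u(a)
    act = actions[a]
    pr_fail = mean(θ .* act.mult .> capacity)    # Monte Carlo Pr(fail)
    return -pr_fail - act.cost                   # C_fail normalised to 1.0
end

a_star = argmax(expected_u, keys(actions))
println("a* = ", a_star, ",  E[u(a*)] = ", round(expected_u(a_star), digits = 4))
```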
Since the collected data, \(z\), will generally be some indirect measurement of \(\theta\), it will influence the estimate of \(\beta_{R}\). The two decision variables (risk mitigation and data collection) can be jointly optimised, as shown in Equation 2. The expected value of a specific data collection activity, \(e_{i}\), is defined as the difference between the expected utility with and without the data (Equation 3). This result can be interpreted as how much the operator should be willing to pay for the data, meaning that it can directly inform budgeting decisions regarding data acquisition.

Figure 5: Generic influence diagram for mitigating risk of structural failure

\[e^{*},a^{*}=\operatorname*{arg\,max}_{\substack{a\in\mathcal{A}\\ e\in\mathcal{E}}}\;\mathbb{E}_{\substack{\theta\sim f_{\Theta}(\theta)\\ z\sim f_{Z}(z\mid\theta)}}\big[u\big(a,e,f_{\Theta}(\theta\mid z)\big)\big] \tag{2}\]

\[VoI(e_{i})=\mathbb{E}_{\substack{\theta\sim f_{\Theta}(\theta)\\ z\sim f_{Z}(z\mid\theta)}}\big[u\big(e_{i},a^{*},f_{\Theta}(\theta\mid z)\big)\big]-\mathbb{E}_{\theta\sim f_{\Theta}(\theta)}\big[u\big(a^{*},f_{\Theta}(\theta)\big)\big] \tag{3}\]

The basic calculation procedure is outlined in Algorithm 1. The term _preposterior_ in this algorithm is used in engineering to note that the prior model is being updated using hypothesised data, rather than actual measurements (which are not available at the time of evaluating whether obtaining these measurements is risk-optimal) [24]. The steps involving identifying the optimal action and the associated expected optimal utility require solving an influence diagram, i.e. finding the strategy (combination of available actions) that maximises the expected utility. In this paper, this optimisation has been solved as a mixed-integer linear program using an extension [25] to the Julia mathematical optimisation library [26; 27]. This approach includes the constraint that all (decision) variables are discrete, \(a\in\mathbb{Z}^{n}\), representing the binary option to implement an action or not. Each combination of actions \(a\in A\) can therefore be assigned a discrete index, and the maximum expected utility over this set will be associated with \(a^{*}\) [28].

Figure 6: Generic influence diagram for quantifying the expected value of structural data

```
1. Identify the optimal action, \(a^{*}\), with respect to the expected utility, \(E[u]\), given the prior distribution(s) \(f_{\Theta}(\theta)\)
2. Find the associated expected prior utility \(E[u(a^{*})]\)
3. For each hypothesised measurement, \(z_{e_{i}}\), simulated from \(\theta\sim f_{\Theta}(\theta)\):
   (a) Define the likelihood function \(f_{Z}(z_{e_{i}}\mid\theta)\) describing the information content of data from measurement activity \(e_{i}\)
   (b) Update the prior model to obtain the _preposterior_ distribution, \(f_{\Theta}(\theta\mid z_{e_{i}})\)
   (c) Identify \(a^{*}\) given the preposterior distribution
   (d) Find the associated expected preposterior utility \(E[u(a^{*},z_{e_{i}})]\)
4. Calculate the expected value of the proposed measurement: \(VoI=\frac{1}{N_{samples}}\sum_{n=1}^{N_{samples}}E[u(a^{*},z_{e_{i}}(n))]-E[u(a^{*})]\)
```
**Algorithm 1** Quantitative estimate of expected value of data collection

### Practical Considerations: Qualitative Example

A desirable feature of VoI analysis is that it is a quantitative and replicable method for identifying risk-optimal data collection activities, to the extent that the associated models provide a valid representation of the true system.
For instance, some pipelines in the USA are believed to be over 50 years old, and still in operation despite missing design documentation [29]. There are instances where the material (grade of linepipe steel) is unknown. Although it may be of interest to retrofit SHM sensors to such a structure, a VoI analysis is likely to identify that it is expected to be more worthwhile to understand the material properties. Depending on the intended future operation, it may also be worthwhile to collect other types of data too, but as argued in this paper, there should be a transparent, risk-based justification for any measurements or interventions.

As a more detailed example, consider the Morandi cable stayed bridge, which collapsed in Genoa (Italy) in 2018, resulting in the deaths of 43 people. As outlined in [30; 31], previous attempts to investigate the condition of the concrete had failed to obtain the required information. The means by which data collection from the bridge could have prevented the catastrophic failure are considered below.

* **Strain gauges**: measured strains may have triggered some intervention if cables failed sequentially, with sufficient time between failures to detect the redistribution of load.
* **Inspection of cables**: provided that the inspection was able to obtain measurements of the damage present in the cables (an earlier attempt had not, [31]), this would have identified an important factor in the reduced reliability of the bridge.
* **Material testing of cables and concrete**: in the failure investigation, concrete was measured to be approximately three times weaker than the expected strength. This was attributed to illegal activities during the construction of the bridge in the 1960s. Testing of samples prior to the collapse could have identified this.

Figure 7: Schematic diagram of failed section of Morandi bridge

Chloride-induced corrosion was known to be a threat, given the marine environment that the bridge was exposed to. Propagating uncertainty in the condition of the concrete (and perhaps also the strength of the concrete) through a decision analysis could have shown that investments in data collection and subsequent risk mitigation may have been worthwhile. Without formalising this procedure, maintenance decisions lose transparency, as different interpretations of reports, calculations and confounding variables can be used to justify any outcome. In 2018 steel supports were added to the span of the bridge that failed, see Figure 7. Strengthening repairs are a valid form of risk mitigation, but will not necessarily be the best option. Such interventions should be justified using a decision analysis, with a transparent path from the intervention to the specific source of risk that is being targeted. In this case, strengthening the concrete towers did not improve the reliability of the corroding cables. Alternative options may include limiting the traffic (operational cyclic loads) on the bridge, or replacing parts with new components with more precisely known properties. There are organisational challenges in integrating some novel methods of data collection and analysis into the business processes of the asset owner, in this case Autostrade per l'Italia.
The introduction of data-driven management may improve safety by benefitting from recent research, but any existing workflows that are superseded should be carefully considered, as subject matter expertise (for instance, in defining influence diagrams and prior probabilistic models) is not presently straightforward to automate, particularly for specific assets. In this case, a generic review may have identified the threat of Chloride-induced corrosion, but a knowledge of the design and operational history would be able to confirm the extent to which this had been mitigated, or measured. The absence of useful data should have been evident in an increased uncertainty in the cable condition, which may then have identified some available risk mitigation as a worthwhile investment. These concepts are demonstrated quantitatively in Section 3.

## 3 Quantitative Example: Risk Management for a Railway Bridge

### Introduction

A bridge in Staffordshire, UK, was designed to carry passenger trains and was constructed with an SHM system [32]. Specifically, Fibre Bragg Grating (FBG) strain gauges were installed at multiple locations; a diagram of their arrangement is shown in Figure 8. Here, these sensors were placed on the main I-beams (20 each), across the span of the bridge, and on two selected transverse beams (7 each). Tensile strains of approximately \(25\mu\) strain were recorded during a train passing [33]. This bridge is used as an example to demonstrate how maintenance decisions subject to uncertainty can benefit from additional data. The values (e.g. material properties, loads, and costs) used in the calculation do not precisely match those for the Staffordshire rail bridge, but in Section 3.2 are argued to be representative of such a problem.

Figure 8: Schematic diagram of strain gauge placement on elements in Staffordshire railway bridge

The maintenance decision problem is represented by the influence diagram in Figure 9, which has been solved as part of this example. Here, each of the elements of the ECA triangle in Figure 1 is considered (as shown by the dashed lines), with options to collect data, or reduce the risk, at each of the three elements. These elements then jointly inform (an uncertain estimate of) the reliability index, \(\beta_{R}\). The load, and the geometry in particular, may vary along the span of the bridge. The influence diagram in Figure 9, and the associated probabilistic models in Section 3.2, define the conditions at a single joint. Misalignment, whether at flange connections (such as those in the Staffordshire railway bridge) or at welded connections (such as in Figure 2), concentrates stresses. Specifically, an upcoming annual maintenance window for the bridge is considered. The failure mechanisms considered are listed below:

* **Over-stress**: This limit state is defined as the combined effect of the applied load and the stress concentration exceeding the yield strength.
* **Fatigue**: The repeated application of loads, which may individually be insufficient to cause failure, exceeding a permissible limit, as defined by an SN curve (Footnote 5). Test data has been simulated from a curve for a class D joint (Footnote 6) [34], and a probabilistic model has been fit to account for the variability. See the calculation for complete details [35].

Footnote 5: This model proposes a linear relationship (on a logarithmic scale) between the amplitude of repeated stress cycles, \(S\), and the number of cycles at \(S\) before fatigue failure, \(N\).
Footnote 6: This category is considered representative for the hot-spot stress (at locations where cracks are more likely to initiate) of a range of welded joints and flange connections.

The probability of failure, \(\Pr(fail)\), is defined as the probability of exceeding at least one of these limit states; see Equation 4. This is calculated using probabilistic models of yield strength, applied stress, and stress concentration.

\[\Pr(fail)=\Pr(\text{over-stress}\cup\text{fatigue}) \tag{4}\]

Figure 9: Extension of "ECA triangle" (see dashed lines) to influence diagram representation of structural integrity management decision problem

The available actions considered in the decision problem are presented in Table 2. Note that this analysis considers that _any combination_ of these actions can be selected, including the option to take no action, and to collect all sources of data and implement all available risk mitigation.

### Decision Problem and Utility Function

The decision problem described by the influence diagram in Figure 9 is defined by probabilistic models of the uncertain parameters (including the effect of the various interventions, which are summarised in Table 2), and the costs of each outcome. Note that these models describe a specific element, at which some misalignment, and therefore stress concentration, is believed to be present.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Action & Action type & Child node & Description \\ \hline testing & data collection & \(\sigma_{Y}\) & reduces uncertainty in material properties \\ \hline repair & risk mitigation & \(\sigma_{Y}\) & strengthening repair increases capacity to withstand load. Stress is multiplied by 0.75 \\ \hline inspection & data collection & SCF & reduces uncertainty in local geometry \\ \hline replace & risk mitigation & SCF & replacing misaligned component reduces stress concentration effects (if correctly installed). SCF is multiplied by 0.0 \\ \hline SHM & data collection & \(\sigma_{L}\) & reduces uncertainty in applied load \\ \hline reduce operation & risk mitigation & \(\sigma_{L}\) & limiting the frequency of train passage reduces the number of stress cycles experienced, increasing fatigue life. Cyclic loading frequency is multiplied by 0.5 \\ \hline \end{tabular} \end{table} Table 2: Summary of intervention options available to operator and qualitative explanation of their effect on downstream nodes

The prior model for stress, \(\sigma_{L}\), is shown in Equation 5. The mean value of \(50MPa\) is approximately equivalent to the (elastic) stress associated with an SHM measurement of \(25\mu\) strain [33] and an elastic modulus of \(210GPa\), as assumed in [32]. A joint prior model of the stress concentration factor, \(SCF\), and the yield strength, \(\sigma_{Y}\), is presented in Equation 8. Here some dependency is proposed to account for the effect of previous proof load testing, which makes the combination of relatively low material strength and relatively high stress concentration less likely (see Figure 3).
\[\sigma_{L}\sim\mathcal{N}(\mu_{\sigma_{L}},\sigma_{\sigma_{L}}) \tag{5}\]

\[\mu_{\sigma_{L}}\sim\mathcal{N}(m=50,sd=5) \tag{6}\]

\[\sigma_{\sigma_{L}}\sim LogNormal(m=6,sd=3) \tag{7}\]

\[(SCF,\sigma_{Y})\sim\Phi_{G}\Big(\Gamma(\alpha_{SCF},\gamma_{SCF}),\;LogNormal(\mu_{\sigma_{Y}},\sigma_{\sigma_{Y}});\;\Sigma(SCF,\sigma_{Y})\Big) \tag{8}\]

\[\alpha_{SCF}\sim\mathcal{N}\bigg(m=2,sd=\frac{1}{2}\bigg),\;\alpha_{SCF}\geq 0 \tag{9}\]

\[\gamma_{SCF}\sim\mathcal{N}\bigg(m=\frac{1}{2},sd=\frac{1}{2}\bigg),\;\gamma_{SCF}\geq 0 \tag{10}\]

\[\mu_{\sigma_{Y}}\sim\mathcal{N}(m=400,sd=20) \tag{11}\]

\[\sigma_{\sigma_{Y}}\sim LogNormal(m=10,sd=3) \tag{12}\]

\[\Sigma(SCF,\sigma_{Y})=\begin{bmatrix}Var[SCF]&\rho(SCF,\sigma_{Y})\cdot\sqrt{Var[SCF]}\cdot\sigma_{\sigma_{Y}}\\ \rho(SCF,\sigma_{Y})\cdot\sqrt{Var[SCF]}\cdot\sigma_{\sigma_{Y}}&\sigma_{\sigma_{Y}}^{2}\end{bmatrix} \tag{13}\]

\[Var[SCF]=\alpha_{SCF}\times\gamma_{SCF}^{2} \tag{14}\]

\[\rho(SCF,\sigma_{Y})=\frac{2}{3} \tag{15}\]

An implementation of this decision analysis using the above models has been made available using the Julia programming language [36] (with supporting optimisation libraries [25; 26]). Samples from these priors have been obtained (using latin hypercube sampling [37]) for prior predictive checks and for solving the risk management decision problems [35]. These samples are then used to estimate the expected value of combinations of material testing, SHM and inspection.

The utility function in this calculation is defined in Table 3, which provides the cost of implementing the various interventions. Note that every combination of the risk mitigation actions in Table 2 is considered in the calculation. A site visit cost (which is only incurred as part of an on-site risk mitigation activity) is associated with each such action. All costs in this example are normalised, so that the consequence of failure is set to 1.0. Risk mitigation costs can therefore be considered as proportions of the failure cost. As an example, the expected utility associated with each risk mitigation option is evaluated using Equation 16. This incorporates the cost of failure, \(C_{fail}\), and of implementing any risk mitigation, \(C_{rm}\). Note that maximising the expected utility (see Equations 1 and 2) is equivalent to minimising the expected cost. As shown in Table 4, the optimal action, \(a^{*}\), conditional on the prior models of \(\sigma_{Y}\), \(SCF\), and \(\sigma_{L}\), is to not invest in any risk mitigation measures in the upcoming window.
\[E[u]=-\Pr(fail)\times C_{fail}-C_{rm} \tag{16}\]

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Intervention or outcome & \(\Pr(fail)\) & Risk mitigation cost(s) & Total cost \\ \hline no action & 0.0357 & 0 & 0.0357 (\(a^{*}\)) \\ \hline repair & 0.0101 & 0.035 & 0.0451 \\ \hline reduce operation & 0.0271 & 0.050 & 0.0771 \\ \hline replace & 0.000 & 0.085 & 0.085 \\ \hline repair and reduce operation & 0.0069 & 0.085 & 0.0919 \\ \hline repair and replace & 0.000 & 0.110 & 0.110 \\ \hline repair, replace, and reduce operation & 0.000 & 0.160 & 0.160 \\ \hline \end{tabular} \end{table} Table 4: Evaluations of utility function for prior decision analysis (with no additional data collection)

\begin{table} \begin{tabular}{|c|c|c|} \hline Intervention or outcome & Cost components & Total cost \\ \hline repair & repair (0.025) + site visit (0.01) & 0.035 \\ \hline replace & replace (0.075) + site visit (0.01) & 0.085 \\ \hline reduce operation & reduce operation (0.05) & 0.05 \\ \hline repair and replace & repair (0.025) + replace (0.075) + site visit (0.01) & 0.11 \\ \hline repair, replace, and reduce operation & repair (0.025) + replace (0.075) + reduce operation (0.05) + site visit (0.01) & 0.16 \\ \hline failure & failure (1.0) & 1.0 \\ \hline \end{tabular} \end{table} Table 3: Evaluations of utility function for various combinations of interventions

The VoI analysis presented in Section 3.3 compares this outcome to the results conditional on updated models of \(\sigma_{Y}\), \(SCF\), and \(\sigma_{L}\), based on simulated (hypothesised) data from testing, inspection, and SHM.

### Results

#### 3.3.1 Perfect Information

The expected value of perfect inspection data in the context of planning for an upcoming annual maintenance window is presented in Figure 10. This plot is the result of the decision analysis (expected optimal risk mitigation) associated with each sample from the prior model of the stress concentration. As the colorbar indicates, in instances where a relatively low stress concentration is inferred from the inspection data, the expected optimal solution is not to invest in maintenance activities. As the hypothesised misalignment measurement increases, so does the expected cost: initially because the associated higher stress concentration will expose the operator to more risk, but then also because the expected optimal decision transitions to a strategy of investing in a strengthening repair to reduce the stress in the component. This continues until the strategy transitions again, this time to replace the component and remove any stress concentration effects due to poor installation. The mean value of the expected costs from all of these simulations is an estimate of the expected maintenance cost with perfect inspection data, and is indicated by the dashed vertical line. This is compared to the expected costs without this data, as indicated by the solid vertical line (results presented in Table 4). The arrow indicates the difference between these lines, which is the expected value of the data.

This analysis can be repeated for the other data sources, as well as combinations of these, i.e. assessing the influence of collecting multiple sources of data. These results are presented in Figure 11. The key finding from this calculation is that these estimates do not simply sum linearly.
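For orientation, the following Julia sketch reproduces the shape of the prior analysis (Equations 5-16 and Table 4) with simplified, invented marginals: only the over-stress limit state is scored, and the hierarchical priors and fatigue model of the released calculation [35] are omitted, so the numbers will not match Table 4.

```julia
# Sketch: sample (SCF, σY) through a Gaussian copula with ρ = 2/3 (Eq. 8),
# sample σL, then score a few action combinations with Equation 16.
# Simplified point-value marginals, not the paper's hierarchical priors.
using Distributions, Statistics, Random

Random.seed!(1)
N, ρ = 100_000, 2 / 3
Z = rand(MvNormal([0.0, 0.0], [1.0 ρ; ρ 1.0]), N)   # correlated std normals, 2 × N
U = cdf.(Normal(), Z)                                # Gaussian copula uniforms

scf = quantile.(Gamma(2.0, 0.5), U[1, :])            # SCF marginal, cf. Eqs (9)-(10)
σy  = quantile.(Normal(400.0, 25.0), U[2, :])        # simplified σY marginal (assumption)
σl  = rand(Normal(50.0, 6.0), N)                     # applied stress, cf. Eqs (5)-(7)

# (name, stress multiplier, SCF multiplier, mitigation cost) per Tables 2-3
options = [("no action", 1.0, 1.0, 0.000),
           ("repair",    0.75, 1.0, 0.035),
           ("replace",   1.0, 0.0, 0.085)]

for (name, sm, cm, cost) in options
    pf = mean(σl .* sm .* scf .* cm .> σy)           # over-stress only
    println(rpad(name, 12), "Pr(fail) ≈ ", round(pf, digits = 4),
            "   E[cost] ≈ ", round(pf + cost, digits = 4))
end
```

With prior models of this form fixed, the preposterior loop of Algorithm 1 can be layered on top; the estimates compared in Figure 11 are of that kind.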
For instance, the expected value of inspection, \(VoPInsp\), was estimated to be 0.0332, and the expected value of SHM, \(VoPSHM\), was found to be 0.0167. However, the expected value of collecting both types of data together was estimated to be 0.0334, which is less than \(VoPInsp+VoPSHM\). Similarly, the expected value of testing alone was estimated to be 0.0012, but it is not expected to contribute any further value when completed with either SHM or inspection (or both). This means that in instances where data will be collected from an inspection or SHM system, the maintenance strategy (and associated costs) is expected to be unchanged if material testing is also completed. When a data collection activity is incorporated, the decision problem changes, and the extent to which the new optimal action space benefits from further reductions in uncertainty (from other types of data) may change. These non-linearities are introduced by the utility function, for example due to the decision boundaries presented in Figure 10. This sometimes non-intuitive transformation onto a utility scale makes it difficult to generalise results, as the analysis is evidently dependent on the features of the specific decision problem. It does, however, introduce the benefit of producing interpretable, actionable results, by providing an operator with a maximum (generally monetary) value that they should be willing to spend for specified type(s) of data. If a vendor quotes a higher price than the expected value of the data, then the risk optimal solution is simply to proceed without the data. What the results in Figure 11 demonstrate is that when there is the opportunity to collect different types of data, these should be considered jointly in a VoI analysis, or a sub-optimal data collection plan may be identified.

Figure 10: Expected value of perfect inspection data

Figure 11: Expected value of combinations of structural integrity data

#### 3.3.2 Imperfect Information

Quantifying the expected value of imperfect data increases the complexity of step 3b in Algorithm 1. Here, a likelihood function is required to describe the imperfect features of the data, and the subsequent posterior distribution is then propagated through the decision analysis. For example, in the case of assessing the expected value of SHM, a measurement from the sensor no longer removes all uncertainty from the estimate of stress (at the time and location of measurement). Rather, it is acknowledged that the measurement may be an under-estimate or over-estimate of the true stress. This imprecision can be easily incorporated into a Bayesian model, as shown in Figure 12 and Equation 17. Note that as the precision with which the stress can be estimated from the SHM data increases, the expected value of this data asymptotically approaches the expected value of perfect SHM data, as calculated in Figure 11.

\[\sigma_{L-meas}\sim\mathcal{N}(\sigma_{L},\epsilon) \tag{17}\]

Figure 12: Simple DAG representation of measurement precision in SHM sensor data

The expected value of perfect data provides a useful upper bound, and may be sufficient to support a decision (for instance, if it is still below the cost that is being quoted to obtain the data, then it can be concluded that purchasing this data is not expected to be a risk optimal strategy). As data becomes increasingly imprecise, the extent to which it will support risk management decisions will decrease (or, at best, remain the same). This is demonstrated by the sensitivity analysis results in Figure 13, where the value on the x-axis is \(\epsilon\) in Equation 17.

Figure 13: Sensitivity analysis of measurement uncertainty on expected value of SHM data
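The following Julia sketch reproduces the shape of this sensitivity study (Algorithm 1 with the likelihood of Equation 17) for an invented scalar problem. It is not the paper's calculation; the prior, capacity, and costs are assumptions chosen only so that the \(\epsilon\)-dependence is visible.

```julia
# Toy ε-sensitivity (cf. Figure 13): preposterior analysis with the
# imperfect-measurement likelihood of Equation 17. Invented scalar models.
using Distributions, Statistics, Random

Random.seed!(1)
prior = Normal(50.0, 10.0)                     # prior on applied stress σL, MPa
capacity, C_repair, mult = 75.0, 0.035, 0.75   # assumed values

pr_fail(d) = ccdf(d, capacity)                 # over-stress limit state only
# expected cost of the better of {no action, repair}; C_fail normalised to 1.0
exp_cost(d) = min(pr_fail(d),
                  pr_fail(Normal(mult * mean(d), mult * std(d))) + C_repair)

function voi(ε; N = 20_000)
    prior_cost = exp_cost(prior)
    total = 0.0
    for _ in 1:N
        θi = rand(prior)                       # "true" stress drawn from the prior
        z  = rand(Normal(θi, ε))               # hypothesised measurement, Eq. (17)
        prec = 1 / var(prior) + 1 / ε^2        # conjugate Normal update given z
        m = (mean(prior) / var(prior) + z / ε^2) / prec
        total += exp_cost(Normal(m, sqrt(1 / prec)))
    end
    return prior_cost - total / N              # expected value of the measurement
end

for ε in (0.1, 1.0, 5.0, 10.0)
    println("ε = ", lpad(ε, 5), "   VoI ≈ ", round(voi(ε), digits = 4))
end
```

As \(\epsilon\to 0\) the result approaches the perfect-information value, and it decays towards zero as the measurement becomes uninformative, mirroring the trend in Figure 13.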
Note that this pattern is true for other features of imperfect information, such as bias, missingness, reliability, and risk of obtaining the data. Examples of how to incorporate these in a VoI analysis are provided in [38].

#### 3.3.3 Forecasting Multiple Maintenance Windows

Another way to extend this problem is to quantify the expected value of data in the context of supporting multiple decision problems. Maintenance strategies are not static and depend on planned data collection or risk mitigation interventions at other points in time. To address this, the influence diagram in Figure 9 can be repeated sequentially to represent the decisions that will need to be made in multiple future maintenance windows. This representation is generally referred to as a dynamic Bayesian network (or influence diagram) [39]. The dependency between decisions at different points in time can be considered a system effect, because they are jointly optimised. Solving the dynamic influence diagram provides the results in Table 5. When simulating (indirect) measurements of stress from an SHM sensor, however, the mean expected optimal cost again reduces, and the expected VoI can be quantified as shown in Figure 14. Note that in each case a specific combination of actions was identified, as was the case in the prior analysis in Table 5. The frequency of optimal action pathways is visualised in Figure 15. Here, the width of each path is proportional to the number of samples for which that action pathway minimises expected costs.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \(a^{*}\) Year 1 & \(a^{*}\) Year 2 & \(a^{*}\) Year 3 & \(E[u(a^{*})]\) \\ \hline strengthen & strengthen & no action & 0.155224 \\ \hline \end{tabular} \end{table} Table 5: Optimal actions (w.r.t. expected utility) forecast for three successive maintenance windows

Figure 14: Prior samples of stress and expected optimal preposterior utilities for the repeated decision problem using three consecutive planning windows

Figure 15: Sequences of optimal actions (w.r.t. expected utility) forecasting over a three-year period

## 4 Conclusions

The key conclusions and propositions from this work are outlined below:

* Influence diagrams (or similar formalisations of structural integrity management decision problems) demonstrate the relationships/dependencies between different elements of a risk mitigation system. This includes showing which specific risks are mitigated by various interventions, and which quantities are measured by different data collection activities. Subject matter experts can agree this structure, which then lends itself to probabilistic modelling (uncertainty quantification) and optimisation (decision analysis) using modern software libraries. For instance, the calculation associated with the quantitative example in this paper is available in full in [35].
* Value of information analysis can be used to formalise the process of managing engineering data. This is contingent on a model, a mathematical description of the underlying decision problem, and how the (prospective) data is related to the quantities of interest. Ensuring (and transparently demonstrating) that sufficient data is obtained to effectively manage the risk of engineering systems aligns with the legal and ethical responsibilities of the profession.
Training in methods of data-centric engineering and improvements to industrial decision-support software are both expected to make VoI easier to engage with and unlock its benefits for professional engineers.

* The assessment procedure presented in this paper, namely: 1. describing the relationships between the quantities of interest, and between these and prospective data collection and risk mitigation actions; 2. identifying expected optimal maintenance actions; and 3. quantifying the expected benefit of collecting additional data, in the context of further supporting this decision, is generic. If SHM technologies allow for other quantities to be measured with the same flexibility, then this approach will still be capable of finding the expected optimal combination of data to purchase.
* In this paper it has been demonstrated that when multiple measurement opportunities are present, failing to solve this problem jointly (considering the inter-dependencies of collecting different combinations of various types of data) can lead to sub-optimal data collection plans, i.e. either gathering redundant information, or failing to collect information that becomes beneficial only in combination with other measurements.

## 5 Acknowledgements

Domenic Di Francesco is supported by the Ecosystem Leadership Award under the EPSRC Grant EP/X03870X/1, and The Alan Turing Institute, particularly the Turing Research Fellowship scheme under that grant. Max Langtry is supported by the EPSRC, through the CDT in Future Infrastructure and Built Environment: Resilience in a Changing World, Grant EP/S02302X/1. Chris Dent was supported by the Isaac Newton Institute for Mathematical Sciences, in particular the Mathematical and Statistical Foundations of Data Driven Engineering programme, when work on this paper was undertaken. This work was supported by the EPSRC Grant EP/R014604/1. He was also partially supported by a grant from the Simons Foundation.
2304.00106
A $G$-equivariant String-Net Construction
We develop a string-net construction for the (2,1)-dimensional part of a $G$-equivariant three-dimensional topological field theory based on a $G$-graded spherical fusion category. In this construction, a $G$-equivariant generalization of the Ptolemy groupoid enters. We compute the associated cylinder categories and show that, as expected, the model is closely related to the $G$-equivariant Turaev-Viro theory.
Adrien DeLazzer Meunier, Christoph Schweigert, Matthias Traube
2023-03-31T20:01:04Z
http://arxiv.org/abs/2304.00106v1
# A \(G\)-equivariant string-net construction

###### Abstract.

We develop a string-net construction for the (2,1)-dimensional part of a \(G\)-equivariant three-dimensional topological field theory based on a \(G\)-graded spherical fusion category. In this construction, a \(G\)-equivariant generalization of the Ptolemy groupoid enters. We compute the associated cylinder categories and show that, as expected, the model is closely related to the \(G\)-equivariant Turaev-Viro theory.

###### Contents

* 1 Introduction
* 1.1 Miscellaneous Notation
* 2 Categorial Preliminaries
* 2.1 Spherical Fusion Categories
* 2.2 Graphical Calculus
* 2.3 \(G\)-categories
* 3 Once extended \(G\)-equivariant HTQFTs
* 4 \(G\)-equivariant Ptolemy Groupoid
* 5 Bare String-Net Spaces and Cylinder Categories
* 5.1 A vector space for surfaces
* 5.2 Cylinder Category
* 6 \(G\)-equivariant String-Nets
* 6.1 Construction of \(G\)-equivariant String-Net Space
* 6.2 Computations of the \(G\)-String-Net Space
* A Proof of Theorem 4.9

We dedicate this article to the memory of Krzysztof Gawedzki, in admiration for the mathematical and physical depth and breadth of his work. He pioneered higher structures in quantum field theories and used, in particular, equivariant structures and orbifold constructions to get beautiful insights [11][12][13].

## 1. **Introduction**

String-nets originated in physics from a description of topological phases of matter [10]. A mathematical formulation for string-nets was later given in [11]. The idea is to consider a vector space generated by graphs embedded into a surface and labeled with data from a spherical fusion category. Relations are given by the graphical calculus in the category, which is considered locally on the surface. This can be understood as an example of a topological field theory in terms of generators and relations in the sense of [15]. Using the description of the bicategory of \((3,2,1)\)-cobordisms in terms of generators and relations [1], in [16][1] it was shown that the string-net construction of [11] can be extended to a once extended three dimensional TQFT and that the string-net TQFT is equivalent to the once extended Turaev-Viro TQFT of [1]. A natural question is whether this equivalence can be extended to a \(G\)-equivariant setting, for \(G\) any finite group. There are several different points that need to be addressed when trying to answer this question. In [13], a version of the Levin-Wen model on surfaces with \(G\)-bundles was defined. However, a rigorous mathematical formulation for \(G\)-equivariant string-nets, in the spirit of [11], was not given. If there was a suitable \(G\)-equivariant string-net construction on surfaces, possibly with boundary, one should then compare it to a \(G\)-equivariant version of the Turaev-Viro construction. TQFTs on manifolds with \(G\)-bundles were defined in [14], where they are called homotopy quantum field theories (HTQFTs). A construction of a (once extended) Turaev-Viro HTQFT with input a \(G\)-graded spherical fusion category is given in [15]. Thus there is a natural candidate to compare an equivariant string-net construction to. In this paper we give a mathematical definition for \(G\)-equivariant string-nets on compact surfaces, which are allowed to have non-empty boundary. As an algebraic input we need a \(G\)-graded spherical fusion category \(\mathcal{C}\). Furthermore, we show that our construction indeed reproduces the \((2,1)\)-part of a once extended HTQFT as defined in [17].
We show that its value on objects of a suitable \(G\)-bordism bicategory is equivalent to the \(G\)-center of \(\mathcal{C}\). In addition, we are able to compute string-net spaces on surfaces in terms of purely algebraic data, i.e. we will show in section 6.2.3 that the string-net space on a surface is isomorphic to a certain hom-space in the \(G\)-center. Comparing our string-net construction to the Turaev-Viro theory of [15] is subtle, as we formulate, for reasons explained in [17, Remark 2.4], our construction in the language of bicategories and use related, but slightly different, geometric inputs. Our assumptions allow us to compute cylinder categories rather than to postulate them, cf. Remark 5.17. An extension to three dimensional bordisms, though, is beyond the scope of this paper. Since the results of [1] are not available for \(G\)-equivariant HTQFTs, such an extension would require different methods from the ones in [16].

From the point of view of applications, our construction might be interesting for the construction of correlators in orbifold rational conformal field theory (RCFT). Constructions of correlators in (orbifold) RCFTs in terms of three dimensional TQFT were given in a series of papers [10, 11, 12, 13, 14, 15]. Due to the close connection of string-nets with three dimensional TQFTs, in [17] a string-net construction for closed RCFT correlators based on Cardy-bulk algebra field content was given. The construction was extended to arbitrary open-closed RCFTs with fixed open-closed field content in [14] and to arbitrary open-closed RCFTs with defects in [13]. Using the string-net construction given in this paper, it seems reasonable to obtain a construction of orbifold correlators very similar to the previous ones.

This paper is organized as follows. In section 2 we recall some facts about spherical fusion categories as well as about \(G\)-graded categories. In particular we give in Proposition 2.3 an explicit expression for the \(G\)-crossing of the \(G\)-center of a \(G\)-graded spherical fusion category. In section 3 we recall once extended \(G\)-equivariant HTQFTs and explain our definition for an equivariant bordism bicategory. Sections 4, 5, and 6 are the main part of the paper. In section 4 a \(G\)-labeled version of the Ptolemy groupoid is introduced. This is the central technical tool for our string-net construction. The main result in this section is Theorem 4.9, where we show that the \(G\)-enhanced Ptolemy complex inherits from the ordinary Ptolemy complex the property of being connected and simply connected. Using the \(G\)-Ptolemy groupoid, in section 5 we finally lay out our construction of the \(G\)-equivariant string-net space and show that it indeed is the \((2,1)\)-part of a once extended HTQFT. In section 6, we compute string-net spaces for a cylinder, a pair of pants, and a genus two surface with three boundary components. By doing so, we will see how the HTQFT structure of our string-net construction induces the \(G\)-crossing as well as the monoidal product in the \(G\)-center. A higher genus computation will then connect equivariant string-nets to the equivariant Turaev-Viro HTQFT.

### Miscellaneous Notation

We fix some notation, which will be used throughout the whole paper. First, \(\mathbb{K}\) will be an algebraically closed field of characteristic zero, \(G\) a finite group and \(BG\) its classifying space, which is an Eilenberg-MacLane space of type \(K(G,1)\).
For a small category \(\mathcal{C}\), the set of objects is denoted \(\mathcal{C}_{0}\) and the set of morphisms \(\mathcal{C}_{1}\). Given a graph \(\Gamma\), its sets of vertices and edges will be \(V(\Gamma)\) and \(E(\Gamma)\). The set of half-edges incident to a vertex \(v\) will be denoted \(H(v)\). Given a surface \(\Sigma\) and an embedded graph \(\Gamma\hookrightarrow\Sigma\), the connected components of \(\Gamma^{[2]}\coloneqq\Sigma\backslash\Gamma\) will be called _\(2\)-faces of \(\Gamma\)_. If a graph \(\Gamma\) is oriented, its set of oriented edges will be \(E^{or}(\Gamma)\). In addition, an edge \(e\) with an orientation will be written in bold symbols, i.e. \(\mathbf{e}\coloneqq(e,or)\). The same edge with the opposite orientation will get an additional overline, \(\overline{\mathbf{e}}\coloneqq(e,-or)\). A graph is called finite if it has finitely many vertices and edges.

**Acknowledgement:** The authors thank Yang Yang and Theo Johnson-Freyd for useful discussions. CS and MT are supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under SCHW1162/6-1; CS is also supported by DFG under Germany's Excellence Strategy - EXC 2121 "Quantum Universe" - 390833306.

## 2. **Categorial Preliminaries**

### Spherical Fusion Categories

We will work exclusively with spherical fusion categories. For the reader's convenience, we recall some definitions and facts. Proofs for the statements can be found in many sources, and an exhaustive textbook treatment is given in [1]. Categories \(\mathcal{C}\) in this paper are always abelian and enriched in the symmetric monoidal category of finite dimensional \(\mathbb{K}\)-vector spaces. In this case we speak of _\(\mathbb{K}\)-linear categories_. Since we fix the ground field from the start, we will just speak of _linear categories_. A linear category is _monoidal_ if there is a bilinear functor \(\otimes:\mathcal{C}\times\mathcal{C}\to\mathcal{C}\) and a unit object \(\mathbb{1}\) in \(\mathcal{C}\), together with the usual associativity and unitality constraints. Without loss of generality we can assume that monoidal categories are strict, meaning that the following objects are identical: \(\mathbb{1}\otimes c=c=c\otimes\mathbb{1}\) and \((a\otimes b)\otimes c=a\otimes(b\otimes c)\). Assuming these strictness conditions, associativity and unitality constraints become trivial.

In addition we take categories to be _finitely semi-simple_. That is, there is a finite set of isomorphism classes of simple objects, for which we choose a set \(I(\mathcal{C})\) of representatives, which includes the monoidal unit. Every object decomposes as a finite direct sum of simple objects and \(\operatorname{End}_{\mathcal{C}}(\mathbb{1})\simeq\mathbb{K}\). A category satisfying these properties is called a _fusion category_. Furthermore, a monoidal category has _right (resp. left) duals_ if for any object \(c\in\mathcal{C}\) there exists an object \(c^{*}\) (resp. \({}^{*}c\)) and morphisms

\[\operatorname{ev}_{c}:c^{*}\otimes c\to\mathbb{1},\qquad\operatorname{coev}_{c}:\mathbb{1}\to c\otimes c^{*} \tag{2.1}\]

for right duals and

\[\widetilde{\operatorname{ev}}_{c}:c\otimes{}^{*}c\to\mathbb{1},\qquad\widetilde{\operatorname{coev}}_{c}:\mathbb{1}\to{}^{*}c\otimes c \tag{2.2}\]

for left duals.
The evaluation and coevaluation morphisms have to satisfy the snake identities

\[(\operatorname{Id}_{c}\otimes\operatorname{ev}_{c})\circ(\operatorname{coev}_{c}\otimes\operatorname{Id}_{c})=\operatorname{Id}_{c},\qquad(\widetilde{\operatorname{ev}}_{c}\otimes\operatorname{Id}_{c})\circ(\operatorname{Id}_{c}\otimes\widetilde{\operatorname{coev}}_{c})=\operatorname{Id}_{c} \tag{2.3}\]

The category is called _rigid_ if every object has a left and right dual. A _pivotal structure_ on a rigid category is a monoidal natural isomorphism \(\pi:\operatorname{Id}_{\bullet}\Rightarrow(\bullet)^{**}\). Similar to the monoidal structure, we can assume [14, Theorem 2.2] pivotal structures to be strict if they exist, meaning \(\pi_{c}=\operatorname{Id}_{c}\). In a category with a (strict) pivotal structure, i.e. a pivotal category, we can form left and right traces of morphisms \(f\in\operatorname{End}_{\mathcal{C}}(c)\):

\[\operatorname{tr}_{\ell}(f)\coloneqq\operatorname{ev}_{c^{*}}\circ(f\otimes\operatorname{Id}_{c^{*}})\circ\operatorname{coev}_{c},\qquad\operatorname{tr}_{r}(f)\coloneqq\widetilde{\operatorname{ev}}_{{}^{*}c}\circ(\operatorname{Id}_{{}^{*}c}\otimes f)\circ\widetilde{\operatorname{coev}}_{c}\,. \tag{2.4}\]

A pivotal category is called _spherical_ if left and right traces coincide. Note that in a spherical category one can identify left and right duals, which we will implicitly do. As left and right traces agree in spherical categories, we simply speak of _the_ trace and drop the distinction between left and right traces from notation. In a spherical category, we can associate to \(c\in\mathcal{C}_{0}\) its _dimension_ \(d_{c}\in\operatorname{End}_{\mathcal{C}}(\mathbb{1})\simeq\mathbb{K}\), defined as

\[d_{c}\coloneqq\operatorname{tr}(\operatorname{Id}_{c})\,. \tag{2.5}\]

The whole category has a _global dimension_

\[D\coloneqq\sum_{i\in I(\mathcal{C})}d_{i}^{2} \tag{2.6}\]

which is non-vanishing [1, Theorem 7.21.12].

### Graphical Calculus

The graphical calculus for spherical fusion categories plays a prominent role in the string-net construction. We discuss it quickly to fix some conventions. For \(c\in\mathcal{C}_{0}\) we represent \(\mathrm{Id}_{c}\) by a straight line in the plane, oriented from bottom to top and labeled with \(c\). Similarly, \(\mathrm{Id}_{c^{*}}\) is represented by a \(c\)-labeled straight line oriented from top to bottom. A morphism \(c\xrightarrow{f}d\) is drawn as a box labeled \(f\), and composition of morphisms is simply concatenation of string diagrams. The monoidal product is represented by drawing strands from left to right. Evaluation and coevaluation morphisms are given by oriented caps and cups.

[MISSING_PAGE_POST]

Pivotality implies \(\tau_{a,b}\circ\tau_{b,a}=\operatorname{Id}\), where \(\tau_{a,b}:\operatorname{Hom}_{\mathcal{C}}(\mathbb{1},a\otimes b)\to\operatorname{Hom}_{\mathcal{C}}(\mathbb{1},b\otimes a)\) denotes the cyclic rotation of tensor factors defined via the dualities. Therefore, instead of looking at each morphism set \(\operatorname{Hom}_{\mathcal{C}}(\mathbb{1},c_{1}\otimes\cdots\otimes c_{n})\) on its own, we take the limit over the diagram

\[\cdots\longrightarrow\operatorname{Hom}_{\mathcal{C}}(\mathbb{1},c_{i+1}\otimes\cdots\otimes c_{i-1}\otimes c_{i})\xrightarrow{\ \tau_{c_{i+1},\,c_{i+2}\otimes\cdots\otimes c_{i}}\ }\operatorname{Hom}_{\mathcal{C}}(\mathbb{1},c_{i+2}\otimes\cdots\otimes c_{i}\otimes c_{i+1})\longrightarrow\cdots\]

For later use, we denote the limit by \(\mathcal{C}(c_{1},\cdots,c_{n})\).
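For convenience, we record the standard identifications of these cyclic morphism spaces with ordinary Hom-spaces (added here for orientation; they follow from the dualities of Section 2.1):

```latex
% Standard identifications (illustration): the projection of the limit to
% any node is an isomorphism, and duality shifts tensor factors across Hom.
\[
  \mathcal{C}(c_1,\dots,c_n)\;\cong\;
  \operatorname{Hom}_{\mathcal{C}}\!\big(\mathbb{1},\,c_1\otimes\cdots\otimes c_n\big),
  \qquad
  \mathcal{C}(c_1,c_2)\;\cong\;\operatorname{Hom}_{\mathcal{C}}(c_1^{*},c_2),
\]
% with the left identification canonical only up to the cyclic rotations \tau.
```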
An element \(f\in\mathcal{C}(c_{1},\cdots,c_{n})\) will be represented by a circular coupon rather than a box, since it only depends on the cyclic order of \(c_{1},\cdots,c_{n}\). There is a partial composition map for elements in the limit, induced by the maps \[\begin{split}\operatorname{Hom}_{\mathcal{C}}(\mathbb{1},a\otimes b)\otimes\operatorname{Hom}_{\mathcal{C}}(\mathbb{1},b^{*}\otimes c)&\to\operatorname{Hom}_{\mathcal{C}}(\mathbb{1},a\otimes c)\\ (f,g)&\mapsto(\operatorname{Id}_{a}\otimes\widetilde{\operatorname{ev}}_{b}\otimes\operatorname{Id}_{c})\circ(f\otimes g)\end{split} \tag{2.7}\] The composition will still be represented by concatenating strands in string diagrams. Using semi-simplicity, for any \(c\in\mathcal{C}_{0}\) and any \(i\in I(\mathcal{C})\) we can pick a basis \(\left\{\alpha_{c,i}^{k}\right\}_{k}\) in \(\mathcal{C}(c,i^{*})\) and a basis \(\left\{\alpha_{m}^{c,i}\right\}_{m}\) in \(\mathcal{C}(c^{*},i)\), with the pairing of the two bases under (2.7) normalized by \(d_{i}=\operatorname{tr}(\operatorname{Id}_{i})\); the resulting completeness relation is displayed in Figure 1. Finally we discuss _6j-symbols_ in \(\mathcal{C}\). In a pivotal finite semi-simple category, by choosing bases in the spaces of three-point couplings, the vector space \(\mathcal{C}(i,j,k,\ell)\) has two distinct bases. One stems from the decomposition \(\mathcal{C}(i,j,k,\ell)\simeq\bigoplus_{r}\mathcal{C}(i,j,r)\otimes\mathcal{C}(r^{*},k,\ell)\), whereas the other corresponds to the splitting \(\mathcal{C}(i,j,k,\ell)\simeq\bigoplus_{s}\mathcal{C}(j,k,s)\otimes\mathcal{C}(s^{*},\ell,i)\). The entries of the transformation matrix between the two bases are the 6j-symbols, and they satisfy the usual pentagon relation.

### \(G\)-categories

This section is a recollection of the relevant definitions associated to categories which are \(G\)-graded and possibly carry a \(G\)-action. The reader can find detailed accounts of \(G\)-categories in the literature, e.g. [10, Appendix 5], [13] and references therein. Besides recalling definitions, we give in Proposition 2.3 an alternative definition for a \(G\)-crossing on the \(G\)-center of a \(G\)-graded category. Though our definition is equivalent to the one in [13], we still introduce it, since it makes the discussion of string-nets on cylinders more transparent later. Throughout the section \(\mathcal{C}\) is a \(\mathbb{K}\)-linear, strict monoidal category. \(\overline{G}\) denotes the category having as objects the elements of \(G\) and only identity morphisms. Multiplication in \(G\) endows \(\overline{G}\) with a strict monoidal structure.

**Definition 2.1**.: The monoidal, linear category \(\mathcal{C}\) is _\(G\)-graded_ if it decomposes into a direct sum of pairwise disjoint, full \(\mathbb{K}\)-linear subcategories \(\big{\{}\mathcal{C}_{g}\big{\}}_{g\in G}\), such that

1. any object \(c\in\mathcal{C}\) decomposes as a finite direct sum \(c=c_{g_{1}}\oplus\cdots\oplus c_{g_{n}}\), with \(c_{g_{i}}\in\mathcal{C}_{g_{i}}\).
2. for \(c\in\mathcal{C}_{g}\), \(d\in\mathcal{C}_{h}\), it holds that \(c\otimes d\in\mathcal{C}_{gh}\).
3. \(\operatorname{Hom}(c,d)=0\) for \(c\in\mathcal{C}_{g}\) and \(d\in\mathcal{C}_{h}\) with \(g\neq h\).
4. \(\mathbb{1}\in\mathcal{C}_{e}\).

Figure 1. Completeness relation in the semi-simple category \(\mathcal{C}\).

The subcategories \(\mathcal{C}_{g}\) are called homogeneous components of \(\mathcal{C}\). The component \(\mathcal{C}_{e}\) for the neutral element \(e\in G\) is called the _neutral component_. There is an obvious forgetful functor \(U:\mathcal{C}\to\mathcal{C}\), which simply forgets the \(G\)-grading.
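_Example_ (standard, for illustration). The simplest \(G\)-graded fusion category is \(\mathcal{C}=\operatorname{Vect}_{G}\), the category of finite-dimensional \(G\)-graded \(\mathbb{K}\)-vector spaces. The homogeneous component \(\mathcal{C}_{g}\) is spanned by the one-dimensional simple object \(\delta_{g}\), with \[\delta_{g}\otimes\delta_{h}\simeq\delta_{gh},\qquad\mathbb{1}=\delta_{e},\qquad\operatorname{Hom}(\delta_{g},\delta_{h})=0\ \text{ for }g\neq h\,,\] so conditions 1.-4. of Definition 2.1 hold. Here the only simple object in \(\mathcal{C}_{g}\) is \(\delta_{g}\), \(d_{\delta_{g}}=1\) and \(D=|G|\); twisting the associator by a \(3\)-cocycle \(\omega\in Z^{3}(G,\mathbb{K}^{\times})\) yields the variants \(\operatorname{Vect}_{G}^{\omega}\).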
A \(G\)-graded category \(\mathcal{C}\) is rigid/pivotal/spherical if its underlying monoidal linear category is rigid/pivotal/spherical. It is fusion if its underlying category is a fusion category such that any homogeneous component contains at least one simple object. In a \(G\)-graded fusion category, any set of representing objects for isomorphism classes of simple objects splits as a disjoint union \(I=\bigsqcup_{g\in G}I_{g}\), with \(I_{g}\) the set of isomorphism classes of simples in \(\mathcal{C}_{g}\). Thus simple objects are always homogeneous. In a \(G\)-graded category the group \(G\) serves as an index set, but doesn't act on the category so far. A \(G\)-action will come in the form of a \(G\)-crossing. For a monoidal category \(\mathcal{D}\), recall that \(\operatorname{Aut}_{\otimes}(\mathcal{D})\) is the category having monoidal equivalences \(F:\mathcal{D}\xrightarrow{\simeq}\mathcal{D}\) as objects and monoidal natural isomorphisms as morphisms. Composition of functors equips it with a strict monoidal structure whose monoidal unit is the identity functor.

**Definition 2.2**.: A \(G\)-crossed category is a \(G\)-graded category \(\mathcal{C}\) together with a strong monoidal functor \(\rho:\overline{G}\to\operatorname{Aut}_{\otimes}(\mathcal{C})\) such that \(\rho_{h}(\mathcal{C}_{g})\subset\mathcal{C}_{h^{-1}gh}\). The functor \(\rho\) is called a _\(G\)-crossing_.

For any \(g\in G\), \(\rho_{g}\) is a strong monoidal functor. In addition, as \(\rho\) is strong monoidal, a \(G\)-crossed category comes with natural isomorphisms \(\left\{\eta_{h,g}(\bullet):\rho_{g}\rho_{h}(\bullet)\xrightarrow{\simeq}\rho_{hg}(\bullet)\right\}\) and \(\eta_{0}(\bullet):\operatorname{Id}_{\mathcal{C}}\xrightarrow{\simeq}\rho_{e}(\bullet)\). These maps satisfy the usual coherence diagrams, which can be found in [13, section 3].
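_Example_ (standard, for illustration). On \(\mathcal{C}=\operatorname{Vect}_{G}\) with trivial associator, a \(G\)-crossing is given on simple objects by conjugation, \[\rho_{h}(\delta_{g})\coloneqq\delta_{h^{-1}gh}\,,\] with the evident coherence isomorphisms; in particular \(\rho_{g}\rho_{h}(\delta_{k})=\delta_{(hg)^{-1}k(hg)}=\rho_{hg}(\delta_{k})\), matching the maps \(\eta_{h,g}\). For abelian \(G\) this action is trivial on objects.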
Just as it is sometimes necessary to equip a given monoidal category with the additional structure of a braiding, there is a meaningful notion of a braiding for \(G\)-crossed categories. This cannot be an ordinary braiding for the underlying monoidal category, since for non-abelian \(G\) it holds in general that \(\operatorname{Hom}(c\otimes d,d\otimes c)=0\) for \(c\), \(d\) in different homogeneous components of \(\mathcal{C}\). However, \(\operatorname{Hom}(c\otimes d,d\otimes\rho_{g}(c))\) for \(d\in\mathcal{C}_{g}\) and \(c\in\mathcal{C}_{h}\) has a chance to be non-zero, as both \(c\otimes d\) and \(d\otimes\rho_{g}(c)\) lie in \(\mathcal{C}_{hg}\). Thus, a \(G\)-braiding is a natural isomorphism \[\left\{\beta_{c,d}:c\otimes d\to d\otimes\rho_{g}(c)\right\} \tag{2.8}\] defined for homogeneous elements \(d\in\mathcal{C}_{g}\), \(c\in\mathcal{C}_{h}\) for all \(g\), \(h\in G\) and linearly extended to all objects of \(\mathcal{C}\). The natural isomorphism has to satisfy three coherence diagrams, for which we refer to [13, section 3]. To a \(G\)-graded category we associate its \(G\)-center \(\mathsf{Z}_{G}(\mathcal{C})\), which is defined to be the relative center with respect to \(\mathcal{C}_{e}\). To be a bit more explicit, the \(G\)-center has as objects pairs \((c,\gamma_{c,\bullet})\), where \(c\in\mathcal{C}\) and \(\gamma_{c}\) is a relative half-braiding for objects in the neutral component \(\mathcal{C}_{e}\), i.e. \[\gamma_{c,\bullet}=\left\{\gamma_{c,X}:c\otimes X\to X\otimes c\right\}_{X\in\mathcal{C}_{e}} \tag{2.9}\] is a natural isomorphism satisfying the usual hexagon relation. A morphism \((c,\gamma_{c})\to(d,\gamma_{d})\) is a morphism \(f:c\to d\) in \(\mathcal{C}\) satisfying \((\operatorname{Id}_{X}\otimes f)\circ\gamma_{c,X}=\gamma_{d,X}\circ(f\otimes\operatorname{Id}_{X})\). The \(G\)-center obviously is a \(G\)-graded category. Similar to the non-graded case, for \(\mathcal{C}\) a \(G\)-graded spherical fusion category, the center has the structure of a \(G\)-modular category. Of course, for a \(G\)-graded category there also exists its Drinfeld center \(\mathsf{Z}(\mathcal{C})\). The \(G\)-center can be very different from the Drinfeld center; however, the two are related via orbifolding [1, Theorem 3.5]. Since we don't need the full \(G\)-modular structure on \(\mathsf{Z}_{G}(\mathcal{C})\), we simply refer to [13] for the definition of a \(G\)-modular category. We only need that \(\mathsf{Z}_{G}(\mathcal{C})\) is \(G\)-crossed, and we give the \(G\)-crossing explicitly. Similar to the non-equivariant case, there is an adjunction \[I:\mathcal{C}\rightleftharpoons\mathsf{Z}_{G}(\mathcal{C}):F,\qquad I\dashv F \tag{2.10}\] with \(F\) the forgetful functor and its adjoint functor \(I\), the _induction functor_, defined on objects by \[I(c)\coloneqq\bigoplus_{i\in I_{e}}i^{*}\otimes c\otimes i\,, \tag{2.11}\] where \(I_{e}\) is a set of representatives of the simple objects in \(\mathcal{C}_{e}\). Its action on morphisms is the obvious one. To have the structure of an element in the graded center, the object \(I(c)\) is equipped with the standard non-crossing half-braiding, defined for \(X\in\mathcal{C}_{e}\). (Recall from Figure 1 that the pairwise appearance of \(\alpha\) implies a summation over dual bases.) We will continue with this notation, as figures tend to become overloaded with notation otherwise. Due to the compatibility condition with the half-braiding, \(\operatorname{Hom}_{\mathsf{Z}_{G}(\mathcal{C})}(a,b)\subset\operatorname{Hom}_{\mathcal{C}}(a,b)\) is a proper subspace. (Here, by abuse of notation, we identify \(a\in\mathsf{Z}_{G}(\mathcal{C})\) with the underlying object in \(\mathcal{C}\).) An idempotent \(P:\operatorname{Hom}_{\mathcal{C}}(a,b)\to\operatorname{Hom}_{\mathcal{C}}(a,b)\) with \(\operatorname{Im}(P)=\operatorname{Hom}_{\mathsf{Z}_{G}(\mathcal{C})}(a,b)\) is given by surrounding a morphism with a _cloaking circle_, where the crossings of the cloaking circle with the \(a\)- and \(b\)-labeled lines correspond to the half-braidings of \(a\) and \(b\). The proof that \(P\) is an idempotent is exactly the same as in the non-graded case (cf. [1]). The existence of a \(G\)-crossing on \(\mathsf{Z}_{G}(\mathcal{C})\) for \(\mathcal{C}\) a \(G\)-fusion category was proven in [14]. An explicit expression for a \(G\)-crossing on \(\mathsf{Z}_{G}(\mathcal{C})\) is given in [13], where \(\mathcal{C}\) only needs to be a non-singular (see [13, section 4.1] for the definition), pivotal \(G\)-graded category. We only work with spherical \(G\)-fusion categories, which are automatically non-singular. To define a \(G\)-crossing we generalize (2.11) and consider a functor \(I^{h}:\mathsf{Z}_{G}(\mathcal{C})\to\mathcal{C}\), which acts on objects as \(I^{h}(c)=\bigoplus_{i\in I_{h}}i^{*}\otimes c\otimes i\). The action on morphisms is the obvious one. We want to construct a \(G\)-crossing on \(\mathsf{Z}_{G}(\mathcal{C})\) from \(I^{h}\). In order to do so, we consider the idempotent \(\pi_{c}\), where we again use the half-braiding \(\gamma_{c,i^{*}\otimes j}\) to braid the strands.
It is shown in [14] that \(\sum_{i\in I_{e}}d_{i}^{2}=\sum_{j\in I_{h}}d_{j}^{2}\) for all \(h\in G\). Using this, it is straightforward to show that \(\pi_{c}\) is indeed an idempotent. Its image is denoted \(P(c)\coloneqq\operatorname{Im}(\pi_{c})\); restriction and inclusion maps are depicted as in the corresponding figures. The image \(P(c)\) has a half-braiding, defined for \(X\in\mathcal{C}_{e}\), in which we again use the half-braiding \(\gamma_{c,i\otimes X\otimes j^{\prime}}\). Thus \((P(c),\gamma_{P(c),\bullet})\in\mathsf{Z}_{G}(\mathcal{C})_{h^{-1}gh}\). From that, we can give an explicit \(G\)-crossing for \(\mathsf{Z}_{G}(\mathcal{C})\).

**Proposition 2.3**.: _The maps_ \[\begin{split}\phi_{h}:\mathsf{Z}_{G}(\mathcal{C})_{g}&\to\mathsf{Z}_{G}(\mathcal{C})_{h^{-1}gh}\\ c&\mapsto(P(c),\gamma_{P(c),\bullet})\end{split} \tag{2.12}\] _constitute a \(G\)-crossing on \(\mathsf{Z}_{G}(\mathcal{C})\)._

Up to a reordering of tensor factors, the proof is the same as the one given in [11, section 4], hence we skip it here.

_Remark 2.4_.: The \(G\)-crossing we defined is not the same as the \(G\)-crossing given in [11]. However, in the semi-simple case, it is not hard to show that the centers equipped with the two different \(G\)-crossings are equivalent as \(G\)-crossed categories. Our definition is motivated by string-net constructions on cylinders (cf. section 6.2.1).

## 3. **Once extended \(G\)-equivariant HTQFTs**

The modern formulation of once extended HTQFTs uses the language of bicategories and \(2\)-functors. A suitable definition of a symmetric monoidal bicategory \(G\mathcal{Bord}(n,n-1,n-2)\) of once extended \(G\)-equivariant bordisms was given in [12]. For technical reasons we need a slight modification of this bicategory and consider pointed maps to \(BG\) for objects. For convenience, we will also choose base points for one-dimensional manifolds in the following definition. As we will explain in Remark 3.2, these choices are not really essential. In the following, a manifold \(M\) is _pointed_ if for each of its connected components a distinguished basepoint has been chosen. A _map between pointed manifolds_ \(M\xrightarrow{f}N\) is a continuous, basepoint-preserving map. In addition, once and for all we choose a basepoint \(\star\in BG\).

**Definition 3.1**.: The symmetric monoidal bicategory \(G\mathcal{Bord}_{\star}(3,2,1)\) is given by:

1. Objects of \(G\mathcal{Bord}_{\star}(3,2,1)\) are pairs \((M,f)\) of a pointed, closed, oriented \(1\)-dimensional manifold \(M\) and a pointed map \(f:M\to(BG,\star)\).
2. A \(1\)-morphism is a pair \((\Sigma,\zeta):(M_{0},f_{0})\to(M_{1},f_{1})\) consisting of an oriented, compact, collared \(2\)-dimensional manifold with boundary \(\Sigma\), orientation-preserving diffeomorphisms \(\iota_{0}:M_{0}\times(-1,0]\to\Sigma_{0}\), \(\iota_{1}:M_{1}\times[0,1)\to\Sigma_{1}\) and a map \(\zeta:\Sigma\to BG\), such that the evident diagram commutes. Here \(\Sigma_{0}\cup\Sigma_{1}\) is a collar for \(\Sigma\). Note that no basepoint is chosen for \(\Sigma\).
3. A \(2\)-morphism \((W,\phi):(\Sigma_{0},\zeta_{0})\Rightarrow(\Sigma_{1},\zeta_{1})\) in \(G\mathcal{Bord}_{\star}(3,2,1)\) consists of a compact, oriented, collared \(3\)-manifold with corners \(W\) together with a map \(\phi:W\to BG\) restricting to \(\zeta_{0}\) and \(\zeta_{1}\) on the incoming and outgoing boundary pieces; two such pairs are identified if they are related by an orientation-preserving diffeomorphism compatible with the collars and the maps to \(BG\).
The target of the theory is the symmetric monoidal bicategory \(\mathcal{B}imod_{\mathbb{K}}\). Its objects are linear categories. A \(1\)-morphism \(\mathcal{C}\xrightarrow{F}\mathcal{D}\) is a linear functor \[\mathcal{D}^{op}\boxtimes\mathcal{C}\longrightarrow\mathcal{V}ect_{\mathbb{K}}\,, \tag{3.2}\] where \(\mathcal{D}^{op}\boxtimes\mathcal{C}\) has as objects pairs \((d,c)\in\mathcal{D}^{op}\times\mathcal{C}\) and \(\operatorname{Hom}_{\mathcal{D}^{op}\boxtimes\mathcal{C}}((d,c),(d^{\prime},c^{\prime}))\coloneqq\operatorname{Hom}_{\mathcal{D}^{op}}(d,d^{\prime})\otimes_{\mathbb{K}}\operatorname{Hom}_{\mathcal{C}}(c,c^{\prime})\). Other names for \(1\)-morphisms are bimodules or profunctors. A \(2\)-morphism simply is a natural transformation between linear functors. The enriched tensor product gives \(\mathcal{B}imod_{\mathbb{K}}\) a symmetric monoidal structure. Given profunctors \(F:\mathcal{C}^{op}\boxtimes\mathcal{D}\longrightarrow\mathcal{V}ect_{\mathbb{K}}\) and \(G:\mathcal{D}^{op}\boxtimes\mathcal{C}\longrightarrow\mathcal{V}ect_{\mathbb{K}}\), the composition \(F\circ G:\mathcal{C}^{op}\boxtimes\mathcal{C}\longrightarrow\mathcal{V}ect_{\mathbb{K}}\) is given by the coend \[(F\circ G)(e,c)\coloneqq\int^{d\in\mathcal{D}}F(e,d)\otimes G(d,c)\,. \tag{3.3}\]

## 4. **\(G\)-equivariant Ptolemy Groupoid**

In this section we introduce \(G\)-triangulations on surfaces, which are an enhancement of ideal triangulations of a surface. The main result is Theorem 4.9. It allows us to treat any \(G\)-surface as a combinatorial object. We consider an oriented, compact smooth surface \(\Sigma\) with \(r>0\) boundary components. We choose a distinguished point on each connected component of the boundary and denote by \(\delta=\{\delta_{1},\cdots,\delta_{r}\}\) the chosen set of points. In addition, we fix an arbitrary finite set \(M\subset\Sigma\backslash\partial\Sigma\) of marked points. In case \(\Sigma\) is a disk, \(M\) has to contain at least one element; in all other cases \(M\) may be empty.

**Definition 4.1**.: [13, Definition 1.19] An ideal triangulation \(\mathsf{T}\) of \((\Sigma,\delta,M)\) consists of a collection \(\{\alpha_{i}\}_{i\in I}\) of isotopy classes of embedded arcs, having endpoints in \(\delta\cup M\), such that for every boundary component \(b\), its isotopy class relative to \(\delta\cup M\) is contained in \(\{\alpha_{i}\}_{i\in I}\) and \(\Sigma-\cup_{i\in I}\alpha_{i}\) is a disjoint union of triangles.

The dual graph of an ideal triangulation is a uni-trivalent fat graph, whose cyclic order at vertices is induced by the orientation of the boundaries of dual triangles. The orientation of the boundary of a triangle is, of course, induced by the orientation of \(\Sigma\).
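_Example_ (for illustration). The combinatorial size of an ideal triangulation is fixed by the topology: with \(V=|\delta\cup M|\), the Euler characteristic gives \(V-E+F=\chi(\Sigma)\), and counting triangle sides gives \(3F=2E_{int}+E_{\partial}\), since internal arcs appear twice among the sides of triangles and boundary arcs once. For the disk with \(\delta=\{\delta_{1}\}\) and a single marked point \(M=\{m\}\), these two equations force \(E=2\) and \(F=1\): one boundary arc and one internal arc traversed twice by the single triangle, which is exactly the bubble configuration of figure 2.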
The \(1\)-skeleton \(\mathsf{T}^{[1]}\) of \(\mathsf{T}\) is an isotopy class of an embedded graph, and when speaking of vertices, edges and faces of an ideal triangulation, we always mean vertices, edges and faces of \(\mathsf{T}^{[1]}\). Ideal triangulations aren't triangulations in general; e.g. we may encounter bubble graphs like in figure 2. It is well known that any two triangulations of a surface can be transformed into each other by a finite sequence of \(2-2\) and \(3-1\) Pachner moves. Although ideal triangulations aren't triangulations, there is an analog of the \(2-2\) Pachner move for ideal triangulations.

Figure 2. One configuration of possible arcs in an ideal triangulation. By cutting along the edges one obtains an honest triangle.

**Definition 4.2**.: Let \(f\in E(\mathsf{T})\) be such that \(f\) is adjacent to two different \(2\)-faces of \(\mathsf{T}\) and \(f\) is not a boundary edge. A _flip along \(f\)_ is the move shown in figure 3.

In contrast to triangulations, there is no \(3-1\) move for ideal triangulations, since we work with a fixed set of vertices. To be more precise, there exists a \(2\)-dimensional CW-complex \(\mathcal{P}(\Sigma,\delta,M)\), the Ptolemy-complex, describing ideal triangulations on \((\Sigma,\delta,M)\). Like the Lego-Teichmüller complex, the Ptolemy-complex is additional structure on \(\Sigma\), which enables us to treat the surface \(\Sigma\) in a combinatorial way.

**Definition 4.3**.: The _Ptolemy complex_ \(\mathcal{P}(\Sigma,\delta,M)\) is the \(2\)-dimensional CW-complex with

**0-cells:**: Vertices of \(\mathcal{P}(\Sigma,\delta,M)\) are ideal triangulations.

**1-cells:**: There is an edge between two vertices for any flip \(F_{e}:\mathsf{T}\to\mathsf{T}^{\prime}\) as in figure 3.

**2-cells:**: There are three types of \(2\)-cells:

**P1:**: For \(f\in E(\mathsf{T})\) such that the flip \(F_{f}\) exists, there is a \(2\)-cell expressing that flipping again at the new edge returns \(\mathsf{T}\). The edge \(f^{\prime}\) is the new edge appearing in the flip (see figure 3).

**P2:**: For any two edges \(e\), \(f\in E(\mathsf{T})\) with disjoint endpoints such that \(F_{e}\) and \(F_{f}\) are defined, the flips commute, i.e. there is a quadrilateral \(2\)-cell. The flips \(F^{\prime}_{e}\), \(F^{\prime}_{f}\) are the flips performed at the edges \(e\), respectively \(f\), which are now part of the new ideal triangulations \(\mathsf{T}^{\prime}_{1}\) and \(\mathsf{T}^{\prime}_{2}\).

**P3:**: Given two edges \(e\), \(f\in E(\mathsf{T})\) sharing exactly one endpoint such that \(F_{e}\) and \(F_{f}\) exist, there is a pentagonal \(2\)-cell.

Figure 3. Flip move along the edge \(f\). The red lines show the dual fat graph.

**Theorem 4.4**.: _[12, Corollary V.1.1.2] The Ptolemy-complex \(\mathcal{P}(\Sigma,\delta,M)\) is connected and simply connected._

Connectedness is a classical result and can be proven by purely combinatorial methods. However, the proof that \(\mathcal{P}(\Sigma,\delta,M)\) is simply connected uses a fair amount of Teichmüller theory. The crucial point is that ideal triangulations, or fat graphs, give a cell decomposition of the decorated Teichmüller space \(\mathcal{T}(\Sigma,\delta,M)\), which is a contractible space. In [12, Chapter 5, Definition 1.1] the _Ptolemy-groupoid_ is defined as the path groupoid of \(\mathcal{T}(\Sigma,\delta,M)\); thus it is the fundamental groupoid of the Ptolemy-complex. So far we have discussed the classical situation of just a surface with ideal triangulations. However, in this paper, we need a Ptolemy complex which also accounts for \(G\)-bundles over the surface.
Thus, we introduce the notion of a \(G\)-Ptolemy-complex and show that it is still connected and simply connected. In spirit this is a discrete realization of the holonomy functor for the \(G\)-bundle determined by \(\zeta:\Sigma\to BG\).

**Definition 4.5**.: A _\(G\)-labeled ideal triangulation_ on \((\Sigma,\delta,M)\) is an isotopy class of an embedded oriented graph \(\mathsf{T}\hookrightarrow\Sigma\) together with a map \(g:E^{or}(\mathsf{T})\to G\), such that

1. the underlying graph of \(\mathsf{T}\) obtained by forgetting the orientation is an ideal triangulation of \((\Sigma,\delta,M)\).
2. the map \(g\) satisfies \(g(\mathbf{e})=g(\overline{\mathbf{e}})^{-1}\).
3. if oriented edges \(\mathbf{e}_{1}\), \(\mathbf{e}_{2}\), \(\mathbf{e}_{3}\) form the counterclockwise oriented boundary of a triangle of \(\mathsf{T}\), the following relation holds in \(G\): (4.1) \[g\left(\mathbf{e}_{1}\right)g\left(\mathbf{e}_{2}\right)g\left(\mathbf{e}_{3}\right)=e\,.\]

The map \(g\) is defined using oriented edges. Due to condition ii), giving the map on one orientation of an edge uniquely fixes the map on the edge with the opposite orientation.

**Definition 4.6**.: Given an edge \(\mathbf{f}\) of a \(G\)-triangulation \(\mathsf{T}\) which is adjacent to two different \(2\)-cells, the _\(G\)-flip along \(\mathbf{f}\)_ is given by the transformation shown in figure 4.

_Remark 4.7_.: The edge \(\mathbf{f}\) at which a \(G\)-flip is performed cannot be a boundary edge.

\(G\)-triangulations are a combinatorial tool to describe \(1\)-morphisms in the bicategory \(G\mathcal{Bord}_{\star}(3,2,1)\). In \(G\mathcal{Bord}_{\star}(3,2,1)\) all three layers of data are equipped with a map to \(BG\). Equivalently one can regard \(G\mathcal{Bord}_{\star}(3,2,1)\) as the bicategory of bordisms together with the datum of a \(G\)-principal bundle. To make this precise, recall that the _mapping groupoid_ \(\Pi(M,N)\), for topological spaces \(M\), \(N\), has as objects continuous maps \(M\xrightarrow{f}N\), and morphisms \(f\to g\) are homotopy classes relative to \(M\times\{0,1\}\) of homotopies \(f\sim_{h}g\). For \(N=BG\) the classifying space of \(G\), there is a canonical equivalence of groupoids \[\Pi(M,BG)\simeq\mathcal{P}\mathcal{B}un_{G}(M)\,, \tag{4.2}\] where \(\mathcal{P}\mathcal{B}un_{G}(M)\) is the groupoid of \(G\)-principal bundles on \(M\). By choosing a basepoint \(\bullet\in M\) and restricting objects to pointed maps from \(M\) to \(BG\), but keeping as morphisms equivalence classes of unpointed homotopies, one gets an equivalent groupoid \(\Pi_{\star}(M,BG)\). So from the pointed map \(f:(M,\bullet)\to(BG,\star)\) we get that objects of \(G\mathcal{Bord}_{\star}(3,2,1)\) can be described as unions of circles with fixed \(G\)-principal bundles. The underlying manifold of a connected object \(((M,\bullet),f)\) is diffeomorphic to a circle \(S^{1}\), and its classifying map determines a homomorphism \(f_{*}:\pi_{1}(S^{1},\bullet)\simeq\mathbb{Z}\to G\simeq\pi_{1}(BG,\star)\), i.e. an element \(f_{*}(1)\eqqcolon g\) in \(G\). In this sense, we can identify objects of \(G\mathcal{Bord}_{\star}(3,2,1)\) with finitely many circles which are labeled by a group element. Let \((\Sigma,\zeta):((M_{0},\bullet),f_{0})\to((M_{1},\bullet),f_{1})\). The surface \(\Sigma\) itself is not pointed; however, if \(*\in M_{0}\) is a basepoint of a connected component, it holds that \(\zeta(\iota_{0}(*))=f_{0}(*)=\star\).
Thus, the images of the basepoints in the boundary of \(\Sigma\) all get mapped to the basepoint in \(BG\) by \(\zeta\). To the data of a \(1\)-morphism \((\Sigma,\zeta):((M_{0},m_{0}),f_{0})\to((M_{1},m_{1}),f_{1})\) we want to associate a \(G\)-triangulation of \(\Sigma\). We begin by choosing a finite set of points \(K\subset\Sigma\backslash\partial\Sigma\), which for \(\Sigma\) homeomorphic to a disk needs to be non-empty. Let \(\delta\coloneqq\iota_{0}(m_{0})\cup\iota_{1}(m_{1})\subset\partial\Sigma\) be the image of the basepoints and \(\mathsf{T}\) an ideal triangulation based at \(\delta\cup K\).

Figure 4. Due to the cyclicity condition (4.1) the \(G\)-labels \(g\) and \(g^{\prime}\) are uniquely fixed by the \(G\)-labels \(g_{1}\), \(g_{2}\), \(g_{3}\), \(g_{4}\).

Let \(b\subset\partial\Sigma\) be a connected component of the boundary. Assume that the circle which is mapped to \(b\) is labeled by \(g\in G\). By construction \(\zeta_{*}(b)\in\pi_{1}(BG,\star)\) is the loop corresponding to \(g\). Thus, \(G\)-labels of boundary edges in \(\mathsf{T}\) are uniquely fixed by the source and target objects of the \(1\)-morphism. Since all images of basepoints in \(\Sigma\) get mapped to \(\star\), the image of a non-boundary edge \(\mathbf{e}\in E^{or}(\mathsf{T})\) with both endpoints in \(\delta\) is an oriented loop in \(\pi_{1}(BG,\star)\), hence determines a group element \(g_{\mathbf{e}}\). The \(G\)-color of all other edges is determined up to certain _gauge transformations_. Let \(p=(\mathbf{e}_{1},\cdots,\mathbf{e}_{r})\) be an oriented edge path in \(\mathsf{T}\) with endpoints in \(\delta\). The path \(p\) gets mapped to a loop in \(BG\), based at \(\star\). Hence there exists \(g_{p}\in G\) such that \(\operatorname{Im}(\zeta|_{p})\simeq g_{p}\) holds. We can assume that only the first and last edge of the path have endpoints in \(\delta\). To each edge \(\mathbf{e}_{i}\in p\) we assign a group element \(g_{i}\) such that \[g_{1}g_{2}\cdots g_{r}=g_{p}\,. \tag{4.3}\] This coloring is not unique: we can change \(g_{i}\mapsto g_{i}h\) and \(g_{i+1}\mapsto h^{-1}g_{i+1}\), and the new coloring still satisfies (4.3). These are the so-called _gauge transformations_. A specific labeling of the oriented edges of \(\mathsf{T}\) with group elements is _compatible_ with \(\zeta\) if (4.3) holds for all possible edge paths with endpoints in \(\delta\). The set of all \(G\)-colorings of \(\mathsf{T}\) which are compatible with \(\zeta\) is denoted by \(\operatorname{Col}_{G}(\mathsf{T})\). A gauge transformation then is a map \[\lambda:M\to\operatorname{Maps}\left(G,\operatorname{End}\left(\operatorname{Col}_{G}(\mathsf{T})\right)\right), \tag{4.4}\] where for \(p\in M\) and \(g\in G\), the transformation \(\lambda_{g}(p)\) acts on a coloring by modifying the labels of all edges incident to \(p\) at their ends touching \(p\), in the manner of the local move \(g_{i}\mapsto g_{i}g\), \(g_{i+1}\mapsto g^{-1}g_{i+1}\) described above. Multiplication in \(G\) endows the set of gauge transformations with a group structure such that the gauge group acts transitively on \(\operatorname{Col}_{G}(\mathsf{T})\). To summarize: for any \(1\)-morphism \((\Sigma,\zeta)\) in \(G\mathcal{Bord}_{\star}(3,2,1)\) and any ideal triangulation \(\mathsf{T}\) of \(\Sigma\), based at the images of the basepoints and a possibly empty set \(M\), we get a \(G\)-triangulation, unique up to gauge transformation.
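For instance, for a two-edge path \(p=(\mathbf{e}_{1},\mathbf{e}_{2})\) with labels satisfying \(g_{1}g_{2}=g_{p}\), a gauge transformation at the common vertex produces \[(g_{1}h)(h^{-1}g_{2})=g_{1}g_{2}=g_{p}\,,\] so (4.3) is indeed preserved; only the holonomies along paths between points of \(\delta\) are gauge-invariant data.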
**Definition 4.8**.: Let \((\Sigma,\zeta):(M_{0},f_{0})\to(M_{1},f_{1})\) be a \(1\)-morphism in \(G\mathcal{Bord}_{\star}(3,2,1)\) and \(M\subset\Sigma\backslash\partial\Sigma\) a finite set of points. The _\(G\)-equivariant Ptolemy complex_ \(\mathcal{P}^{G}(\Sigma,\zeta,M)\) is a \(2\)-dimensional CW-complex defined as follows.

**0-cells:**: Vertices of \(\mathcal{P}^{G}(\Sigma,\zeta,M)\) are (ideal) \(G\)-triangulations induced by \(\zeta\).

**1-cells:**: There is an edge \(\mathsf{T}\to\mathsf{T}^{\prime}\) in \(\mathcal{P}^{G}(\Sigma,\zeta,M)\) for any \(G\)-flip and any gauge transformation.

**2-cells:**: Since the \(G\)-label for \(G\)-flips is uniquely determined, we lift the relations **P1**-**P3** of the Ptolemy-complex to \(\mathcal{P}^{G}(\Sigma,\zeta,M)\). This yields three relations **GP1**, **GP2** and **GP3**. In addition there is a mixed relation as well as two pure gauge relations.

**GP4:**: For \(\mathbf{e}\) an edge of a \(G\)-triangulation and \(v\) a vertex incident to \(\mathbf{e}\), the flip along \(\mathbf{e}\) and a gauge transformation at \(v\) commute. That is, there is a quadrilateral \(2\)-cell (see figure 5).

**GP5:**: Gauge transformations at different vertices commute, i.e. for vertices \(v\neq u\) of a \(G\)-triangulation, there are quadrilateral \(2\)-cells corresponding to \(\lambda(v)\lambda(u)=\lambda(u)\lambda(v)\).

**GP6:**: Finally we add triangular \(2\)-cells \(\lambda_{g}(v)\cdot\lambda_{h}(v)=\lambda_{gh}(v)\), implementing the gauge action.

Figure 5. Flip commutes with gauge transformation.

**Theorem 4.9**.: _The \(G\)-equivariant Ptolemy complex \(\mathcal{P}^{G}(\Sigma,\zeta,M)\) from Definition 4.8 is connected and simply connected._

We defer the proof of Theorem 4.9 to the appendix, since it doesn't offer any deeper insight for the main results of this paper.

## 5. **Bare String-Net Spaces and Cylinder Categories**

### A vector space for surfaces

We have set the stage for constructing a \(G\)-equivariant string-net space for \(1\)-morphisms. The main idea is to start with an ideal \(G\)-triangulation \(\mathsf{T}\) on \((\Sigma,\zeta)\) and define a string-net space relative to it by only allowing graphs transversal to \(\mathsf{T}\). We proceed by subsequently taking quotients implementing the rules of the graphical calculus for \(\mathcal{C}\) inside disks having different relative positions to \(\mathsf{T}\). This will yield string-net spaces \(\operatorname{SN}_{\mathsf{T}}^{\mathcal{C}}(\Sigma,\zeta)\) for any \(G\)-triangulation \(\mathsf{T}\). In order to get a space which solely depends on \((\Sigma,\zeta)\), we define for any edge of \(\mathcal{P}^{G}(\Sigma,\zeta)\) an isomorphism between string-net spaces. These maps will satisfy the relations **GP1**-**GP6** and we get a projective system of vector spaces. Taking the limit of the system will ultimately give the string-net space. At first sight, this might appear as an overly cumbersome way of defining a string-net space on a \(G\)-surface. In the usual string-net approach one just considers all embedded \(\mathcal{C}\)-colored graphs, for \(\mathcal{C}\) any spherical fusion category. However, there seems to be no way of deciding whether an arbitrary \(\mathcal{C}\)-colored, embedded graph on \(\Sigma\) is compatible with the \(G\)-structure on \(\Sigma\). This appears to be only possible if the graph is fine enough, meaning that it is at least isotopic to the \(1\)-skeleton of a CW-decomposition of \(\Sigma\). In other words, in the equivariant case string-net graphs need to be sensitive to the global topology of \(\Sigma\). Instead of arbitrary CW-decompositions we work with ideal triangulations, because there we have full control over the combinatorics involved. Without further ado we now spell out the details of the construction. We choose an arbitrary finite, but fixed set of points \(M\subset\Sigma\backslash\partial\Sigma\). Note that \(M\) can be empty except for the disk.
**Definition 5.1**.: Let \(\Gamma\) be an arbitrary embedded finite oriented graph in \(\Sigma\) and \(\mathcal{C}\) a \(G\)-graded fusion category. A _\(\mathcal{C}\)-coloring of \(\Gamma\)_ comprises two functions. The first, \[c:E^{\mathrm{or}}(\Gamma)\to\mathcal{C}_{0}^{\mathrm{hom}}\,, \tag{5.1}\] assigns to each oriented edge \((e,\mathrm{or})\) a homogeneous object of \(\mathcal{C}\), such that \(c(\mathbf{e})=c(\overline{\mathbf{e}})^{*}\). The second function is a map \[\phi:V(\Gamma)\to\mathcal{C}_{1}\,,\qquad v\mapsto\phi_{v}\in\mathcal{C}\left(\bigotimes_{\mathbf{e}\in H(v)}c(\mathbf{e})^{\epsilon_{\mathbf{e}}}\right), \tag{5.2}\] where \(\epsilon_{\mathbf{e}}=1\) if \(\mathbf{e}\) is oriented away from \(v\) and \(\epsilon_{\mathbf{e}}=-1\) if it is oriented towards \(v\). A negative exponent for an object \(c\in\mathcal{C}\) indicates its dual. In particular, via the grading map \(p:\mathcal{C}_{0}^{\mathrm{hom}}\to G\) we get a \(G\)-labeling \(g_{\Gamma}\) of \(\Gamma\) satisfying \(g_{\Gamma}(\overline{\mathbf{e}})=g_{\Gamma}(\mathbf{e})^{-1}\).

The graph and its \(G\)-labeling should be compatible with a \(G\)-triangulation in a sense we are about to define (cf. Definition 5.3).

**Definition 5.2**.: Let \(\mathsf{T}\in\mathcal{P}^{G}(\Sigma,\zeta,M)\). An embedded graph \(\Gamma\hookrightarrow\Sigma\) is _totally transversal_ to \(\mathsf{T}\) if \(V(\mathsf{T})\cap\Gamma=\emptyset=\mathsf{T}\cap V(\Gamma)\) and any edge of \(\mathsf{T}\) that is not on the boundary intersects at least one edge of \(\Gamma\). Furthermore, all intersections are transversal.

The obvious example of a graph transversal to an ideal triangulation is its dual trivalent fat graph. Let \(\Gamma\) be a \(\mathcal{C}\)-colored graph which is totally transversal to the underlying ideal triangulation of a \(G\)-triangulation \(\mathsf{T}\). Pick a representative in the isotopy class for \(\mathsf{T}\). For an edge \(\mathbf{e}\in E^{or}(\mathsf{T})\), consider the edges \(\{\mathbf{f}_{1},\cdots,\mathbf{f}_{n}\}\) of \(\Gamma\) intersecting \(\mathbf{e}\), as in figure 6. The edge \(\mathbf{f}_{i}\) has \(G\)-color \(g_{i}\). The orientation of \(\mathbf{e}\) gives a linear order on the set of intersecting edges. A transversal intersection can be positive or negative, depending on the orientation of the \(\mathbf{f}_{i}\); e.g. the intersection \(\mathbf{e}\cap\mathbf{f}_{1}\) is positive, whereas \(\mathbf{e}\cap\mathbf{f}_{n-1}\) is negative in figure 6. The linear order allows us to multiply the \(G\)-labels of the \(\mathbf{f}_{i}\)'s taking orientations into account. For the graph in figure 6 this gives \[G\ni m_{\mathbf{e}}=g_{1}\cdots g_{n-1}^{-1}g_{n}\,. \tag{5.3}\] It is easy to check that \(m_{\mathbf{e}}\) is well defined, i.e. independent of the chosen representative in the isotopy class for \(\mathsf{T}\).

**Definition 5.3**.: Let \(\Gamma\) be a \(\mathcal{C}\)-colored graph which is totally transversal to the underlying ideal triangulation of a \(G\)-triangulation \(\mathsf{T}\). Then \(\Gamma\) is _\(G\)-transversal_ to \(\mathsf{T}\) if \[m_{\mathbf{e}}=g(\mathbf{e}) \tag{5.4}\] holds for any edge of \(\mathsf{T}\). If the \(G\)-triangulation is clear from the context, we sometimes just say that an embedded graph is \(G\)-transversal.
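As a concrete instance of (5.3) and (5.4): if exactly three edges of \(\Gamma\) cross \(\mathbf{e}\), the first two positively with colors in \(\mathcal{C}_{g_{1}}\) and \(\mathcal{C}_{g_{2}}\) and the third negatively with color in \(\mathcal{C}_{g_{3}}\), then \[m_{\mathbf{e}}=g_{1}g_{2}g_{3}^{-1}\,,\] and \(G\)-transversality demands that this product equals the label \(g(\mathbf{e})\) of the triangulation edge.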
We are working on surfaces with boundary; therefore string-net spaces will depend on a choice of boundary condition, or boundary value.

**Definition 5.4**.:

1. A _boundary value_ for a surface \((\Sigma,\zeta)\) is a disjoint union of finitely many points \(B\subset\partial\Sigma\) together with a map \(\mathbf{B}:B\to\mathcal{C}_{0}^{hom}\).
2. The _boundary value_ of a \(G\)-transversal graph \(\Gamma\) on \(\Sigma\) is the disjoint union of intersection points \(B_{\Gamma}\) of the graph with \(\partial\Sigma\), together with the map \(\mathbf{B}:B_{\Gamma}\to\mathcal{C}_{0}^{hom}\) mapping an intersection point to the \(\mathcal{C}\)-color of its corresponding edge.

_Remark 5.5_.: Note that one can state the definition of a boundary value for surfaces without the map \(\zeta\) in the exact same fashion. The map \(\zeta\) enters the definition only insofar as it restricts the possible boundary values, since \(G\)-labels of edges have to be compatible with \(\zeta\).

**Definition 5.6**.: Let \(\operatorname{VGraph}^{\mathcal{C}}_{G\mathsf{T}}(\Sigma,\zeta,M;\mathbf{B})\) be the \(\mathbb{K}\)-vector space freely generated by all graphs which are \(G\)-transversal to \(\mathsf{T}\) with boundary value \(\mathbf{B}\).

Figure 6. In the color version, the orange line represents an edge of \(\mathsf{T}\) with \(G\)-label \(g\). The black lines are edges of \((\Gamma,c)\).

_Remark 5.7_.: The vector space \(\operatorname{VGraph}^{\mathcal{C}}_{G\mathsf{T}}(\Sigma,\zeta,M;\mathbf{B})\) is huge; e.g. isotopic graphs with the same \(\mathcal{C}\)-coloring are considered as different graphs so far. Isotopy invariance will follow only after requiring the graphical calculus for \(\mathcal{C}\) to hold locally on disks.

In a first step, we consider embedded closed disks \(D\hookrightarrow\Sigma\backslash\mathsf{T}\). Given a \(G\)-transversal graph \(\Gamma\), the boundary of the disk \(D\) is required to intersect edges of \(\Gamma\) transversally and mustn't intersect \(\Gamma\) at vertices of \(\Gamma\). As usual we get a cyclic set of objects \(\{c_{i}\}\) from the \(\mathcal{C}\)-colors of the edges intersecting \(\partial D\) and a linear evaluation map [1, Theorem 2.3] \[\langle\bullet\rangle_{D}:\operatorname{VGraph}^{\mathcal{C}}_{G\mathsf{T}}(\Sigma,\zeta,M;\mathbf{B})\to\mathcal{C}\left(\bigotimes_{i}c_{i}^{\epsilon_{i}}\right), \tag{5.5}\] which is defined on vectors meeting the transversality requirements with respect to \(D\). The sign conventions are obvious from figure 7.

**Definition 5.8**.: Let \(D\hookrightarrow\Sigma\backslash\mathsf{T}\) be an embedded disk. An element \(\Gamma\coloneqq\sum_{i}x_{i}\Gamma_{i}\in\operatorname{VGraph}^{\mathcal{C}}_{G\mathsf{T}}(\Sigma,\zeta,M;\mathbf{B})\) is _null with respect to \(D\)_ if

* all \(\Gamma_{i}\)'s are transversal to \(D\),
* \(\Gamma_{i}|_{\Sigma\backslash D}=\Gamma_{j}|_{\Sigma\backslash D}\) for all \(i,j\),
* \(\langle\Gamma\rangle_{D}=0\).

The _vector space of \(\mathsf{T}\)-disk null graphs_ \(\operatorname{NGraph}^{\mathcal{C}}_{G\mathsf{T}}(\Sigma,\zeta,M;\mathbf{B})\) is the subspace of \(\operatorname{VGraph}^{\mathcal{C}}_{G\mathsf{T}}(\Sigma,\zeta,M;\mathbf{B})\) spanned by all null graphs for all possible disks \(D\hookrightarrow\Sigma\backslash\mathsf{T}\). The quotient of \(\operatorname{VGraph}^{\mathcal{C}}_{G\mathsf{T}}(\Sigma,\zeta,M;\mathbf{B})\) by the vector space of \(\mathsf{T}\)-disk null graphs is denoted \[\operatorname{SN}^{\prime\prime}_{\mathsf{T},\mathcal{C}}(\Sigma,M;\mathbf{B})\coloneqq\frac{\operatorname{VGraph}^{\mathcal{C}}_{G\mathsf{T}}(\Sigma,\zeta,M;\mathbf{B})}{\operatorname{NGraph}^{\mathcal{C}}_{G\mathsf{T}}(\Sigma,\zeta,M;\mathbf{B})}\,.
\tag{5.6}\]

_Remark 5.9_.: Due to the \(G\)-grading of \(\mathcal{C}\), non-zero vectors in \(\operatorname{SN}^{\prime\prime}_{\mathsf{T},\mathcal{C}}(\Sigma,M;\mathbf{B})\) satisfy a cyclicity condition at all of their vertices. That is, given \(v\in V(\Gamma)\), the \(\mathcal{C}\)-colors of its incident edges \(\{\mathbf{e}_{i}\}=E^{\mathrm{or}}(v)\) have to satisfy \[p\left(c\left(\mathbf{e}_{1}\right)^{\epsilon_{1}}\right)\cdots p\left(c\left(\mathbf{e}_{n}\right)^{\epsilon_{n}}\right)=e\,. \tag{5.7}\] There is a sign convention involving the orientation of the incident edges, which can be easily deduced from figure 8 and is similar to the one used in figure 7.

The space \(\operatorname{SN}^{\prime\prime}_{\mathsf{T},\mathcal{C}}(\Sigma,M;\mathbf{B})\) only partly achieves the goal of globalizing the graphical calculus for \(\mathcal{C}\) to \(\Sigma\). So far we have imposed local relations inside \(2\)-faces of \(\mathsf{T}\). In order to define local relations everywhere on \(\Sigma\) we take a further quotient. This will allow for relations inside disks intersecting edges of the ideal triangulation \(\mathsf{T}\). We introduce an equivalence relation implementing the isotopy of a vertex of \(\Gamma\) through an edge of \(\mathsf{T}\).

**Definition 5.10**.: Let \(\Gamma,\Gamma^{\prime}\in\operatorname{SN}^{\prime\prime}_{\mathsf{T},\mathcal{C}}(\Sigma,M;\mathbf{B})\). Then \(\Gamma\sim_{\mathbf{f}}\Gamma^{\prime}\) if and only if the two graphs are related by the move shown in figure 9.

Figure 8. Cyclicity condition for the color at a vertex.

Figure 9. The two string-nets \(\Gamma,\Gamma^{\prime}\) agree outside of the neighborhood shown here. The move consists of isotoping the string-net vertex through the edge \(\mathbf{f}\) of \(\mathsf{T}\).

Note that due to the cyclicity condition (5.7), the relation \(\sim_{\mathbf{f}}\) is well defined. We take the quotient of \(\operatorname{SN}^{\prime\prime}_{\mathsf{T},\mathcal{C}}(\Sigma,M;\mathbf{B})\) by \(\sim_{\mathbf{f}}\) for all edges \(\mathbf{f}\) of \(\mathsf{T}\) and denote it \[\operatorname{SN}^{\prime}_{\mathsf{T},\mathcal{C}}(\Sigma,M;\mathbf{B})\,. \tag{5.8}\] In \(\operatorname{SN}^{\prime}_{\mathsf{T},\mathcal{C}}(\Sigma,M;\mathbf{B})\) local relations hold inside all disks not meeting the images of the marked points. As a consequence, boundary points of string-nets cannot be moved. Finally, the graphs still depend on the chosen \(G\)-triangulation \(\mathsf{T}\). We get rid of this dependence next. Let \(\Delta\) be a \(2\)-face of \(\mathsf{T}\) and take its counterclockwise oriented boundary \(\{\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\}\) with \(G\)-colors \(\{g_{1},g_{2},g_{3}\}\). The \(2\)-face \(\Delta\) may contain a boundary edge, which we assume to be \(\mathbf{e}_{1}\). We write \(\mathbf{B}_{\Delta}\) for the boundary value restricted to that boundary component. So for a \(2\)-face with boundary edge, we consider the vector space \[\operatorname{Hom}_{\Delta}(\mathcal{C})\coloneqq\bigoplus_{j\in I_{g_{2}},\;k\in I_{g_{3}}}\operatorname{Hom}_{\mathcal{C}}(\mathbb{1},\mathbf{B}_{\Delta}\otimes j\otimes k)\,. \tag{5.9}\] For all other \(2\)-faces we set \[\operatorname{Hom}_{\Delta}(\mathcal{C})\coloneqq\bigoplus_{i\in I_{g_{1}},\,j\in I_{g_{2}},\,k\in I_{g_{3}}}\operatorname{Hom}_{\mathcal{C}}(\mathbb{1},i\otimes j\otimes k)
\tag{5.10}\]

and define a vector space \[H^{\mathcal{C}}_{\mathsf{T}}(\Sigma;\mathbf{B})\coloneqq\bigotimes_{\Delta\subset\mathsf{T}^{[2]}}\operatorname{Hom}_{\Delta}(\mathcal{C})\,, \tag{5.11}\] where \(\mathsf{T}^{[2]}\) denotes the set of \(2\)-faces of \(\mathsf{T}\).

**Lemma 5.11**.: _There is an isomorphism_ \[\Psi:H^{\mathcal{C}}_{\mathsf{T}}(\Sigma;\mathbf{B})\to\operatorname{SN}^{\prime}_{\mathsf{T},\mathcal{C}}(\Sigma,M;\mathbf{B}) \tag{5.12}\] _which maps an element \(\bigotimes_{\Delta\subset\mathsf{T}^{[2]}}\phi_{\Delta}\in H^{\mathcal{C}}_{\mathsf{T}}(\Sigma;\mathbf{B})\) to the equivalence class of the dual fat graph \(\Gamma\) with boundary value \(\mathbf{B}_{\Gamma}=\mathbf{B}\) whose vertices are colored by the maps \(\phi_{\Delta}\)._

Among other things, Lemma 5.11 tells us that \(\operatorname{SN}^{\prime}_{\mathsf{T},\mathcal{C}}(\Sigma,M;\mathbf{B})\) has a basis in terms of equivalence classes of fat graphs dual to \(\mathsf{T}\), with edges meeting \(\partial\Sigma\). Internal edges of the basis elements are colored with simple objects, and their vertices are colored with basis elements of the corresponding hom-spaces \(\operatorname{Hom}_{\Delta}(\mathcal{C})\) via the map \(\Psi\). We call this the _fat graph basis for \(\mathsf{T}\)_. The proof of Lemma 5.11 is very similar to the proof of [13, Lemma 5.3] and we leave it to the reader to adapt it to the present situation. We have almost arrived at a sensible definition of a string-net space. A remaining issue is that edges of a string-net cannot be moved through the marked points \(M\). This issue can be approached by cloaking marked points, which is defined by projecting to a subspace of \(\operatorname{SN}^{\prime}_{\mathsf{T},\mathcal{C}}(\Sigma,M;\mathbf{B})\). Let \[\Pi_{g}:\operatorname{SN}^{\prime}_{\mathsf{T},\mathcal{C}}(\Sigma,M;\mathbf{B})\to\operatorname{SN}^{\prime}_{\mathsf{T},\mathcal{C}}(\Sigma,M;\mathbf{B}) \tag{5.13}\] be the map acting in a neighborhood of \(M\) by figure 10. For the neutral element \(e\in G\), we simply write \(\Pi\coloneqq\Pi_{e}\) and depict it with a purple circle without group label.

Figure 10. Action of the map \(\Pi_{g}\) on a vertex \(s\) of an ideal triangulation.
However, the color of the cloaking circle might change. where the map is the identity outside the open neighborhood of the edge \(\mathbf{f}\) shown in red in the picture. In case there is a boundary edge, we use first use the completeness relation to decompose the \(\mathcal{C}\)-colored edges ending on \(\partial\Sigma\) into a sum of simple colored edges and then perform the same move as in the previous case, followed by the inverse of the completeness relation. To a gauge transformation \(\mathsf{T}\xrightarrow{\lambda_{g}(v)}\mathsf{T}^{\prime}\), we associate the map \(G_{g}(v):\operatorname{SN}_{\mathsf{T}}^{\mathcal{C}}(\Sigma,M;\mathbf{B}) \to\operatorname{SN}_{\mathsf{T}^{\prime}}^{\mathcal{C}}(\Sigma,M;\mathbf{B})\) adding an \(I_{g}\)-colored cloaking circle around the internal vertex \(v\) of the \(G\)-triangulation \(T\). **Theorem 5.12**.: _There is a functor_ \[\operatorname{SN}^{\mathcal{C}}:\Pi_{1}\left(\mathcal{G}^{G}( \Sigma,\zeta)\right) \to\mathcal{V}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! **Definition 5.13**.: Given a \(G\)-surface \((\Sigma,\zeta)\), a set of marked points \(M\) and a \(G\)-graded spherical fusion category \(\mathcal{C}\), the _based bare string-net space_ is defined by \[\mathrm{SN}^{\mathcal{C}}(\Sigma,M;\mathbf{B})\coloneqq\underset{\Pi_{1}\left( \mathcal{C}^{\theta}(\Sigma,\zeta)\right)}{\lim}\mathrm{SN}^{\mathcal{C}}_{ \mathsf{T}}(\Sigma,M;\mathbf{B}) \tag{5.17}\] _Remark 5.14_.: Instead of taking the limit over the diagram, we could have instead taken its colimit. The resulting vector spaces are isomorphic, since the diagram involves only linear isomorphisms. For the same reason with have a canonical isomorphism \[\mathrm{SN}^{\mathcal{C}}(\Sigma,M;\mathbf{B})\cong\mathrm{SN}^{\mathcal{C}}_{ \mathsf{T}}(\Sigma,M;\mathbf{B}) \tag{5.18}\] for any \(G\)-triangulation \(\mathsf{T}\). The bare string-net space based at \(M\) only depends on the \(G\)-surface \((\Sigma,\zeta)\), the set of marked points \(M\) and \(\mathcal{C}\), but not on the choice of a \(G\)-triangulation based at \(\delta\cup M\). The set \(\delta\) will be uniquely determined by the input datum of a \(1\)-morphism. Thus, the only arbitrary choice left is the set \(M\) of marked points. However, there is a distinguished isomorphism between based bare string-net spaces for different choices of marked points. We show this by showing that for all based string-net spaces, there is a specific isomorphism to the based bare string-net space with \(M=\emptyset\), or \(M\) a single point in case of a disk. 
For a given \(M\), we pick a \(G\)-triangulation and from Lemma 5.11 we get that \(\mathrm{SN}^{\mathcal{C}}(\Sigma,M;\mathbf{B})\) has a fat graph basis with additional cloaking circles around elements in \(M\). To make arguments more palpable, we discuss the specific case of a genus \(2\) surface \(\Sigma\) with \(4\) boundary components. We choose an arbitrary point \(\star\in\Sigma\backslash M\) and a set of generators in \(\pi_{1}(\Sigma\backslash M,\,\star)\) For the dual fat graph \(\Gamma\) of the ideal triangulation \(\mathsf{T}\), we pick a maximal tree \(t\) and denote \(\ell\coloneqq E(\Gamma)-E(t)\). Since the fat graph \(\Gamma\) generates \(\pi_{1}(\Sigma\backslash M,\,\star)\), we can in fact choose \(t\) in such a way, that upon collapsing \(t\) to \(\star\), the homotopy classes of the remaining edges \(\ell\) agree with the chosen generators of \(\pi_{1}(\Sigma\backslash M,\,\star)\). This means, that for any of the generators \(\gamma\), there exists an element \(l_{\gamma}\in\ell\), such that. after collapsing \(t\) to \(\star\) it holds that \(\left[l_{\gamma}\right]=\gamma\). The tree \(t\) has a disk-shaped neighborhood. Using local relations on this disk, we can replace the elements of the fat graph basis with string-nets supported on the contracted graphs. These string-nets still constitute a basis for the based bare string-net space. We use the completeness relation, possibly followed by a gauge transformation isomorphism, to map these new basis elements to string-nets where the points of \(M\) are solely surrounded by cloaking circles After further contracting the graph, we obtain where we can restrict to colorings by simple objects. Thus for any two choices of marked points \(M_{1}\), \(M_{2}\), by identifying the basis elements, we have a distinguished isomorphism between string-net spaces defined in terms of \(G\)-triangulations based at \(\delta\cup M_{1}\) and \(\delta\cup M_{2}\). The isomorphisms give a projective system of vector spaces and we can take its projective limit, which we denote by \(\operatorname{SN}^{\text{\tiny\textregistered}}(\Sigma;\mathbf{B})\). **Definition 5.15**.: The vector space \(\operatorname{SN}^{\text{\tiny\textregistered}}(\Sigma;\mathbf{B})\) is called _bare string-net space_. To summarize our construction, we first needed a combinatorial replacement for a surface \(\Sigma\) with map \(\zeta:\Sigma\to BG\), in order to give a sensible definition for a string-net graph in \(\Sigma\) to respect the \(G\)-structure. This point was settled by using \(G\)-triangulations. Once we came up with a definition of a pre-string-net space depending on a given \(G\)-triangulation, the whole rest of the section was devoted to build up a definition depending only on \(\Sigma\) and \(\zeta\), but not on the combinatorial model. Taking limits over projective systems only involving isomorphisms has the advantage that, whenever we want to work with the bare string-net space, we can just pick our favorite \(G\)-triangulation and compute everything relative to it. We will make use of this fact in section 6.2, when actually computing string-net spaces. _Remark 5.16_.: Allowing an additional set \(M\) of vertices in an ideal triangulation will simplify the description of the behavior of string-net spaces under gluing of \(G\)-surfaces described in section 6.1. The set of \(G\)-triangulations based on boundary points of surfaces is not preserved under gluing of surfaces since gluing gives a new internal vertex. 
Hence, we had to allow more general \(G\)-triangulations from the start. ### Cylinder Category Since we want to construct the \((2,1)\)-part of a functor \[Z:G^{\mathcal{B}\mathcal{B}\mathcal{M}}_{\star}(3,2,1)\to\mathcal{B}i \mathcal{M}\mathcal{M}\mathcal{L}_{\mathbb{K}} \tag{5.19}\] a surface \(\Sigma\) with \(n\)-boundary components should get mapped to a profunctor (5.20) \[Z_{\Sigma}:\underbrace{\mathcal{C}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! **Proposition 5.18**.: _Let \(\mathcal{Cyl}_{g}\) be the Karoubi-envelope of \(\widehat{\mathcal{Cyl}}_{g}\). There is an equivalence_ \[\mathcal{Cyl}_{g}\simeq\mathsf{Z}_{G}(\mathcal{C})_{g}\quad. \tag{5.22}\] Proof.: An object in \(\mathcal{Cyl}_{g}\) is a pair \(((\mathbf{m},\mathbf{c}),p)\) of an object in \(\widehat{\mathcal{Cyl}}_{g}\) and an idempotent \(p\) in \(\widehat{\mathcal{Cyl}}_{g}\). We pick a \(G\)-triangulation of the cylinder without internal marked points. It is not hard to see, that \(p\) can be represented by a graph shown in red in figure 12. with \(c,d\in\mathcal{C}_{g}\) the signed tensor products of objects \(\mathbf{c}\), \(\mathbf{d}\) and \(x\in\mathcal{C}_{e}\). We define a functor \(F:\mathcal{Cyl}_{g}\to\mathsf{Z}_{G}(\mathcal{C})_{g}\), whose action on objects is given by \[F:((\mathbf{m},\mathbf{c}),p)\longmapsto(I(c),F(p)) \tag{5.23}\] where \(I\) is the induction functor (cf. (2.11)) and \(F(p)\) is the idempotent shown in the following figure: where \(f\) is the morphism in the representation of \(p\) given above. Its action on morphisms is defined in the obvious way. In the other direction, we define a functor \(K:\mathsf{Z}_{G}(\mathcal{C})_{g}\longrightarrow\mathcal{Cyl}_{g}\) mapping \(Z\in\mathsf{Z}_{G}(\mathcal{C})_{g}\) to the underlying object in \(\mathcal{C}_{g}\) and the idempotent given by Figure 12. In red we show an ideal triangulation of the cylinder with in- and outgoing boundary \(S^{1}_{g}\). The vertical red lines are along the “radial direction” of the cylinder. i.e. the crossing is the half-braiding in \(\mathsf{Z}_{G}(\mathcal{C})_{g}\). The proof, that the functors \(F\), \(K\) give an equivalence of categories is equivalent to the one given in [11, Theorem 6.4]. ## 6. \(G\)-equivariant String-Nets ### Construction of \(G\)-equivariant String-Net Space In this section, we demonstrate how to extract structure of the category \(\mathsf{Z}_{G}(\mathcal{C})\) from \(G\)-equivariant string-nets. 
To this end, we have to be able to assign to each connected component of the boundary of \(\Sigma\) an object \(Z\in\mathsf{Z}_{G}(\mathcal{C})_{g}\). Given \(Z\), consider the \(G\)-equivariant string-net given by the cylinder where the purple line is a cloaking circle. This is exactly the idempotent of figure 13. Gluing such a cylinder to a boundary component defines an idempotent on the corresponding bare string-net space. Given a boundary value \(Z_{i}\in\mathsf{Z}_{G}(\mathcal{C})\) for each connected component of the boundary, we define the string-net space \(\operatorname{KSN}^{\mathcal{C}}(\Sigma;\mathbf{Z})\coloneqq\operatorname{ KSN}^{\mathcal{C}}(\Sigma;Z_{1},\dots,Z_{n})\) for these boundary values as the common image of all these idempotents. Figure 13. Part of the functor \(K\) from \(G\)-center to the cylinder category. The cylinder category together with the string-net space should be the \((2,1)\)-part of functor \[\operatorname{KSN}^{\mathbb{G}}:G\mathcal{Bord}\,_{\star}(3,2,1)\to \mathcal{B}\mathcal{B}\mathcal{M}\mathcal{O}\mathcal{L}\quad. \tag{6.1}\] Thus, we have to discuss how string-net spaces behave under gluing of surfaces. Let \(\Sigma\), \(\Sigma^{\prime}\) be composable \(1\)-morphisms in \(G\mathcal{Bord}\,_{\star}(3,2,1)\), then we need to show \[\operatorname{KSN}^{\mathbb{G}}(\Sigma\circ\Sigma^{\prime};\mathbf{Z}, \mathbf{Z}^{\prime})\simeq\int^{\mathcal{Z}\mathcal{Z}\mathcal{Z}\mathcal{Z} \mathcal{Z}}\operatorname{KSN}^{\mathbb{G}}(\Sigma;\mathbf{Z},Z)\otimes \operatorname{KSN}^{\mathbb{G}}(\Sigma^{\prime};Z^{\ast},\mathbf{Z}^{\prime})\,, \tag{6.2}\] where \(Z\), \(Z^{\ast}\in\mathbb{Z}_{G}(\mathbb{G})\) are the boundary values of the boundary components at which \(\Sigma\), \(\Sigma^{\prime}\) are glued. Let \(\Sigma\), \(\Sigma^{\prime}\) be equipped with \(G\)-triangulations \(\mathsf{T}\), \(\mathsf{T}^{\prime}\). The glued surface \(\Sigma\circ\Sigma^{\prime}\) inherits a \(G\)-triangulation, where the boundary vertices of the individual surfaces are mapped to new internal vertices. For \(\Sigma\) we pick a \(G\)-triangulation which looks around the gluing boundary like the \(G\)-triangulation shown in figure 13. Using the chosen \(G\)-triangulation on an annular neighborhood around the gluing boundary, the proof of (6.2) is the same as in the non-equivariant case, c.f. [14, section 5.2]. The alert reader may have noticed, that we haven't defined a string-net space on surfaces without boundary components. However, we can consider \((\Sigma^{\prime},\zeta^{\prime})\), a surface with a single boundary component \(b\) and \(\zeta|_{b}=e\in G\) the neutral element. Furthermore, \((\Sigma,\zeta)\) is the surface obtained from \((\Sigma^{\prime},\zeta^{\prime})\) by gluing a disk \(D\) at \(b\). Note that all closed \(G\)-surfaces can be obtained in this way and we simply define \[\operatorname{KSN}^{\mathbb{G}}(\Sigma)\coloneqq\int^{\mathcal{Z}\in \mathbb{Z}_{G}(\mathbb{G})_{\kappa}}\operatorname{KSN}^{\mathbb{G}}(D;Z) \otimes\operatorname{KSN}^{\mathbb{G}}(\Sigma;Z^{\ast})\,. 
We expect that, as in the non-equivariant case, the maps
\[\begin{split}\mathcal{Cyl}&:(S^{1},g)\longmapsto\mathcal{Cyl}_{g}\\ \operatorname{KSN}^{\mathcal{C}}&:(\Sigma,\zeta)\longmapsto\operatorname{KSN}^{\mathcal{C}}(\Sigma;-)\end{split} \tag{6.4}\]
assemble into the \((2,1)\)-part of the functor (6.1).

Figure 14 shows a basic string-net on the cylinder, where \(j\in I_{h}(\mathcal{C})\) for any \(h\in G\). We denote the cylinder with lower/upper boundary labeled by \(g\)/\(h^{-1}gh\) by \({}_{g}C_{h^{-1}gh}\). From now on, we will not show the ideal triangulation in the following arguments: pictures get overloaded fast, and besides establishing a useful presentation for string-nets, an ideal triangulation does not serve any further purpose.

**Proposition 6.1**.: _For \(X_{1}\in\mathsf{Z}_{G}(\mathcal{C})_{g}\) and \(X_{2}\in\mathsf{Z}_{G}(\mathcal{C})_{h^{-1}gh}\) we have_
\[\operatorname{KSN}^{\mathcal{C}}\left({}_{g}C_{h^{-1}gh};X_{1},X_{2}\right)\simeq\operatorname{Hom}_{\mathsf{Z}_{G}(\mathcal{C})}(\phi_{h}(X_{1}),X_{2})\,, \tag{6.5}\]
_where \(\phi_{h}\) is the \(G\)-crossing on \(\mathsf{Z}_{G}(\mathcal{C})\) from Proposition 2.3. In this way, \(G\)-equivariant string-nets encode the \(G\)-action on \(\mathsf{Z}_{G}(\mathcal{C})\)._

Figure 14. Basic string-net on a cylinder.

Proof.: We can use the lower cloaking circle in figure 14 and the completeness relation to bring the \(j\)-labeled line and the other cloaking circle to the front:
\[\sum_{i\in I_{h}}\frac{d_{i}}{D} \tag{6.6}\]
This also changes the color of the lower cloaking circle. The lower three consecutive undercrossings in (6.6) constitute half-braidings \(\gamma_{X_{1},i\otimes m_{0}\otimes j^{\prime}}\), where \(m_{0}\in I_{e}(\mathcal{C})\) is a simple object in the neutral component of \(\mathcal{C}\) coming from the cloaking circle in the middle. We define linear maps
\[F:\operatorname{KSN}^{\mathcal{C}}\left({}_{g}C_{h^{-1}gh};X_{1},X_{2}\right)\longrightarrow\operatorname{Hom}_{\mathsf{Z}_{G}(\mathcal{C})}(\phi_{h}(X_{1}),X_{2})\]
and \(G\) in the opposite direction, given graphically by
\[\sum_{i\in I_{h}}\frac{d_{i}}{D}\]
We claim that the two maps are inverse to each other.
Applying \(G\circ F\) to the string-net (6.6) yields
\[\sum_{i,k\in I_{h}}\frac{d_{i}}{D}\]
For the other composition, let \(f^{\prime}\in\operatorname{Hom}_{\mathsf{Z}_{G}(\mathcal{C})}(\phi_{h}(X_{1}),X_{2})\); then
\[F\circ G(f^{\prime})=\sum_{i,j\in I_{h}}\frac{d_{j}}{D}\]
which a chain of graphical equalities on \(\phi_{h}(X_{1})\) reduces back to \(f^{\prime}\).

Proof.: The proof is very similar to the previous one. We start with an ideal \(G\)-triangulation on a pair of pants. In black, we also include the dual uni-trivalent fat graph. We pick a maximal subtree in the fat graph and collapse the corresponding string-net. Note that, due to the \(G\)-coloring of the ideal triangulation, it holds that \(i,j\in I_{e}\). Using the completeness relation in \(\mathcal{C}\), we get a string-net which has an expansion in terms of basic string-nets. From this presentation of the string-net, the isomorphism is obvious.

#### 6.2.3. Higher Genus Surfaces

As an example, we consider \(\Sigma\), a surface of genus 2 with three boundary components. We pick a \(G\)-triangulation for \(\Sigma\) with no internal vertices; in the figure, the \(G\)-triangulation is shown in red. Its bounding octagon gives a polygonal decomposition of a genus 2 surface, and the edges of the polygon are labeled with group elements \(\alpha,\beta,\gamma\) and \(\delta\). Boundary edges have \(G\)-labels \(b_{1}\), \(b_{2}\) and \(b_{3}\). In black we show a string-net on the surface, where we have already collapsed all planar graphs inside faces of the \(G\)-triangulation to one-vertex graphs with labels \(\phi_{i}\). The edges labeled by \(\alpha,\beta,\gamma,\delta\) and \(b_{1},b_{2},b_{3}\) generate the first homology group of \(\Sigma\). By Lemma 5.11, we have a basis for \(\operatorname{KSN}^{\mathcal{C}}(\Sigma;\mathbf{B})\) spanned by equivalence classes of the above graphs, with edges labeled by compatible simple objects. We pick an arbitrary but fixed maximal tree \(T\) in the fat graph underlying the string-net. Such a tree has to lie inside a disk, and we can collapse all its edges using the local relations. This yields a basis of \(\operatorname{KSN}^{\mathcal{C}}(\Sigma;\mathbf{B})\) in terms of (equivalence classes of) one-vertex graphs, where the \(i_{n}\) are simple objects. We will not show the \(G\)-triangulation anymore from now on. We can isotope the cloaking circle around the first boundary component to run parallel to the fat graph. Using the completeness relation several times, the resulting string-net, drawn on \(\Sigma\), can be brought into the form
\[\sum_{\begin{subarray}{c}i\in I_{e},\\ m_{1},\ldots,m_{6}\end{subarray}}\frac{d_{i}}{D}d_{m_{1}}d_{m_{2}}d_{m_{3}}d_{m_{4}}d_{m_{5}}d_{m_{6}}\]
Note that \(\zeta\) fixes the \(G\)-color of a \(G\)-triangulation with no internal marked points uniquely. By collapsing a maximal tree, we get a fixed \(G\)-color for the edges of the one-vertex graph. Since the cyclicity condition on vertices is preserved by the local moves, this coloring satisfies
\[hb_{3}h^{-1}b_{1}gb_{2}g^{-1}\gamma\delta\gamma^{-1}\delta^{-1}\alpha\beta\alpha^{-1}\beta^{-1}=e\quad, \tag{6.8}\]
where \(h\) is the \(G\)-color of \(i_{8}\) and \(g\) the one of \(i_{5}\).
By contracting the maximal tree \(T\) of the fat graph, the non-contracted edges constitute generators of \(\pi_{1}(\Sigma)\) based at the single vertex. Choosing another maximal tree, we get a different set of free generators. However, since the \(G\)-color of the fat graph was uniquely determined, the \(G\)-color of the generators is also uniquely determined in both cases; it depends solely on the map \(\zeta\). For the tree chosen here, we define an object
\[c_{X_{3},X_{1},X_{2},\mathbf{m}}\coloneqq\phi_{h}(X_{3})\otimes X_{1}\otimes\phi_{g}(X_{2})\otimes m_{5}\otimes m_{4}\otimes m_{5}^{*}\otimes m_{4}^{*}\otimes m_{3}\otimes m_{2}\otimes m_{3}^{*}\otimes m_{2}^{*}\,, \tag{6.9}\]
where \(m_{5}\in I_{\beta}\), \(m_{4}\in I_{\alpha}\), \(m_{3}\in I_{\delta}\), \(m_{2}\in I_{\gamma}\) and \(X_{i}\in\mathsf{Z}_{G}(\mathcal{C})_{b_{i}}\) are the boundary values. The object
\[c\coloneqq\bigoplus_{\mathbf{m}}c_{X_{3},X_{1},X_{2},\mathbf{m}} \tag{6.10}\]
has a natural half-braiding
\[\sum_{\begin{subarray}{c}m_{2},m_{3},\\ m_{4},m_{5}\end{subarray}}d_{m_{2}}d_{m_{3}}d_{m_{4}}d_{m_{5}}\]
and thus can be seen as an object \((c,\sigma)\in\mathsf{Z}_{G}(\mathcal{C})\) ([TV20, section 10.4]). The computation is now similar to the one for the cylinder. We define a linear map
\[\operatorname{Hom}_{\mathsf{Z}_{G}(\mathcal{C})}(\mathbb{1},c)\longrightarrow\operatorname{KSN}^{\mathcal{C}}(\Sigma;X_{1},X_{2},X_{3})\]
which acts on \(\Upsilon\in\operatorname{Hom}_{\mathsf{Z}_{G}(\mathcal{C})}(\mathbb{1},c)\) by inserting it at the vertex of the one-vertex graph, and a linear map in the other direction,
\[\operatorname{KSN}^{\mathcal{C}}(\Sigma;X_{1},X_{2},X_{3})\longrightarrow\operatorname{Hom}_{\mathsf{Z}_{G}(\mathcal{C})}(\mathbb{1},c)\,,\]
given by
\[\sum_{\begin{subarray}{c}i\in I_{e},\\ m_{1},\dots,m_{6}\end{subarray}}\frac{d_{i}}{D}d_{m_{1}}d_{m_{2}}d_{m_{3}}d_{m_{4}}d_{m_{5}}d_{m_{6}}\]

## Appendix A Proof of Theorem 4.9

The theorem will follow from [1, Proposition 6.2], which we quickly state for the reader's convenience.

**Theorem A.1**.: _Given two \(2\)-dimensional CW-complexes \(F\), \(B\) and a map \(\pi:F^{[1]}\to B^{[1]}\) of their \(1\)-skeletons, such that:_
1. _For any vertex \(v\) and edge \(e\) in \(B\), there exist a vertex \(v^{\prime}\) and an edge \(e^{\prime}\) with \(\pi(v^{\prime})=v\) and \(\pi(e^{\prime})=e\). Furthermore, for \(v_{1}\xrightarrow{e}v_{2}\) an edge in \(B\) and any \(v^{\prime}_{1}\in\pi^{-1}(v_{1})\), there exists an edge \(v^{\prime}_{1}\xrightarrow{e^{\prime}}v^{\prime}_{2}\) such that \(\pi(e^{\prime})=e\)._
2. _\(B\) is connected and simply connected._
3. _There exists a vertex \(v\in B\) such that \(\pi^{-1}(v)\) is connected and simply connected._
4. _Given an edge \(v_{1}\xrightarrow{e}v_{2}\) in \(B\), an edge \(v^{\prime}_{1}\xrightarrow{f_{1}}v^{\prime\prime}_{1}\) in \(\pi^{-1}(v_{1})\) and two lifts \(v^{\prime}_{1}\xrightarrow{e^{\prime}}v^{\prime}_{2}\), \(v^{\prime\prime}_{1}\xrightarrow{e^{\prime\prime}}v^{\prime\prime}_{2}\) of \(e\), there exists an edge \(v^{\prime}_{2}\xrightarrow{f_{2}}v^{\prime\prime}_{2}\) in \(\pi^{-1}(v_{2})\) such that the resulting square is contractible in \(F\)._
5. _The boundary \(\partial C\) of any \(2\)-cell in \(B\) has a contractible lift in \(F\)._

_Then the complex \(F\) is connected and simply connected._

Proof of Theorem 4.9.: The proof consists of checking points i)-v) of Theorem A.1 for the forgetful map \(\pi:\mathcal{P}^{G}(\Sigma,\zeta,M)^{[1]}\to\mathcal{P}(\Sigma,M)^{[1]}\), which forgets the \(G\)-labels of the edges, maps \(G\)-flips to flips of the underlying ideal triangulation, and maps gauge transformations to the identity. Note that the fiber of \(\pi\) over any ideal triangulation \(\Delta\) is the set of all possible \(G\)-labelings of \(\Delta\) induced by \(\zeta:\Sigma\to BG\).
1. The forgetful map is clearly surjective on vertices and edges. In addition, given a flip \(\Delta\xrightarrow{F_{e}}\Delta^{\prime}\) and any \(G\)-triangulation \((\Delta,g)\in\pi^{-1}(\Delta)\), there is a \(G\)-flip \((\Delta,g)\xrightarrow{F^{G}_{e}}(\Delta^{\prime},g^{\prime})\) covering \(e\).
2. The Ptolemy complex is connected and simply connected by Theorem 4.4.
3. Let \(\Delta\) be any ideal triangulation of \(\Sigma\). The fiber \(\pi^{-1}(\Delta)\) is connected, as the gauge group \(\mathcal{G}_{\Delta}\) relates any two \(G\)-labelings induced by \(\zeta\). Due to relation **GP5**, any sequence of gauge transformations in \(\pi^{-1}(\Delta)\) is homotopic to a sequence of the form \(\lambda_{1}(v)\cdots\lambda_{n_{v}}(v)\lambda_{1}(w)\cdots\lambda_{n_{w}}(w)\lambda_{1}(u)\cdots\) running through all vertices of \(\Delta\) in arbitrary order. Furthermore, any two paths having the same start- and endpoint in \(\pi^{-1}(\Delta)\) and consisting entirely of different gauge transformations at the same vertex are homotopic. This follows from the observation that, due to **GP6**, the subcomplex spanned by gauge transformations at a single vertex of a \(G\)-triangulation is the \(2\)-skeleton of a \((|G|-1)\)-dimensional simplex and is therefore simply connected. It follows that \(\pi^{-1}(\Delta)\) is simply connected.
4. This directly follows from the mixed relation **GP4**.
5. Similar to the previous point, this holds since the boundaries of the \(2\)-cells **GP1**, **GP2** and **GP3** project to the boundaries of the \(2\)-cells **P1**, **P2** and **P3**.
2309.16798
Expert-sourcing Domain-specific Knowledge: The Case of Synonym Validation
One prerequisite for supervised machine learning is high-quality labelled data. Acquiring such data is, particularly if expert knowledge is required, costly or even impossible if the task needs to be performed by a single expert. In this paper, we illustrate tool support that we adopted and extended to source domain-specific knowledge from experts. We provide insight into design decisions that aim at motivating experts to dedicate their time to performing the labelling task. We are currently using the approach to identify true synonyms from a list of candidate synonyms. The identification of synonyms is important in scenarios where stakeholders from different companies and backgrounds need to collaborate, for example when defining and negotiating requirements. We foresee that the approach of expert-sourcing is applicable to any data labelling task in software engineering. The discussed design decisions and implementation are an initial draft that can be extended, refined and validated with further application.
Michael Unterkalmsteiner, Andrew Yates
2023-09-28T19:02:33Z
http://arxiv.org/abs/2309.16798v1
# Expert-sourcing Domain-specific Knowledge: The Case of Synonym Validation

###### Abstract

One prerequisite for supervised machine learning is high-quality labelled data. Acquiring such data is, particularly if expert knowledge is required, costly or even impossible if the task needs to be performed by a single expert. In this paper, we illustrate tool support that we adopted and extended to source domain-specific knowledge from experts. We provide insight into design decisions that aim at motivating experts to dedicate their time to performing the labelling task. We are currently using the approach to identify true synonyms from a list of candidate synonyms. The identification of synonyms is important in scenarios where stakeholders from different companies and backgrounds need to collaborate, for example when defining and negotiating requirements. We foresee that the approach of expert-sourcing is applicable to any data labelling task in software engineering. The discussed design decisions and implementation are an initial draft that can be extended, refined and validated with further application.

## 1 Introduction

The training and validation of natural language processing models that are based on supervised machine learning require data that is labelled by humans. Creating labelled data, in particular if it is domain-specific, is costly and can require expert knowledge. Furthermore, the lack of high-quality labelled data may prevent the transfer of an approach from one domain to another, simply because not enough labelled data exists to train the model [10]. Crowdsourcing platforms provide the possibility to harvest human intelligence that can be used for data labelling. While this works well for tasks that target humans' predisposition for pattern recognition, tasks for which domain-specific knowledge is required cannot be outsourced to an arbitrary crowd. Such tasks need to be designed such that a limited target group remains engaged with the data labelling task and experiences benefits from participation. In this paper, we provide some insight into an ongoing study and motivate the design decisions we made when adopting an existing crowdsourcing tool for our particular task: the validation of domain-specific synonym candidates.

## 2 Background

Our current research focuses on supporting requirements engineers in adopting an object classification system, CoClass1, from the construction business domain. The classification is planned to be used throughout the organization to identify and trace specified, designed, constructed and eventually maintained objects. CoClass is a hierarchical ontology of construction objects that provides a coding system, a definition and synonyms for each object. CoClass is still under development, and many object-to-synonym mappings are still incomplete. These mappings are, however, important for the use of the classification system, as they allow users with different backgrounds and vocabularies to find the objects they are looking for. Furthermore, we plan to use the ontology to automatically classify natural language requirements such that they can be traced during the life-cycle of a project.

Footnote 1: [https://coclass.bygggtjanst.se/en/about&about-coclass](https://coclass.bygggtjanst.se/en/about&about-coclass)

### Domain-specific synonym detection

In order to fill the synonym gaps in CoClass, we use a learning-to-rank approach for domain-specific synonym detection [23].
The basic idea of this supervised approach is to learn term associations from a domain-specific corpus, using features that indicate the synonymous use of a term. The approach produces a list of synonym candidates for each term defined in CoClass (1430 terms, each with 1000 synonym candidates). A preliminary evaluation of the candidates with a domain expert suggests that only \(\sim\)1% of the synonym candidates are true synonyms (10 in 1000). While this precision might seem underwhelming, automated synonym detection _is_ difficult and should be compared against its manual alternatives or evaluated against the cost of not discovering new synonyms at all.

## 3 Expert-sourcing synonym validation

Reviewing 1,430,000 synonym candidates would be a monumental task for an individual. While crowdsourcing [1] the task to the general public would be possible, it would likely not succeed, as the task is language-specific (Swedish) and domain-specific (construction business), limiting the pool of potential and reliable participants considerably. We therefore chose to use a crowdsourcing framework, Pybossa2, that allows us to control all aspects of the validation process: participants, data storage and task design. Pybossa provides important infrastructure for realizing a crowdsourcing project, such as task importing, management, scheduling and redundancy, user management, and results analysis. In addition, Pybossa provides a REST API and convenience functions that can be used for tasks, e.g. a media player for video/sound annotation tasks or a PDF reader for transcription tasks. In the remainder of the paper, we focus on the task design and the decisions that were made in order to make the validation process efficient and effective. The code for task presentation and analysis is available online3.

Footnote 2: [https://github.com/Scifabric/pybossa](https://github.com/Scifabric/pybossa)

Footnote 3: [https://github.com/munterkalmsteiner/pybossa-traifiverket-theme](https://github.com/munterkalmsteiner/pybossa-traifiverket-theme)

### Task design

The validation task is separated into two phases. In phase 1, the selection, the expert selects \(0..n\) synonyms from a list of candidates for a particular target term. In phase 2, the result, the expert receives feedback on his/her selection. Screenshots of the respective phases are shown in Figures 1 and 2. The red markers are inserted for referencing purposes and are used in the following discussion. Panel 1 in Figure 1 shows the target term for which the expert needs to select synonyms. In this area, we also show the hierarchical structure of CoClass under which the target term (transl.: fence) can be found (transl.: Components \(\gg\) Limiting objects \(\gg\) Access-limiting objects \(\gg\) Fence), including the coding that is used for such objects (R \(\gg\) RU \(\gg\) RUA). We also show the CoClass definition of the target term (transl.: access-restricting object formed by a horizontal elongated barrier with a vertical extent). The purpose is to provide context, to foster organizational learning [21], and to develop a common vocabulary that potentially reduces misunderstandings in the organization. Panel 2 in Figure 1 shows the list of candidate synonyms. We group candidate synonyms with affinity propagation clustering [1], measuring similarity with the Levenshtein distance. This reduces the perceived number of terms an expert has to inspect, as similar terms can be accepted/rejected in one task.
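To make the grouping step concrete, the following is a minimal sketch of it, assuming scikit-learn is available; the candidate terms and the use of negative edit distances as similarities are illustrative choices, not necessarily the exact configuration of our implementation.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Hypothetical candidate synonyms for one target term.
candidates = ["staket", "stakett", "stängsel", "stangsel", "plank"]

# Affinity propagation expects similarities; use negative distances.
similarity = np.array([[-levenshtein(a, b) for b in candidates]
                       for a in candidates], dtype=float)

clustering = AffinityPropagation(affinity="precomputed", random_state=0)
labels = clustering.fit_predict(similarity)

for term, label in zip(candidates, labels):
    print(label, term)  # terms sharing a label are presented together
```

Terms that receive the same cluster label are then presented together in panel 2, so that one decision can cover several spelling variants.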
If the expert is not sure about the meaning of the term or the synonym candidates, (s)he can skip the task and proceed to the next one.

Figure 1: The selection phase of the task

Figure 2: The results phase of the task

Panel 3 in Figure 1 shows the overall progress, i.e. the number of tasks done out of the total number of tasks. Once the expert has made a decision, the results for the particular task are stored and analysed in order to provide immediate feedback to the expert. An example of the analysis is shown in Panel 4 in Figure 2. In the second column of the results table, we show whether the selected term is a correctly identified or a missed actual synonym, according to the synonyms already defined in CoClass, or a newly identified synonym. In the third column, we show how well aligned the current expert is with other experts that have already performed the same task. For example, the selection of the expert in Figure 2 has missed the actual synonym “parkeringsplanka”, and so did another user. They agree that “parkeringsplanka” is not a synonym of “barriär”. However, two other experts had a different opinion, i.e. that “parkeringsplanka” is indeed a synonym of “barriär”. Once the tasks are completed, it is straightforward to identify new synonyms with a simple majority vote.

### Motivational aspects of expert-sourcing

When we designed the task, we considered how to create a win-win situation for the participating experts, the management which pays for the time spent on the task, and the researchers. There is some evidence that intrinsic motivation is more important than extrinsic motivation for crowdsourcing workers [10]. Figure 3 shows different aspects of motivation and their relative importance ranking (1-13), based on a survey of 431 Amazon Mechanical Turk workers. In the remainder of this section, we discuss our strategies to foster some aspects of worker motivation. The synonym selection task should transfer some knowledge to the participants. We provide that by showing term definitions as well as the CoClass hierarchy and code under which the term is found. This fosters individual learning as well as organizational learning, as it promotes a common vocabulary (Human Capital Advancement, i.e. motivation to enable training of skills). Similarly, the feedback on the results page helps individuals to understand how well they are aligned with their colleagues (Direct Job Feedback, i.e. motivation provided by the perception of achievement; Community Identification, i.e. the subconscious adoption of norms and values). For management, this could also be useful information, as it could indicate where adjustments in documentation or training are needed. Since we know exactly how much time each expert has spent on their tasks, we can quantify the cost of collecting synonyms and of potential terminology misalignments (Payment, i.e. motivation by monetary compensation). Such figures can help to get management buy-in when extending the study or replicating it in another organization. A potential threat to the validation of the synonyms is the results page, where we show the alignment of experts immediately after their choice (Task Identity, i.e. the extent to which a participant perceives that his/her work leads to a result). Therefore, we randomize the presentation of tasks (in blocks of five, i.e. after five tasks we change the target term), counteracting conscious or unconscious bias. Finally, we seed a true synonym if the expert did not select a synonym after 10 tasks in a row. The intention is both to keep the participant motivated by “finding” a synonym and to verify that the expert is still paying attention to the task and not submitting random answers.
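The following minimal sketch illustrates this scheduling logic in plain Python; the task representation is hypothetical and simplified, and in our deployment the equivalent behaviour sits behind Pybossa's task scheduling rather than standalone code.

```python
import random

class TaskScheduler:
    """Randomizes tasks in blocks of five per target term and seeds a
    task with a known true synonym after ten consecutive tasks for
    which the expert selected nothing."""

    BLOCK_SIZE = 5
    SEED_AFTER = 10

    def __init__(self, tasks_by_term, seed_tasks):
        blocks = []
        for tasks in tasks_by_term.values():
            shuffled = random.sample(tasks, len(tasks))
            blocks += [shuffled[i:i + self.BLOCK_SIZE]
                       for i in range(0, len(shuffled), self.BLOCK_SIZE)]
        random.shuffle(blocks)  # blocks in random order, terms interleaved
        self.queue = [task for block in blocks for task in block]
        self.seeds = list(seed_tasks)
        self.empty_streak = 0

    def next_task(self):
        if self.empty_streak >= self.SEED_AFTER and self.seeds:
            self.empty_streak = 0
            return self.seeds.pop(0)   # attention check with a true synonym
        return self.queue.pop(0) if self.queue else None

    def record_answer(self, selected_synonyms):
        """Call after each task; resets the streak on any selection."""
        self.empty_streak = 0 if selected_synonyms else self.empty_streak + 1
```

The seeded tasks look like ordinary tasks to the expert, which is what makes them usable as an attention check.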
In Figure 3, we highlight in **bold** typeface which motivational aspects we address. We briefly discuss which aspects are not covered. Task Autonomy refers to the degree to which creativity and own decisions are permitted by the task. The nature of data labelling tasks leaves little leeway, and creativity would rather be counterproductive; it would be difficult to design a task that caters for this motivational aspect. Skill Variety refers to the usage of different skills for solving a task that match the available skill set of the worker. One way to address this motivational aspect would be to segment the CoClass terms into themes that require specialized subdomain knowledge, matching a subset of participants' specialized background and expertise. Pastime refers to the motivation to do something in order to avoid boredom. One could argue that, since the synonym selection task can be performed on mobile devices (e.g. while riding the train to work), this motivational aspect is covered. On the other hand, the task _is_ work and part of the professional activities of an employee, making this motivational aspect not applicable to our context. We do not address any aspects from the range of social motivations. Indirect Job Feedback, i.e. motivation through feedback about the delivered work, for example through comments and other encouragements, could however be implemented. Finally, we do not yet use any form of gamification mechanisms. Leaderboards and level systems can be effective means to increase long-term engagement and the quality of output [11].

Figure 3: Model for worker’s motivation in crowdsourcing, adapted from Kaufmann et al. [10]

## 4 Conclusions and Future Work

In this paper, we suggest expert-sourcing as a means to acquire labelled data from domain experts. We illustrate the adoption of a crowdsourcing platform and the design of a data labelling task for domain-specific synonym identification such that it is engaging and useful for the participating experts. We are currently in the process of piloting the approach with select domain experts and gathering feedback on the task design. Once the task design has stabilized, we intend to deploy the data collection mechanism to approximately 500 participants. While we apply the approach to a narrow, specialised problem (synonym identification), the idea and the design decisions that cater for motivational aspects are generally applicable to any data labelling task in software engineering. One could design tasks to evaluate the quality of certain artefacts and use this assessment to train a classification algorithm, for example to evaluate the degree of ambiguity in statements of requirements specifications, the understandability of test cases, the identification of code refactorings, the detection of code smells, or the readability of source code.
2309.09690
Do learned speech symbols follow Zipf's law?
In this study, we investigate whether speech symbols, learned through deep learning, follow Zipf's law, akin to natural language symbols. Zipf's law is an empirical law that delineates the frequency distribution of words, forming a foundation for statistical analysis in natural language processing. Natural language symbols, which are invented by humans to symbolize speech content, are recognized to comply with this law. On the other hand, recent breakthroughs in spoken language processing have given rise to the development of learned speech symbols; these are data-driven symbolizations of speech content. Our objective is to ascertain whether these data-driven speech symbols follow Zipf's law in the same way as natural language symbols do. Through our investigation, we aim to forge new ways for the statistical analysis of spoken language processing.
Shinnosuke Takamichi, Hiroki Maeda, Joonyong Park, Daisuke Saito, Hiroshi Saruwatari
2023-09-18T11:56:10Z
http://arxiv.org/abs/2309.09690v1
# Do Learned Speech Symbols Follow Zipf's Law?

###### Abstract

In this study, we investigate whether speech symbols, learned through deep learning, follow Zipf's law, akin to natural language symbols. Zipf's law is an empirical law that delineates the frequency distribution of words, forming a foundation for statistical analysis in natural language processing. Natural language symbols, which are invented by humans to symbolize speech content, are recognized to comply with this law. On the other hand, recent breakthroughs in spoken language processing have given rise to the development of learned speech symbols; these are data-driven symbolizations of speech content. Our objective is to ascertain whether these data-driven speech symbols follow Zipf's law in the same way as natural language symbols do. Through our investigation, we aim to forge new ways for the statistical analysis of spoken language processing.

Shinnosuke Takamichi, Hiroki Maeda, Joonyong Park, Daisuke Saito, and Hiroshi Saruwatari. The University of Tokyo, Japan.

speech analysis, Zipf's law, generative spoken language model, speech representation

## 1 Introduction

Zipf's law, a well-known empirical principle, delineates the frequency of occurrence of elements within a dataset [1]. Specifically, when the occurrence frequency of an element ranks as the \(k\)-th highest within a dataset, it equates to \(1/k\) of the frequency of the most frequently occurring element. This law is observed to be applicable across various data domains, and natural language symbols (e.g., words) follow this pattern [2]. To illustrate, the third most frequent word in an English document, "and", appears approximately one-third as often as the most frequent word, "the"1. When analyzing the occurrence frequency of natural language symbols within a text corpus, one can denote the frequency rank as \(r\) and the corresponding occurrence frequency as \(f_{r}\). Consequently, the following relationship is given by Zipf's law:

Footnote 1: [https://www.cs.cmu.edu/~cburch/words/top.html](https://www.cs.cmu.edu/~cburch/words/top.html)

\[f_{r}=ar^{-\eta}, \tag{1}\]

where \(a\) and \(\eta\) are model parameters. The text corpus follows Zipf's law when \(\eta\approx 1\), and follows a more general power law otherwise. In essence, Zipf's law is a specific type of power law. When a corpus follows a power law, the log-log plot of rank against frequency appears linear; a minimal sketch of fitting this relationship is given after the examples below. By examining the adherence to or deviation from Zipf's law, one can analyze the distinct characteristics of a text corpus. This analysis finds notable applications in natural language processing, as illustrated by the following examples [3]:

**Infants' language acquisition**: The vocabulary that 2- to 4-year-olds acquire tends to gravitate towards high-frequency words, a phenomenon indicated by a rank-frequency distribution that is convex rather than linear [4].

**Writing system variations across languages**: Writing systems differ significantly from one language to another, ranging from sound-based phonograms to semantic-based logograms. Tanaka observed that as notation shifts from phonogram to logogram, the rank-frequency distribution transitions to follow a power law; phonographic languages display a convex shape, while logographic languages demonstrate a linear one on the plot [3].

**Quantification of communication effort**: Zipf's law is also recognized as the _principle of least effort_ [5]. This principle stipulates that the utilization of frequently used words minimizes speaking and listening efforts during human communication. Viewing communication through this perspective allows for the potential measurement of the naturalness exhibited in communication, including emergent communication [6] and machine-generated communication [7].
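As a concrete illustration of (1), the following minimal sketch builds a rank-frequency distribution from tokenized text and fits \(\eta\) by least squares in log-log space; the toy corpus is a placeholder, and a real analysis would of course use a full corpus.

```python
import numpy as np
from collections import Counter

tokens = "the cat sat on the mat and the dog sat on the rug".split()  # toy corpus

# Rank-frequency distribution: rank 1 is the most frequent symbol.
freqs = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1, dtype=float)

# f_r = a * r^(-eta)  <=>  log f_r = log a - eta * log r,
# so a straight-line fit in log-log space recovers both parameters.
slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), deg=1)
eta, a = -slope, np.exp(intercept)

print(f"eta = {eta:.2f}, a = {a:.2f}")  # eta close to 1 indicates Zipf's law
```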
Meanwhile, recent advancements in deep learning, such as self-supervised learning, have facilitated the discovery of discrete symbol representations [8, 9, 10]. These representations are learned in a data-driven manner from speech. In contrast to traditional signal-processing-based representations, such as mel-spectrograms, the symbol sequences learned through these methods capture a wealth of phonetic and semantic content. While natural language symbols were _crafted by humans_ to encode the contents of speech, the learned speech symbols can be seen as their data-driven counterparts, that is, symbols crafted in a data-driven manner. With this context in mind, we are led to a research question: "Does Zipf's law, which holds true for natural language symbols, also apply to learned speech symbols?" Verifying this hypothesis could potentially unveil the capability to extend the statistical analysis techniques traditionally employed in natural language analysis to spoken language analysis. Moreover, since this methodology bypasses the necessity for transcriptions, it might forge a path towards textless analysis applicable to a diverse range of sounds, encompassing non-verbal vocalizations and non-speech sounds (further details are elaborated in Section 5). In this study, we conduct experiments to address the posed question, utilizing the generative spoken language model (GSLM) [8], a variant of speech symbol representation methodology. Initially, we perform a foundational experiment to ascertain whether the learned speech symbols follow Zipf's law. Leveraging a speech corpus comprising paired text and speech, we analyze symbol rank-frequency distributions by correlating them with the accompanying text. Subsequently, we explore whether our textless analysis identifies deviations in the non-textual contents of speech. Specifically, we aim to capture variations in language fluency between native and non-native speech utterances. Our efforts endeavor to pave a way for comprehensive speech analysis.

## 2 Generative spoken language model (GSLM)

Figure 1 illustrates the GSLM [8], an analysis-synthesis system operating through discrete speech symbols. This system is structured into three modules: speech2unit, unit language model, and unit2speech. However, this study solely utilizes the speech2unit module. This module integrates a pre-trained self-supervised learning (SSL) model, built on technologies such as contrastive predictive coding [11], wav2vec2.0 [12], or HuBERT [13], in conjunction with a \(k\)-means clustering model. The SSL model remains fixed, while the \(k\)-means clustering model is trained using a speech corpus. Within this setup, the speech2unit module transforms a speech waveform into a sequence of discrete symbols. Symbols are generated at intervals of \(20\) ms, an interval generally shorter than the duration of a phoneme. Consequently, the speech2unit frequently predicts identical symbols in a row. To remove this redundancy, we consolidate runs of consecutive identical symbols into a single symbol. For instance, if the speech2unit outputs a sequence like \([3,3,3,50,200,200]\) (where the numbers indicate symbol indices), we simplify this to \([3,50,200]\).
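This run-length deduplication is straightforward; a minimal sketch in Python:

```python
from itertools import groupby

def deduplicate(symbols):
    """Collapse runs of consecutive identical symbols into one symbol."""
    return [symbol for symbol, _ in groupby(symbols)]

assert deduplicate([3, 3, 3, 50, 200, 200]) == [3, 50, 200]
```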
It is important to note that the speech2unit, particularly the \(k\)-means clustering model, is highly sensitive to the language of the input speech. Hence, it is imperative that the model and the encoded speech align in terms of language.

## 3 Methodology

We delineate two methodologies employing GSLM speech symbols to explore Zipf's law.

**Examining the applicability of Zipf's law to speech symbols.** This methodology is conducted utilizing text-speech pairs. For text, we employ natural language symbols, represented either as words or as character \(n\)-grams. Words are extracted through morphological analysis to discern their original forms, while character \(n\)-grams are sequences of \(n\) consecutive characters. For speech, we obtain speech symbols as outlined in Section 2 and calculate speech symbol \(n\)-grams, that is, sequences of \(n\) consecutive speech symbols. We determine the value of \(n\) from the ratio of the lengths of speech symbol sequences to natural language symbol sequences. Intuitively, \(n\) signifies the average number of speech symbols corresponding to a single natural language symbol. We then assess whether Zipf's law holds for speech symbol \(n\)-grams when the corresponding text adheres to the law.

**Identifying non-textual deviations from Zipf's law.** As explained in Section 1, identifying deviations from this law can be a method to pinpoint non-standard word usage. In this methodology, we explore detecting non-standard speech patterns based on the rank-frequency distribution. By comparing the distributions of standard and non-standard speech, we aim to identify how non-standard speech deviates from standard speech.

## 4 Experimental evaluation

### 4.1 Experimental condition

We utilized HuBERT [13], which was trained on LibriSpeech [14], as the SSL model in the GSLM speech2unit. The language-specific \(k\)-means clustering models were trained using JSUT/JVS [15], J-KAC [16], and J-MAC [17] for Japanese, and LibriSpeech for English. The number of classes was set to \(200\) for both languages. The models, implemented using the fairseq toolkit [18], are publicly available23. The speech sampling frequency was set at \(16\) kHz, and speech symbols were extracted every \(20\) ms. Given that high-frequency and low-frequency items often deviate from Zipf's law [3], we estimated the model parameters \(a\) and \(\eta\) based only on the top \(0.1\)% to \(10\)% of the frequencies. These model parameters were determined using the least-squares method. For comparative analysis, we also fixed \(\eta\) at \(1.0\) while estimating only \(a\), which corresponds to strict adherence to Zipf's law in the rank-frequency distribution. To reduce the file size of the figures, we thinned out the data points to be plotted.

Footnote 2: [https://huggingface.co/nonmetal/gslm-japanese](https://huggingface.co/nonmetal/gslm-japanese) (Japanese) [19]

Footnote 3: [https://github.com/facebookresearch/fairseq/tree/main/examples/textless_nlp/gslm](https://github.com/facebookresearch/fairseq/tree/main/examples/textless_nlp/gslm) (English)

To verify Zipf's law in Section 4.2, we utilized approximately \(7,600\) Japanese utterances from JSUT [15] and \(13,000\) English utterances from LJSpeech [14]. For word tokenization, MeCab4 and NLTK5 served as morphological analyzers. In the character \(n\)-gram analysis, we used a character set encompassing Chinese/Japanese characters and marks for Japanese, and lowercase alphabets, symbols (e.g., ":", "?"), and whitespace for English.
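To make the \(n\)-gram procedure from Section 3 concrete, the following minimal sketch computes \(n\) as the ceiling of the length ratio and counts speech symbol \(n\)-grams; the paired toy data is purely illustrative.

```python
import math
from collections import Counter

def ngrams(sequence, n):
    """All sequences of n consecutive symbols."""
    return [tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1)]

def determine_n(text_sequences, symbol_sequences):
    """Ceiling of the average number of speech symbols per text symbol."""
    text_len = sum(len(s) for s in text_sequences)
    symbol_len = sum(len(s) for s in symbol_sequences)
    return math.ceil(symbol_len / text_len)

# Hypothetical paired data: words per utterance and deduplicated symbols.
words = [["good", "morning"], ["hello"]]
symbols = [[3, 50, 200, 7, 9, 11, 50, 3, 6], [8, 1, 4, 70, 2, 9, 13, 5, 12]]

n = determine_n(words, symbols)                        # here: ceil(18 / 3) = 6
counts = Counter(g for seq in symbols for g in ngrams(seq, n))
```

The resulting counts, sorted in descending order, give the rank-frequency distribution that is then fitted as in the sketch of Section 1.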
The average number of characters per word was \(1.6\) (ja) and \(5.1\) (en), while the average number of speech symbols per character stood at \(5.7\) (ja) and \(1.9\) (en). Moreover, the average number of speech symbols per word was \(8.9\) (ja) and \(9.0\) (en). The value of \(n\) in both the character \(n\)-gram and the speech symbol \(n\)-gram analyses was set to the ceiling of these values, for instance, \(\lceil 1.6\rceil=2\) for Japanese characters.

Footnote 4: [https://tauku910.github.io/mecab/](https://tauku910.github.io/mecab/)

For identifying non-textual deviation in Section 4.3, we employed native and Japanese-accented English utterances from UME-ERJ6. The corpus contains about \(20\) native and \(200\) non-native speakers, each reading approximately 300-500 English sentences. These sentences varied between speakers but maintained a balanced phoneme distribution. Non-native speakers were assigned a language fluency score on a five-point scale. Based on these scores, we categorized non-native speakers into three groups: low- (\(\mathrm{score}<3.0\)), mid- (\(3.0\leq\mathrm{score}<3.5\)), and high-level (\(\mathrm{score}\geq 3.5\)). These groups consisted of \(74\), \(66\), and \(46\) speakers, respectively. We aggregated speech symbols separately for each non-native speaker group and for the native speakers. To equalize data size across groups, we randomly selected \(10,000\) utterances per group. For symbol encoding, we used the English GSLM speech2unit.

Footnote 5: [https://www.nltk.org/](https://www.nltk.org/)

Footnote 6: [https://research.nii.ac.jp/src/en/UME-ERJ.html](https://research.nii.ac.jp/src/en/UME-ERJ.html)

Figure 1: Generative spoken language model.

### 4.2 Verifying Zipf's law of speech symbols

We verify the law via three steps: word, character \(n\)-gram, and speech symbol \(n\)-gram.

#### 4.2.1 Word

First, we verify that the words in the corpora we used adhere to Zipf's law. Figure 2 illustrates the rank-frequency distributions in both Japanese and English. These distributions appear linear, and the value of \(\eta\) is close to \(1.0\). Therefore, we can say that the words follow Zipf's law and that the corpora used exhibit the universal statistics in terms of word distribution. A minor observation, as highlighted in previous studies [3], is that the high-frequency (\(\mathrm{rank}<20\)) and low-frequency (\(10^{3}<\mathrm{rank}\)) items deviate from the regression line.

#### 4.2.2 Character \(n\)-gram

Next, we explore another natural language symbol: the character \(n\)-gram. Figure 3 shows these distributions. When analyzed for the same value of \(n\), differences between languages become evident. In Japanese, the distribution attains linearity at \(n=2\), whereas in English, it displays a convex shape. The English distribution gradually transitions to a linear shape as \(n\) increases; at \(n=6\), it remains convex but closely resembles a linear trend. At \(n=2\) for Japanese and \(n=6\) for English, which correspond to a word, the distributions of character \(n\)-grams follow a power law rather than Zipf's law.

#### 4.2.3 Speech symbol \(n\)-gram

Finally, we explore whether speech symbols follow Zipf's law by comparing the results with those of words and character \(n\)-grams. Figure 4 shows these distributions. **Difference between languages.** When analyzed for the same value of \(n\), we find that the distributions are almost identical.
These results suggest that the distributions of consecutive speech symbols, which indicate the frequency of speech segment use, are language-independent, at least between Japanese and English. The distributions for English are slightly shifted upwards compared to those for Japanese. This discrepancy is due to the difference in data size: the English corpus is twice as large as the Japanese corpus.

**Comparison at \(n\) corresponding to a character.** \(n=6\) for Japanese and \(n=2\) for English correspond to a character. We observe that the distributions where \(n\) corresponds to a character maintain shapes similar to those of the character 1-gram. Specifically, the distribution for \(n=6\) in Japanese is nearing linearity (but not \(\eta=1.0\)), while the distribution for \(n=2\) in English is convex. (Although not illustrated in Figure 3, the distributions of character 1-grams resemble those of the 2-grams: they are linear for Japanese and convex for English.) This difference can be traced back to differences in writing systems, as discussed in Section 1: Japanese and English are closer to logographic and phonographic languages, respectively. The closer a symbol is to representing semantic rather than phonetic information, the more linear the distribution becomes. Our findings suggest that this trend holds true for speech symbols and that the speech symbol \(n\)-gram might statistically reflect the sound or meaning of a character. The ability to attain character statistics without relying on characters also facilitates analysis based on Zipf's law (and the power law) without involving characters.

**Comparison at \(n\) corresponding to a word.** \(n=9\) corresponds to a word for both Japanese and English. As \(n\) increases, the distributions gravitate towards linearity (though not \(\eta=1.0\)). We theorize that this phenomenon occurs because, as noted previously, \(n\)-grams tend to represent semantic information as \(n\) increases. Although \(\eta\) is not \(1.0\), the distribution of \(9\)-grams is linear, mirroring the word distribution observed in Figure 2. This suggests that our methodology holds promise for statistically analyzing features associated with words without utilizing words themselves.

### 4.3 Identifying non-textual deviation

Figure 5 displays the speech symbol \(n\)-gram (\(n=1,3,5,7\)) frequencies for both native and non-native speakers. We delve into the distinctions between native and non-native speakers based on the data presented in the figure.

**Linearization by increasing \(n\).** The distribution tends to linearize as \(n\) increases, a trend observed in both native and non-native speakers. Since the corpus utilized in this experiment employs the same reading text for both groups, language fluency does not influence the linguistic content. Consequently, the linearization is anticipated to be driven by the textual information (particularly the semantic information hypothesized in Section 4.2), rather than language fluency.

Figure 2: Word rank-frequency distributions. left: Japanese, right: English.

Figure 3: Character \(n\)-gram rank-frequency distributions. left: Japanese, right: English. \(n=2\) in Japanese and \(n=6\) in English correspond to a word.

Figure 4: Speech symbol \(n\)-gram rank-frequency distributions. left: Japanese, right: English. \(n=6\) in Japanese and \(n=2\) in English correspond to a character. \(n=9\) corresponds to a word.
**Deviation of non-native speakers.** When we examine the deviations of non-native speakers compared to native speakers, we notice deviations correlated with language fluency. Specifically, non-native speakers tend to use high-frequency symbols more frequently. These findings indicate that Zipf's law (and the power law) can be employed to discern differences between standard and non-standard speech with regard to language fluency. Further exploration is required to ascertain the specific nature of these deviations. Intriguingly, speakers with high-level proficiency diverge more from native speakers. This counters the intuitive expectation that individuals with low-level fluency would deviate more significantly.

## 5 Conclusion

In this paper, we investigated whether speech symbols follow Zipf's law. Through our experiments, we determined that: 1) speech symbol \(n\)-grams corresponding to a word follow a power law rather than Zipf's law, and 2) non-textual deviations in non-standard speech can be identified through the power-law behavior of the rank-frequency distributions. Our research paves the way for textless analysis methods applicable to various audio data. The following are possible directions for future research.

* **Language development.** As outlined in Section 1, Zipf's (or the power) law can potentially aid in analyzing infants' vocabulary acquisition. Despite the existing challenges in robust automatic speech recognition of infants' voices [20], our approach can encode voices into symbols without transcriptions. This enables statistical analyses utilizing voices exclusively.
* **Animal voices and non-speech audio.** Zipf's law has been demonstrated to apply to symbolic audio, including animal calls [21] and music scores [22]. While current research often relies on human-invented symbols and annotations, our method could potentially extend to audio sources beyond human speech without using such human-invented symbols and annotations. General-purpose audio representation models [23] could be employed for this purpose.
* **Emergent speech communication.** The analysis of emergent languages constitutes a significant area of research, with objectives including 1) fostering language-based machine-machine communication (i.e., communication between artificial intelligences) [24, 7], and 2) evaluating the extent to which these communications mirror the statistics of human-to-human communication [25, 6]. Given that Zipf's law (and related principles, such as Zipf's law of abbreviation [26]) embodies the principle of least effort in human communication, it may offer a way to explore machine-machine communication [27]. Our method holds the potential to forge a way in examining speech-based machine-machine communication, potentially benefiting the exploration of human spoken language emergence and human-machine communication.